Streaming & Event-Driven Systems Flashcards

(45 cards)

1
Q

What is streaming data in the context of data engineering?

A

A continuous flow of records generated over time, often as events such as clicks, sensor readings, or log entries.

2
Q

How does streaming processing differ from batch processing?

A

Streaming processes data continuously as it arrives with low latency, while batch processes finite chunks at scheduled intervals.

3
Q

What is an event in an event-driven system?

A

A record representing something that happened at a specific time, often including a key, timestamp, and payload.

4
Q

What is a message broker or log-based streaming system?

A

A system that accepts, stores, and delivers ordered streams of messages or events to consumers, often partitioned for scalability.

5
Q

What is a topic in a streaming system?

A

A named stream or category of messages that producers write to and consumers read from.

6
Q

What is a partition in a topic?

A

An ordered, append-only sequence of messages that forms a shard of a topic for parallelism and scaling.

7
Q

Why are partitions used in streaming systems?

A

To distribute load across brokers and consumers and enable parallel reads and writes for scalability.

8
Q

What is an offset in a partition?

A

A monotonically increasing position that uniquely identifies a message’s location within a partition.

9
Q

What is a consumer group?

A

A set of consumers that coordinate to share the partitions of a topic so that each partition is consumed by only one member at a time.

10
Q

Why are consumer groups useful?

A

They allow horizontal scaling of consumption while providing a way to process each message once per group.

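To make the sharing concrete, here is a minimal Python sketch of round-robin partition assignment within a group, so each partition ends up with exactly one owning consumer (function and consumer names are illustrative, not any real broker's protocol):

```python
def assign_partitions(partitions, consumers):
    """Round-robin assignment: spread a topic's partitions across the
    group's consumers so each partition has exactly one owner."""
    members = sorted(consumers)
    assignment = {c: [] for c in members}
    for i, p in enumerate(sorted(partitions)):
        assignment[members[i % len(members)]].append(p)
    return assignment

# Two consumers share four partitions; every partition has one owner.
print(assign_partitions([0, 1, 2, 3], ["c1", "c2"]))
# {'c1': [0, 2], 'c2': [1, 3]}
```

If a consumer joins or leaves, a real broker would rerun an assignment like this (a "rebalance") over the new membership.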
11
Q

What is at-most-once delivery semantics?

A

Messages are delivered zero or one time; they are never delivered more than once but may be lost.

12
Q

What is at-least-once delivery semantics?

A

Messages are delivered one or more times; they are not lost but may be processed more than once.

13
Q

What is exactly-once processing semantics (conceptually)?

A

Guaranteeing that each message’s effect is applied logically once, even if the underlying system uses retries or duplicates.

14
Q

Why is exactly-once processing hard to achieve in practice?

A

It often requires coordination across storage, compute, and sinks, idempotent operations, or transactional guarantees across systems.

15
Q

Why is at-least-once delivery commonly used in streaming pipelines?

A

It prioritizes durability and correctness, accepting duplicates that can be handled with idempotent logic or deduplication.

16
Q

What is idempotent processing in streaming?

A

Designing consumers so that processing the same message multiple times has the same effect as processing it once.

17
Q

What techniques help implement idempotent processing?

A

Using unique event IDs and upserts, tracking processed offsets or IDs, and designing sinks to ignore duplicates based on keys.

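The first two techniques can be sketched together in a few lines of Python: deduplicate on a unique event ID, and write with an upsert keyed by the event's key (all names here are illustrative):

```python
def process(event, store, seen_ids):
    """Apply an event's effect at most once logically: redeliveries
    (expected under at-least-once) are detected by event ID and
    skipped; writes are upserts keyed by the event's key."""
    if event["id"] in seen_ids:
        return  # duplicate redelivery: no effect
    seen_ids.add(event["id"])
    store[event["key"]] = event["value"]  # upsert by key

store, seen = {}, set()
events = [
    {"id": "e1", "key": "user-1", "value": 10},
    {"id": "e1", "key": "user-1", "value": 10},  # redelivered duplicate
    {"id": "e2", "key": "user-1", "value": 12},
]
for e in events:
    process(e, store, seen)
print(store)  # {'user-1': 12}
```

In production the `seen_ids` set would itself need durable, bounded storage (for example, keyed state with a retention window).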
18
Q

What is event time?

A

The time at which an event actually occurred, as recorded in the event payload.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

What is processing time?

A

The time at which an event is processed by the system, which may lag behind event time due to delays or reordering.

20
Q

Why is distinguishing event time and processing time important?

A

Because analysis and windows should usually be based on when events happened, not when they were processed, especially with late data.

21
Q

What is late-arriving data in streaming systems?

A

Events that arrive after the system has already processed or closed the time window corresponding to their event time.

22
Q

What is a watermark in stream processing?

A

A marker indicating that the system believes it has seen all events up to a certain event time, used to decide when to close windows.

23
Q

Why are watermarks used?

A

To balance waiting for late data against providing timely results by defining when windows can be considered complete.

24
Q

What is a window in streaming analytics?

A

A finite time range over which events are grouped for aggregation, such as 5-minute or hourly windows.

25
Q

What are tumbling windows?

A

Non-overlapping, fixed-size windows that partition the timeline, such as consecutive 1-minute intervals.
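Because tumbling windows tile the timeline, each event time maps to exactly one window, which a short Python sketch can show (illustrative names, times as integer seconds):

```python
def tumbling_window(event_time, size):
    """Map an event time to its single non-overlapping, fixed-size
    window [start, end). Windows tile the timeline with no overlap."""
    start = (event_time // size) * size
    return (start, start + size)

# 1-minute (60 s) tumbling windows:
print(tumbling_window(65, 60))   # (60, 120)
print(tumbling_window(119, 60))  # (60, 120)
print(tumbling_window(120, 60))  # (120, 180)
```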
26
Q

What are sliding windows?

A

Windows of fixed size that advance by smaller steps, causing overlap, such as a 5-minute window that slides every 1 minute.
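Unlike tumbling windows, a sliding window assigns one event to several overlapping windows; a minimal Python sketch (illustrative names, integer times, window starts assumed non-negative):

```python
def sliding_windows(event_time, size, slide):
    """All [start, end) windows of length `size`, advancing by `slide`,
    that contain the event. Overlap means one event can fall into
    several windows (roughly size/slide of them)."""
    windows = []
    start = (event_time // slide) * slide  # latest window containing t
    while start + size > event_time and start >= 0:
        windows.append((start, start + size))
        start -= slide
    return sorted(windows)

# A 10 s window sliding every 5 s: the event at t=12 lands in two
# overlapping windows.
print(sliding_windows(12, 10, 5))  # [(5, 15), (10, 20)]
```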
27
Q

What are session windows?

A

Windows defined by periods of activity separated by gaps of inactivity, used to group events into sessions rather than fixed intervals.
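Gap-based session grouping can be sketched in a few lines of Python (illustrative names; events pre-sorted by time for simplicity):

```python
def session_windows(event_times, gap):
    """Group event times into sessions: a new session starts whenever
    the silence since the previous event exceeds `gap`."""
    sessions = []
    for t in sorted(event_times):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # still within the activity gap
        else:
            sessions.append([t])    # inactivity gap: new session
    return sessions

# With a 30 s inactivity gap, these six events form three sessions.
print(session_windows([0, 10, 25, 100, 110, 200], gap=30))
# [[0, 10, 25], [100, 110], [200]]
```

Note that, unlike tumbling or sliding windows, session boundaries are data-driven: they cannot be computed before the events arrive.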
28
Q

What is stateful streaming processing?

A

Processing that keeps and updates state across events, such as counts, aggregates, or keyed state for joins and windows.

29
Q

What is a state store in streaming architectures?

A

A storage layer (in-memory, local disk, or external) used by stream processors to keep per-key or per-window state.

30
Q

Why does state management complicate streaming applications?

A

State must be checkpointed, recovered on failure, and scaled across nodes without losing consistency or performance.

31
Q

What is backpressure in streaming systems?

A

A condition where downstream operators cannot keep up with incoming data rates, causing queues to grow and slowing upstream producers.

32
Q

How can streaming systems handle backpressure?

A

By throttling producers, buffering with limits, scaling out consumers, or applying load shedding in extreme cases.

33
Q

What is a dead-letter queue (DLQ) in streaming pipelines?

A

A separate stream where messages that cannot be processed successfully after retries are sent for later inspection and remediation.

34
Q

Why are DLQs useful?

A

They prevent problematic messages from blocking the main stream and allow targeted manual or automated handling of bad records.

35
Q

What is stream–table duality?

A

The idea that a changelog stream can be viewed as a table evolving over time, and a table can be viewed as the result of accumulating a stream.
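The "table as accumulated stream" direction of the duality is easy to sketch in Python: folding a changelog into latest-value-per-key state (names illustrative; a `None` value is treated as a delete, as tombstones often are in changelogs):

```python
def materialize(changelog):
    """Fold a changelog stream into a table: the table is the latest
    value per key; a None value acts as a delete (tombstone)."""
    table = {}
    for key, value in changelog:
        if value is None:
            table.pop(key, None)  # tombstone removes the key
        else:
            table[key] = value    # newer value overwrites older
    return table

stream = [("a", 1), ("b", 2), ("a", 3), ("b", None)]
print(materialize(stream))  # {'a': 3}
```

The other direction also holds: capturing every update to `table` as a `(key, value)` record reproduces a changelog stream.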
36
Q

What is a stream–stream join?

A

Joining two continuous streams of events, typically using time windows and keys to correlate events that occur within a time range.

37
Q

What is a stream–table join?

A

Joining a stream of events to a lookup table or state representing the latest values for keys, often used to enrich events with reference data.
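A stream–table join amounts to a per-event lookup against current state, which a small Python sketch can show (field names like `user`, `amount`, and `region` are invented for illustration):

```python
def enrich(events, table):
    """Stream-table join: enrich each event with the table's current
    value for its key (None if the key is unknown)."""
    for event in events:
        yield {**event, "region": table.get(event["user"])}

# `users` plays the role of materialized latest-value state per key.
users = {"u1": "EU", "u2": "US"}
events = [{"user": "u1", "amount": 5}, {"user": "u3", "amount": 7}]
for enriched in enrich(events, users):
    print(enriched)
```

In a real engine the table side is usually itself maintained from a changelog stream, so enrichment reflects the most recent value at processing time.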
38
Q

Why are ordering guarantees important in streaming?

A

Many computations, such as aggregations and state updates, assume events for a given key arrive in order; out-of-order data complicates logic.

39
Q

How is ordering typically defined in partitioned topics?

A

Ordering is guaranteed only within a partition, not across partitions; events may arrive out of order by event time even within a partition.

40
Q

What is a common partitioning strategy for topics?

A

Partitioning by a key such as customer ID or device ID so that all events for a key go to the same partition.
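Key-based partitioning is typically hash-based; a minimal Python sketch (a stable hash such as MD5 is used here because Python's built-in `hash()` is salted per process; the function name is illustrative):

```python
import hashlib

def partition_for(key, num_partitions):
    """Hash the key to a partition, so all events for the same key
    land on the same partition and stay ordered relative to each
    other."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest, "big") % num_partitions

# The same key always maps to the same partition:
p1 = partition_for("customer-42", 8)
p2 = partition_for("customer-42", 8)
print(p1 == p2)  # True
```

Note that changing `num_partitions` remaps keys, which is one reason resizing a keyed topic is disruptive.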
41
Q

What is a trade-off when choosing a partition key?

A

Keys that are too skewed create hot partitions; keys that are too random may complicate aggregation and join patterns.

42
Q

What is micro-batching in streaming systems?

A

Processing data in small, time-bounded batches that approximate streaming behavior while using batch execution engines.

43
Q

What is end-to-end latency in streaming pipelines?

A

The time between when an event occurs and when its derived results become available to consumers.

44
Q

Why is monitoring lag important in streaming systems?

A

Lag indicates how far behind consumers are from the head of the stream, revealing performance issues or backlogs.
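Consumer lag is just the per-partition gap between the head of the log and the consumer's committed offset, as this Python sketch shows (illustrative names and offsets):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag: how many messages sit between the consumer's
    committed position and the head (log end offset) of each
    partition. An uncommitted partition counts from offset 0."""
    return {p: end_offsets[p] - committed_offsets.get(p, 0)
            for p in end_offsets}

end = {0: 1500, 1: 900}        # latest offsets written by producers
committed = {0: 1450, 1: 900}  # offsets the consumer has committed
print(consumer_lag(end, committed))  # {0: 50, 1: 0}
```

A lag that grows without bound is the classic symptom of backpressure or an under-provisioned consumer group.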
45
Q

What is a good one-sentence mental model for streaming data engineering?

A

Continuously consume ordered event streams, manage state and time carefully, and design idempotent, backpressure-aware pipelines that can handle out-of-order and late data.