Interview 1 Flashcards

(29 cards)

1
Q

Tell me about a time when you improved an existing .NET application or system. What problem did you identify, what solution did you implement, and what was the outcome?


A

In a previous role, I worked on a backend system where a UI page had more than 20 components, and each component executed SQL queries on page load. This caused performance issues and slow page rendering. I implemented a solution to load data on demand, so data would only be retrieved when a user interacted with a component. This change made the application 3–4 times faster. Through this, I learned the importance of optimizing data access and improving user experience.

2
Q

Can you explain how dependency injection works in ASP.NET Core and give an example of how you have used it in a project?

A

“ASP.NET Core has a built-in dependency injection system, which allows you to register services with their interfaces and inject them where needed. The most common type is constructor injection, but property and method injection are also possible. You can also define lifetimes: transient, scoped, and singleton. DI helps follow the Dependency Inversion Principle and makes testing easier because you can inject mock implementations. For example, in a project, I registered an IEmailService as a singleton and injected it into my NotificationController via constructor injection to handle sending emails efficiently.”
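
A minimal sketch of that registration and injection (IEmailService, SmtpEmailService, and the endpoint are illustrative names, not code from a real project; the ASP.NET Core Web SDK is assumed):

```csharp
using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Lifetimes: AddSingleton = one instance for the app's lifetime,
// AddScoped = one per HTTP request, AddTransient = new instance every resolve.
builder.Services.AddSingleton<IEmailService, SmtpEmailService>();

var app = builder.Build();
app.MapControllers();
app.Run();

public interface IEmailService { Task SendAsync(string to, string body); }

public class SmtpEmailService : IEmailService
{
    public Task SendAsync(string to, string body) => Task.CompletedTask; // stub
}

// Constructor injection: the container supplies IEmailService automatically.
public class NotificationController : ControllerBase
{
    private readonly IEmailService _email;
    public NotificationController(IEmailService email) => _email = email;

    [HttpPost("notify")]
    public async Task<IActionResult> Notify(string to)
    {
        await _email.SendAsync(to, "Your order has shipped");
        return Ok();
    }
}
```

For testing, the controller can be constructed directly with a mock IEmailService, which is the testability benefit mentioned above.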

3
Q

In ASP.NET Core, how do you handle long-running tasks or external API calls without blocking the request thread? Can you explain your approach and maybe give an example from your experience?


A

In ASP.NET Core, long-running tasks or external API calls should be handled asynchronously using async/await. This allows the thread to be released while waiting for the operation to complete, so it can handle other requests. For example, in one project, I called an external REST API asynchronously using await httpClient.GetAsync(). This approach ensured that our application remained responsive and could handle many concurrent requests efficiently.
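
The awaited external call described above might look like this in context (WeatherClient and the URL are illustrative names, not from the original project):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class WeatherClient
{
    private readonly HttpClient _http;
    public WeatherClient(HttpClient http) => _http = http;

    // While the awaited call is in flight, the request thread returns to the
    // pool to serve other requests instead of blocking on I/O.
    public async Task<string> GetForecastAsync(string url)
    {
        HttpResponseMessage response = await _http.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

In a real app the HttpClient would come from IHttpClientFactory so connections are pooled correctly.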

4
Q

In a high-traffic ASP.NET Core application, you notice that memory usage is growing unexpectedly over time. How would you identify and fix memory leaks in your application? Can you explain your approach and any tools you might use?

A

“If I notice memory usage growing unexpectedly in a high-traffic ASP.NET Core application, I first use a memory profiler, like dotMemory or Visual Studio Diagnostics, to identify which objects are not being released. Then, I check if unmanaged resources are disposed properly, either via implementing IDisposable or using using statements. For example, in a project, FileStream objects were not being disposed, causing memory growth. After adding using blocks, memory usage stabilized. This approach ensures resources are released promptly and prevents memory leaks.”
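
A sketch of the FileStream fix described above (ReportWriter is an illustrative name): the using declaration guarantees Dispose() runs even if an exception is thrown.

```csharp
using System.IO;

public static class ReportWriter
{
    public static void Write(string path, byte[] data)
    {
        using var stream = new FileStream(path, FileMode.Create);
        stream.Write(data, 0, data.Length);
    } // stream.Dispose() runs here, releasing the file handle promptly
}
```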

5
Q

Can you explain the Repository pattern in .NET? How have you implemented it in a project, and what benefits did it provide?

A

The Repository pattern encapsulates all database-related logic and provides an abstraction layer between the data access and business logic. This allows developers to interact with data through well-defined interfaces without dealing directly with low-level database objects. For example, in one project, I created a ProductRepository implementing IRepository<Product>. Service classes could fetch products via GetById() or GetAll() without knowing the EF Core queries behind the scenes. This improved testability, reusability, and maintainability, and allowed us to easily swap out the data access layer if needed.
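
A minimal sketch of the shape described above. In a real project the repository would wrap an EF Core DbContext; a dictionary stands in here so the abstraction is visible (Product and the method set are illustrative):

```csharp
using System.Collections.Generic;

public class Product { public int Id { get; set; } public string Name { get; set; } = ""; }

public interface IRepository<T>
{
    T? GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
}

// Services depend on IRepository<Product>; swapping the backing store
// (EF Core, Dapper, in-memory for tests) does not touch business logic.
public class ProductRepository : IRepository<Product>
{
    private readonly Dictionary<int, Product> _store = new();
    public Product? GetById(int id) => _store.TryGetValue(id, out var p) ? p : null;
    public IEnumerable<Product> GetAll() => _store.Values;
    public void Add(Product entity) => _store[entity.Id] = entity;
}
```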

6
Q

Can you explain the Decorator pattern and give an example of how you have applied it in a .NET project? Why would you choose it over inheritance?

A

The Decorator pattern allows you to add new functionality to a class without modifying it, typically by implementing the same interface or abstract class. For example, instead of creating separate classes for CoffeeWithMilk, CoffeeWithSugar, and so on, you can create decorators like MilkDecorator or SugarDecorator to extend behavior at runtime. In one of my projects, I implemented a CacheRepository using the Decorator pattern to add Redis caching to an existing database repository. This way, I extended functionality without modifying the original repository, and the decorator could be reused for other repositories implementing the same interface. The benefits include better maintainability, reusability, and avoiding an explosion of subclasses.
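
A minimal sketch of the caching decorator described above; a dictionary stands in for Redis, and the interface and class names are illustrative:

```csharp
using System.Collections.Generic;

public interface IProductRepository { string GetName(int id); }

public class DbProductRepository : IProductRepository
{
    public int DbHits { get; private set; }   // counts simulated database reads
    public string GetName(int id) { DbHits++; return $"product-{id}"; }
}

// Decorator: implements the same interface and wraps an existing instance,
// adding caching without modifying the original repository.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;
    private readonly Dictionary<int, string> _cache = new();
    public CachedProductRepository(IProductRepository inner) => _inner = inner;

    public string GetName(int id)
    {
        if (_cache.TryGetValue(id, out var name)) return name; // cache hit
        name = _inner.GetName(id);                             // miss: delegate
        _cache[id] = name;
        return name;
    }
}
```

Because the decorator only depends on IProductRepository, it can wrap any other implementation of that interface.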

7
Q

Imagine you need to design a high-traffic e-commerce backend in ASP.NET Core. Users can browse products, add items to a cart, and place orders. How would you design the system to handle high concurrency, ensure data consistency, and allow scalability? Walk me through your architecture and reasoning.

A

For a high-traffic e-commerce backend, I would separate responsibilities into domains such as Order, Cart, and Catalog, each with its own API and database. Domains would communicate changes via events, so each service can listen to other domains and update its data accordingly. I would make services stateless, allowing us to scale horizontally by adding instances during traffic spikes. To maintain data consistency, I would use idempotency keys or unique IDs: before processing an order or payment, the system checks if the ID already exists and returns the previous result if it does. Otherwise, the request is processed. This approach reduces inconsistencies, supports high concurrency, and ensures a robust, scalable architecture. Event-driven communication and stateless design also make the system easier to maintain and extend.

8
Q

In your high-traffic e-commerce backend, how would you handle failures such as a downstream service (like payment processing or inventory service) being temporarily unavailable? How would you design the system to remain responsive and ensure reliability?

A

“To handle failures in a high-traffic backend, I would first implement retries with exponential backoff for transient failures. If failures persist, I would use a circuit breaker to prevent further requests until the service is healthy, protecting both the API and downstream services. For long-running or non-critical tasks where eventual consistency is acceptable, I would offload processing to a message broker, decoupling producers and consumers and allowing each to scale independently. Finally, I would ensure that API responses are user-friendly, returning proper HTTP status codes instead of raw exceptions, which improves both reliability and user experience. In .NET, I would use tools like Polly for implementing retries and circuit breakers, and a broker like RabbitMQ or Kafka for asynchronous processing.”
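
The retry-with-backoff step can be sketched by hand to make the mechanics concrete; in production a library like Polly provides this (plus circuit breakers) with far more options. All names and thresholds here are illustrative:

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> action, int maxAttempts = 3, int baseDelayMs = 100)
    {
        var rng = new Random();
        for (int attempt = 1; ; attempt++)
        {
            try { return await action(); }
            catch (Exception) when (attempt < maxAttempts) // last attempt rethrows
            {
                // Exponential backoff (100ms, 200ms, 400ms, ...) plus random
                // jitter so many clients don't retry in lockstep.
                int delay = baseDelayMs * (1 << (attempt - 1)) + rng.Next(0, 50);
                await Task.Delay(delay);
            }
        }
    }
}
```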

9
Q

In .NET, what is the difference between Task.Run, async/await, and Parallel.ForEach? When would you use each of them?

A

In .NET, async/await is used primarily for I/O-bound operations. It allows the thread to be released while waiting for the operation to complete, improving scalability. Task.Run is used to offload CPU-bound work to a thread pool thread, allowing the main thread to remain responsive. Parallel.ForEach is used for parallel execution of CPU-bound operations across multiple threads, especially when processing large collections. For example, I would use async/await for HTTP API calls, Task.Run for CPU-intensive calculations, and Parallel.ForEach to process large datasets efficiently.
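
A small sketch contrasting the three approaches (the workloads are illustrative stand-ins):

```csharp
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class WorkloadDemo
{
    // I/O-bound: async/await frees the thread while waiting.
    public static async Task<int> DelayThenAdd(int a, int b)
    {
        await Task.Delay(10);          // stands in for an HTTP or DB call
        return a + b;
    }

    // CPU-bound work moved off the caller's thread onto a pool thread.
    public static Task<long> SumAsync(int[] data) =>
        Task.Run(() => { long s = 0; foreach (int n in data) s += n; return s; });

    // CPU-bound work over a large collection, partitioned across threads.
    public static long ParallelSum(int[] data)
    {
        long total = 0;
        Parallel.ForEach(data, n => Interlocked.Add(ref total, n));
        return total;
    }
}
```

Note the Interlocked.Add in the parallel version: shared state must be synchronized when Parallel.ForEach bodies touch it.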

10
Q

Can you explain how garbage collection works in .NET? How would you identify and fix memory issues in a .NET application?

A

In .NET, garbage collection (GC) automatically frees memory by cleaning up objects that are no longer referenced. GC uses a generational approach: Gen 0 for short-lived objects, Gen 1 for medium-lived, and Gen 2 for long-lived objects like cached data or singletons. The GC runs periodically to reclaim memory, and it also handles the Large Object Heap for objects over 85 KB. To identify memory issues, I use memory profilers such as dotMemory or Visual Studio Diagnostics, check for objects not being released, and look for potential memory leaks. Fixes often involve properly implementing IDisposable, using using statements, and ensuring resources are released promptly.

11
Q

What are action filters?

A

Filters are executed inside the MVC pipeline, after middleware has routed the request to the controller.

They are used to add cross-cutting concerns at the controller or action level.

Types of filters:

Authorization filters – run first, before any other filter.

Resource filters – run before model binding; good for caching.

Action filters – run before and after controller action execution.

Exception filters – handle exceptions thrown by the action.

Result filters – run before and after the action result executes.
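
A sketch of a custom action filter (requires the ASP.NET Core MVC packages; the timing concern and class name are illustrative), showing the before/after hooks:

```csharp
using System;
using System.Diagnostics;
using Microsoft.AspNetCore.Mvc.Filters;

public class TimingActionFilter : IActionFilter
{
    private Stopwatch? _watch;

    public void OnActionExecuting(ActionExecutingContext context)
        => _watch = Stopwatch.StartNew();                  // before the action runs

    public void OnActionExecuted(ActionExecutedContext context)   // after it returns
        => Console.WriteLine(
            $"{context.ActionDescriptor.DisplayName} took {_watch?.ElapsedMilliseconds} ms");
}

// Register the filter in DI and apply it per action or controller with
// [ServiceFilter(typeof(TimingActionFilter))], or globally via
// options.Filters in AddControllers().
```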

12
Q

What is middleware in ASP.NET Core?

A

Middleware are components that handle requests and responses.

Each middleware has the option to:

Process the incoming request.

Call the next middleware in the pipeline (await next()).

Optionally handle the response on the way back.

Common middleware:

Authentication – validate user tokens.

Authorization – check access permissions.

Logging – log requests/responses.

Exception handling – handle errors globally.

Static files – serve files from wwwroot.
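
A sketch of custom inline middleware in Program.cs (minimal hosting assumed; the logging concern is illustrative), showing code before and after `await next()`:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Runs on the way in.
    Console.WriteLine($"--> {context.Request.Method} {context.Request.Path}");
    await next();    // hand the request to the next middleware in the pipeline
    // Runs on the way back, after downstream middleware and the endpoint.
    Console.WriteLine($"<-- {context.Response.StatusCode}");
});

app.MapGet("/", () => "Hello");
app.Run();
```

Registration order matters: middleware registered first sees the request first and the response last.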

13
Q

Can you explain how the ASP.NET Core request pipeline works? What is the role of middleware and filters, and in what order are they executed?

A

In ASP.NET Core, every HTTP request flows through a request pipeline made up of middleware components. Middleware can inspect, modify, or short-circuit requests and responses, and the order they are registered determines their execution. Common examples include authentication, authorization, logging, and exception handling. After middleware, the request reaches the MVC pipeline, where filters provide more granular control. Filters include authorization, resource, action, exception, and result filters, and they allow us to execute logic before or after controller actions. Middleware applies globally, while filters can be applied per controller or action, and both are used for cross-cutting concerns like security, caching, and error handling.

14
Q

How does an HTTP request travel from the client to the server and back to the user?

A
1. Client Initiates Request

A user interacts with a browser or client app (e.g., Angular, Postman).

The client sends an HTTP request (GET, POST, PUT, etc.) to the server’s URL.

2. Network / Web Server Layer

The request first hits a web server (like Kestrel, IIS, or Nginx).

The web server handles TCP/IP communication, HTTPS termination (if needed), and forwards the request to the ASP.NET Core app.

3. ASP.NET Core Request Pipeline

ASP.NET Core receives the request through Kestrel (built-in web server).

The request flows through middleware in the order they are registered:

Authentication middleware – validates user credentials.

Authorization middleware – checks user permissions.

Logging middleware – logs request info.

Exception handling middleware – catches unhandled exceptions.

Routing middleware – determines which controller/action handles the request.

4. MVC / Controller Execution

Once routed, the request enters the MVC pipeline:

Filters execute in order (authorization, resource, action, exception, result).

Controller action executes, performing business logic.

Data access layer or services are called (e.g., Entity Framework querying the database).

Action returns a response (e.g., JSON, HTML).

5. Response Pipeline

The response flows back through the filters and middleware in reverse order:

Result filters may modify the response.

Middleware may log response details or apply compression.

6. Back to Client

ASP.NET Core passes the response to the web server.

The web server sends the HTTP response over TCP/IP to the client.

The client receives the response and renders it (browser) or processes it (API client).

Summary

Client → Web Server → Middleware → MVC/Filters → Controller → DB/Services → Controller → Filters → Middleware → Web Server → Client

Middleware handles cross-cutting concerns.

Filters are more granular, specific to actions or controllers.

Controllers handle the core business logic.

15
Q

In a .NET application using Entity Framework Core, how would you optimize queries and improve performance when dealing with large datasets?

A

To optimize queries in EF Core when working with large datasets, I follow several strategies. First, I avoid excessive .Include() statements that create heavy joins and only load the data I need. Second, I use IQueryable for deferred execution, so queries are not sent to the database until necessary. Third, I implement pagination using Skip() and Take() to load subsets of data instead of the full dataset. I also ensure that proper clustered and non-clustered indexes exist on frequently queried columns to avoid full table scans. Additionally, for read-only queries, I use AsNoTracking() to reduce overhead, and I project only the needed columns to reduce payload. These practices improve query performance and reduce database load.
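
A sketch combining these practices in one query. EF Core is assumed, and AppDbContext, Product, and IsActive are illustrative names, not from a specific project:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public record ProductDto(int Id, string Name);

public static class ProductQueries
{
    public static Task<List<ProductDto>> GetPageAsync(
        AppDbContext db, int page, int pageSize) =>
        db.Products
          .AsNoTracking()                            // read-only: skip change tracking
          .Where(p => p.IsActive)                    // filter in SQL, not in memory
          .OrderBy(p => p.Id)                        // stable order, required for paging
          .Skip((page - 1) * pageSize)               // pagination via Skip/Take
          .Take(pageSize)
          .Select(p => new ProductDto(p.Id, p.Name)) // project only needed columns
          .ToListAsync();                            // single round trip to the DB
}
```

Because the whole chain stays IQueryable until ToListAsync(), EF Core translates everything into one SQL statement.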

16
Q

Caching strategies?

A

Read-through cache: Cache is checked before DB read; if miss, DB queried and cache updated.

Write-through cache: DB write automatically updates cache.

Write-behind / write-back cache: Updates are written to DB asynchronously after caching.

17
Q

Cache invalidation techniques?

A

Time-based expiration (TTL)

Explicit cache invalidation after updates

Event-driven updates using pub/sub or messaging

18
Q

In a high-traffic .NET application, how would you implement caching to improve performance? What types of caching would you use, and how would you ensure data consistency?

A

In a high-traffic .NET application, I would cache frequently accessed data that doesn’t change often. For single-instance apps, I could use in-memory caching, and for multi-instance or distributed systems, I would use a distributed cache like Redis. To ensure data consistency, I would implement cache expiration with a TTL for infrequently changing data, or explicitly invalidate the cache when the underlying data is updated. I might also use read-through or write-through caching patterns depending on the use case. This approach reduces database load, improves response times, and helps the system scale efficiently.
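
A hand-rolled read-through cache with TTL to make the pattern concrete; in a real multi-instance app this role is played by Redis or IMemoryCache, and the class here is purely illustrative:

```csharp
using System;
using System.Collections.Generic;

public class ReadThroughCache<TKey, TValue> where TKey : notnull
{
    private readonly Dictionary<TKey, (TValue Value, DateTime Expires)> _entries = new();
    private readonly TimeSpan _ttl;
    public ReadThroughCache(TimeSpan ttl) => _ttl = ttl;

    public TValue Get(TKey key, Func<TKey, TValue> loadFromDb)
    {
        if (_entries.TryGetValue(key, out var e) && e.Expires > DateTime.UtcNow)
            return e.Value;                         // cache hit within TTL
        var value = loadFromDb(key);                // miss: query the "database"
        _entries[key] = (value, DateTime.UtcNow.Add(_ttl));
        return value;
    }

    // Explicit invalidation: call after the underlying data is updated.
    public void Invalidate(TKey key) => _entries.Remove(key);
}
```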

19
Q

How would you handle concurrency issues in a .NET application — for example, when multiple users try to update the same record at the same time?

A

In .NET, concurrency can be handled using either optimistic or pessimistic strategies. In most scalable applications, I prefer optimistic concurrency. It relies on a version or timestamp column — before saving, EF Core checks if the version matches. If someone else updated the record, it throws a DbUpdateConcurrencyException, which I can handle by retrying or returning a user-friendly error. In EF Core, I can mark a property with [Timestamp] to enable this automatically.
Pessimistic concurrency, on the other hand, locks the record in the database to prevent others from modifying it during a transaction, but it can limit scalability. So generally, I use optimistic concurrency for performance and simplicity.”
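
A sketch of the [Timestamp] approach (EF Core with SQL Server assumed; Order and Status are illustrative names). The attribute maps to a rowversion column that EF Core adds to the WHERE clause of every UPDATE:

```csharp
using System.ComponentModel.DataAnnotations;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "Pending";

    [Timestamp]                             // rowversion: changes on every update
    public byte[]? RowVersion { get; set; } // compared by EF Core before saving
}

// On save, a stale RowVersion means another user updated the row first:
// try { await db.SaveChangesAsync(); }
// catch (DbUpdateConcurrencyException) { /* reload, retry, or return 409 */ }
```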

20
Q

Imagine you have a microservices architecture with separate Order, Payment, and Inventory services. A user places an order and pays successfully, but the Inventory service fails to update due to a temporary issue. How would you ensure data consistency across these services?

A

In a distributed system with Order, Payment, and Inventory services, I’d ensure data consistency using patterns designed for eventual consistency.

First, I’d implement the Outbox pattern. When a service (like Payment) completes a local transaction, it records both the transaction data and the outbound event in the same database transaction — the Outbox table. A background process or message relay then reliably publishes the event to a message broker. This guarantees at-least-once delivery and prevents lost messages if the service crashes before publishing.

Second, I’d use the Saga or Compensating Transaction pattern. Each service performs its own local transaction, and if one service fails, compensating messages are sent to undo the previous steps — for example, refunding the payment or releasing the reserved inventory. This approach provides eventual consistency across services without relying on distributed transactions.

Together, these patterns make the system fault-tolerant and ensure that even partial failures eventually reach a consistent state.

21
Q

How would you design retries and backoff strategies in distributed communication between microservices?

A

I handle retries and backoff using a layered approach. First, I classify errors as transient (network timeouts, HTTP 5xx, 429) or permanent (4xx), and only retry transient errors. I apply per-call timeouts and a retry policy that uses exponential backoff with jitter (to avoid synchronized retry storms), limited to a small number of attempts—typically 3–5—with a reasonable max delay. If failures persist, I use a circuit breaker to stop hammering the downstream service and fail fast until it recovers. For high-concurrency scenarios I add bulkhead isolation so failures in one area don’t exhaust resources.

For operations with side effects (like payments), I require idempotency (idempotency keys) so retries are safe. For message-based workflows I prefer broker-level retries or staged retry queues with a dead-letter queue for poison messages. Finally, I monitor retry metrics and circuit-breaker state so we can tune policies and detect systemic issues. In .NET I typically implement these patterns with HttpClientFactory + Polly for retries and circuit breakers, and Redis or a persistent store for idempotency keys.

22
Q

What is the difference between IEnumerable, IQueryable, ICollection, and List<T> in C#?
When would you use each one, and how do they affect performance and memory usage?

A

IEnumerable is the most basic interface, providing iteration support through foreach. It works well for in-memory collections but does not support modification. IQueryable extends IEnumerable and is intended for querying external data sources like databases. It supports deferred execution and translates LINQ expressions into SQL, which is efficient for large datasets.

ICollection also implements IEnumerable but adds methods like Add, Remove, and the Count property, so it’s useful when you need to modify a collection without caring about its concrete type.

List<T> is a concrete implementation of ICollection that supports index-based access, inserting, and removing elements. I typically use List<T> when I need random access or frequent modifications, ICollection when I need abstraction, IEnumerable for simple iteration, and IQueryable for database queries to optimize performance.
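
A small in-memory demo of the differences above (IQueryable's SQL translation needs a LINQ provider such as EF Core, so it is only noted in a comment):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int[] numbers = { 1, 2, 3, 4, 5 };

// IEnumerable<T>: iteration only. This query is deferred — it runs each time
// it is enumerated. (IQueryable<T> looks similar but builds an expression
// tree that a provider like EF Core translates to SQL.)
IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

// ICollection<T>: adds Count, Add, and Remove on top of iteration.
ICollection<int> bag = new List<int> { 1, 2 };
bag.Add(3);

// List<T>: concrete type with index-based access; ToList() materializes
// the deferred query into memory.
List<int> list = evens.ToList();
Console.WriteLine(list[0]);
```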

23
Q

Can you explain the difference between a delegate, an event, and an Action/Func in C#? When would you use each one?

A

“A delegate in C# is a type-safe reference to a method. It can be passed as a parameter, stored in variables, and invoked dynamically.

An event is a specialized delegate used to implement the publisher-subscriber pattern. Only the class that declares the event can raise it, but other classes can subscribe or unsubscribe. Events are commonly used for notifications, like Button.Click in a UI.

Action and Func are predefined generic delegates that simplify delegate usage. Action represents a method that returns void and can take up to 16 parameters. Func represents a method that returns a value and can take up to 16 parameters as input. Both are often used with lambda expressions, for example in LINQ queries or inline callbacks.

I typically use delegates for callbacks, events for observer-like notifications, and Action/Func for concise inline methods or LINQ operations.”
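
A small self-contained demo of all three (Downloader and the delegate names are illustrative):

```csharp
using System;

public class Downloader
{
    // Event: only Downloader can raise Completed; consumers subscribe with +=.
    public event Action<string>? Completed;
    public void Download(string file) => Completed?.Invoke(file); // raise
}

public static class DelegateDemo
{
    public delegate int Transform(int x);   // a custom delegate type

    // Any method or lambda matching int -> int can be passed as Transform.
    public static int Apply(int value, Transform t) => t(value);
}
```

Usage: `Func<int, int> square = n => n * n;` gives the same effect as a custom delegate with less ceremony, which is why Action/Func dominate in LINQ and callbacks.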

24
Q

Can you explain the Strategy pattern and how it differs from the Decorator pattern? Can you give an example of when you would use it in C#?

A

The Strategy pattern is a behavioral design pattern that allows you to define a family of algorithms or behaviors, encapsulate each one in a class, and make them interchangeable at runtime. It leverages dependency inversion and composition: the context class depends on an interface or abstract class, and you can inject different implementations as needed.

For example, in a payment system, you could have an IPaymentStrategy interface with implementations like CreditCardPayment, PayPalPayment, or CryptoPayment. The order processor class can accept any strategy at runtime and execute the appropriate payment logic.

The main difference from the Decorator pattern is that Strategy selects one behavior dynamically, whereas Decorator adds or layers additional behaviors onto an object.
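
A sketch of the payment example above (the class names follow the answer; the method bodies are illustrative stubs):

```csharp
public interface IPaymentStrategy { string Pay(decimal amount); }

public class CreditCardPayment : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Charged {amount} to credit card";
}

public class PayPalPayment : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Charged {amount} via PayPal";
}

// The context depends only on the interface; the concrete strategy is
// injected at runtime (e.g., chosen from the user's payment method).
public class OrderProcessor
{
    private readonly IPaymentStrategy _payment;
    public OrderProcessor(IPaymentStrategy payment) => _payment = payment;
    public string Checkout(decimal total) => _payment.Pay(total);
}
```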

25
Q

Can you explain the difference between Task, ValueTask, and async/await in C#? When would you use each, and what are the performance implications?

A

In C#, async and await allow developers to write asynchronous code in a natural, sequential style. Task represents an asynchronous operation, either returning a value (Task<T>) or nothing (plain Task). It always allocates an object on the heap, which is fine for most use cases. ValueTask is a lightweight alternative to Task designed for performance-critical scenarios: it avoids the heap allocation when the operation completes synchronously or has a cached result. However, it should be used carefully, because a ValueTask can only be awaited once, and misuse can lead to subtle bugs. I typically use Task for most async operations, and ValueTask when a method is called frequently and can often complete synchronously, such as reading from a cache or in high-throughput libraries.
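
A sketch of the cache scenario above (PriceService and the price are illustrative): on a cache hit, ValueTask wraps the value directly with no Task allocation.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class PriceService
{
    private readonly Dictionary<string, decimal> _cache = new();

    public ValueTask<decimal> GetPriceAsync(string sku)
    {
        if (_cache.TryGetValue(sku, out var price))
            return new ValueTask<decimal>(price); // synchronous hit, no Task allocated
        return new ValueTask<decimal>(LoadAsync(sku)); // miss: wrap the real Task
    }

    private async Task<decimal> LoadAsync(string sku)
    {
        await Task.Delay(5);        // stands in for a DB call
        var price = 9.99m;
        _cache[sku] = price;
        return price;
    }
}
```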
26
Q

What’s the difference between value types and reference types in C#? How are they stored in memory, and what happens when you pass each type to a method?

A

In C#, value types (like int, double, bool, and struct) hold their data directly: on the stack when they are local variables, or inline inside their containing object. Assigning one value type to another creates a copy of the value, so the two variables are completely independent:

int a = 10;
int b = a; // copy created
b = 20;    // a is still 10

Reference types (like class, string, array, and object) are stored on the heap, and the variable itself holds a reference to that object in memory. So when you assign one reference-type variable to another, both variables point to the same object on the heap:

var list1 = new List<int>() { 1, 2 };
var list2 = list1; // both reference the same object
list2.Add(3);      // list1 now also contains 3

When passing to methods: value types are passed by value (a copy is made); reference types pass a reference to the object, so the method can modify the same underlying data. You can force by-reference passing using the ref or out keywords.
27
Q

Can you explain boxing and unboxing in C# — what they are, when they happen, and why they can be a performance concern?

A

Boxing is the process of converting a value type (like an int) into a reference type (object). It happens automatically when a value type is assigned to a variable of type object or to any interface it implements. Unboxing is the reverse: extracting the value type from the boxed object with an explicit cast. Both operations involve allocating memory on the heap and copying data, so they introduce performance overhead and extra garbage collection pressure:

int num = 10;
object obj = num;      // boxing
int result = (int)obj; // unboxing

Repeated boxing and unboxing in loops can significantly degrade performance, so it’s better to use generics when possible (for example List<int> instead of ArrayList) to avoid it.
28
Q

What’s the difference between a struct and a class in C#? When would you choose to use a struct instead of a class?

A

In C#, structs are value types, while classes are reference types. Structs store their data directly, so when you assign one struct to another, it creates a full copy. Classes, on the other hand, store a reference to data on the heap. Structs cannot use inheritance but can implement interfaces, and they cannot have finalizers (and, before C# 10, could not declare parameterless constructors). I typically use structs for small, immutable data types that represent simple values — like coordinates or colors — where copying is cheap and heap allocation isn’t necessary.
29
Can you explain the difference between ref, out, and in parameters in C#? When would you use each? How do they affect performance and behavior?
In C#, ref, out, and in are keywords used to pass arguments by reference, mainly for value types: ref The variable must be initialized before being passed. The method can read and modify the value, and changes are reflected back to the caller. Example: void Increment(ref int x) => x++; int a = 5; Increment(ref a); // a becomes 6 out The variable does not need to be initialized before being passed. The method must assign a value before returning. Often used for returning multiple values. Example: void GetValues(out int x, out int y) { x = 10; y = 20; } in The variable is passed by reference but cannot be modified inside the method. Useful for performance with large value types, avoiding copying while ensuring immutability. Example: void PrintPoint(in Point p) { Console.WriteLine($"{p.X}, {p.Y}"); // p.X = 5; // not allowed } Summary: ref → input + output, must initialize before call. out → output only, must assign in method. in → input only, cannot modify, avoids copy overhead for large structs.