Tell me about a time when you improved an existing .NET application or system. What problem did you identify, what solution did you implement, and what was the outcome?
Answer naturally, and I’ll give feedback immediately afterward.
In a previous role, I worked on a backend system where a UI page had more than 20 components, and each component executed SQL queries on page load. This caused performance issues and slow page rendering. I implemented a solution to load data on demand, so data would only be retrieved when a user interacted with a component. This change made the application 3–4 times faster. Through this, I learned the importance of optimizing data access and improving user experience.
Can you explain how dependency injection works in ASP.NET Core and give an example of how you have used it in a project?
“ASP.NET Core has a built-in dependency injection system, which allows you to register services with their interfaces and inject them where needed. The most common type is constructor injection, but property and method injection are also possible. You can also define lifetimes: transient, scoped, and singleton. DI helps follow the Dependency Inversion Principle and makes testing easier because you can inject mock implementations. For example, in a project, I registered an IEmailService as a singleton and injected it into my NotificationController via constructor injection to handle sending emails efficiently.”
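A minimal sketch of the registration and constructor injection described above. `IEmailService` and `NotificationController` come from the answer; `SmtpEmailService` and the method signatures are illustrative assumptions:

```csharp
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

public interface IEmailService
{
    Task SendAsync(string to, string subject, string body);
}

public class SmtpEmailService : IEmailService
{
    public Task SendAsync(string to, string subject, string body)
        => Task.CompletedTask;   // real SMTP logic would live here
}

// In Program.cs:
// builder.Services.AddSingleton<IEmailService, SmtpEmailService>();

[ApiController]
[Route("api/[controller]")]
public class NotificationController : ControllerBase
{
    private readonly IEmailService _emailService;

    // The container resolves IEmailService and supplies it here.
    public NotificationController(IEmailService emailService)
        => _emailService = emailService;

    [HttpPost]
    public async Task<IActionResult> Notify(string to)
    {
        await _emailService.SendAsync(to, "Notification", "You have a new message.");
        return Ok();
    }
}
```

Swapping `SmtpEmailService` for a mock in tests is exactly the testability benefit the answer mentions.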
In ASP.NET Core, how do you handle long-running tasks or external API calls without blocking the request thread? Can you explain your approach and maybe give an example from your experience?
Answer as if you’re in a real interview, and I’ll give feedback immediately afterward.
In ASP.NET Core, long-running tasks or external API calls should be handled asynchronously using async/await. This allows the thread to be released while waiting for the operation to complete, so it can handle other requests. For example, in one project, I called an external REST API asynchronously using await httpClient.GetAsync(). This approach ensured that our application remained responsive and could handle many concurrent requests efficiently.
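A small sketch of the non-blocking call pattern from the answer; the class name and URL parameter are illustrative:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class ExternalApiClient
{
    private readonly HttpClient _httpClient;

    public ExternalApiClient(HttpClient httpClient) => _httpClient = httpClient;

    // While the request is in flight, the thread returns to the pool
    // and can serve other requests instead of blocking.
    public async Task<string> GetForecastAsync(string url)
    {
        using HttpResponseMessage response = await _httpClient.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

In ASP.NET Core this would typically be registered through `IHttpClientFactory` rather than constructing `HttpClient` directly.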
In a high-traffic ASP.NET Core application, you notice that memory usage is growing unexpectedly over time. How would you identify and fix memory leaks in your application? Can you explain your approach and any tools you might use?
“If I notice memory usage growing unexpectedly in a high-traffic ASP.NET Core application, I first use a memory profiler, like dotMemory or Visual Studio Diagnostics, to identify which objects are not being released. Then, I check if unmanaged resources are disposed properly, either via implementing IDisposable or using using statements. For example, in a project, FileStream objects were not being disposed, causing memory growth. After adding using blocks, memory usage stabilized. This approach ensures resources are released promptly and prevents memory leaks.”
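A minimal version of the `FileStream` fix described above. Without `using`, the file handle survives until a finalizer runs, which under load shows up as steady memory growth:

```csharp
using System.IO;

public static class LogReader
{
    // 'using' guarantees Dispose() runs even if ReadToEnd throws,
    // releasing the unmanaged file handle promptly.
    public static string ReadAll(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
```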
Can you explain the Repository pattern in .NET? How have you implemented it in a project, and what benefits did it provide?
The Repository pattern encapsulates all database-related logic and provides an abstraction layer between the data access and business logic. This allows developers to interact with data through well-defined interfaces without dealing directly with low-level database objects. For example, in one project, I created a ProductRepository implementing IRepository<Product>. Service classes could fetch products via GetById() or GetAll() without knowing EF Core queries behind the scenes. This improved testability, reusability, and maintainability, and allowed us to easily swap out the data access layer if needed.
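A compact sketch of the interface and repository named in the answer. A real implementation would issue EF Core queries; this in-memory variant (the `Add` method and `Product` shape are assumptions) shows how callers stay unaware of the data source:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public interface IRepository<T> where T : class
{
    T? GetById(int id);
    IEnumerable<T> GetAll();
}

public class ProductRepository : IRepository<Product>
{
    // Stands in for a DbContext; an EF Core version would query here.
    private readonly List<Product> _products = new();

    public void Add(Product product) => _products.Add(product);
    public Product? GetById(int id) => _products.FirstOrDefault(p => p.Id == id);
    public IEnumerable<Product> GetAll() => _products;
}
```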
Can you explain the Decorator pattern and give an example of how you have applied it in a .NET project? Why would you choose it over inheritance?
The Decorator pattern allows you to add new functionality to a class without modifying it, typically by implementing the same interface or abstract class. For example, instead of creating separate classes for CoffeeWithMilk, CoffeeWithSugar, and so on, you can create decorators like MilkDecorator or SugarDecorator to extend behavior at runtime. In one of my projects, I implemented a CacheRepository using the Decorator pattern to add Redis caching to an existing database repository. This way, I extended functionality without modifying the original repository, and the decorator could be reused for other repositories implementing the same interface. The benefits include better maintainability, reusability, and avoiding an explosion of subclasses.
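A minimal version of the caching decorator described above, using an in-memory dictionary where the project used Redis (all type names here are illustrative):

```csharp
using System.Collections.Generic;

public interface IProductRepository
{
    string GetName(int id);
}

public class DbProductRepository : IProductRepository
{
    // Stands in for a real database lookup.
    public string GetName(int id) => $"product-{id}";
}

// Decorator: implements the SAME interface, wraps an inner repository,
// and adds caching without modifying the original class.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;
    private readonly Dictionary<int, string> _cache = new();

    public CachedProductRepository(IProductRepository inner) => _inner = inner;

    public string GetName(int id)
    {
        if (!_cache.TryGetValue(id, out var name))
        {
            name = _inner.GetName(id);   // cache miss: delegate to the wrapped repo
            _cache[id] = name;
        }
        return name;
    }
}
```

Because the decorator only depends on `IProductRepository`, it can wrap any repository implementing that interface, which is the reuse benefit the answer mentions.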
Imagine you need to design a high-traffic e-commerce backend in ASP.NET Core. Users can browse products, add items to a cart, and place orders. How would you design the system to handle high concurrency, ensure data consistency, and allow scalability? Walk me through your architecture and reasoning.
For a high-traffic e-commerce backend, I would separate responsibilities into domains such as Order, Cart, and Catalog, each with its own API and database. Domains would communicate changes via events, so each service can listen to other domains and update its data accordingly. I would make services stateless, allowing us to scale horizontally by adding instances during traffic spikes. To maintain data consistency, I would use idempotency keys or unique IDs: before processing an order or payment, the system checks if the ID already exists and returns the previous result if it does. Otherwise, the request is processed. This approach reduces inconsistencies, supports high concurrency, and ensures a robust, scalable architecture. Event-driven communication and stateless design also make the system easier to maintain and extend.
In your high-traffic e-commerce backend, how would you handle failures such as a downstream service (like payment processing or inventory service) being temporarily unavailable? How would you design the system to remain responsive and ensure reliability?
“To handle failures in a high-traffic backend, I would first implement retries with exponential backoff for transient failures. If failures persist, I would use a circuit breaker to prevent further requests until the service is healthy, protecting both the API and downstream services. For long-running or non-critical tasks where eventual consistency is acceptable, I would offload processing to a message broker, decoupling producers and consumers and allowing each to scale independently. Finally, I would ensure that API responses are user-friendly, returning proper HTTP status codes instead of raw exceptions, which improves both reliability and user experience. In .NET, I would use tools like Polly for implementing retries and circuit breakers, and a broker like RabbitMQ or Kafka for asynchronous processing.”
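A sketch of the retry and circuit-breaker policies described above, using Polly's v7-style API (the thresholds and delays are illustrative choices, not recommendations):

```csharp
using System;
using System.Net.Http;
using Polly;   // NuGet: Polly

public static class ResiliencePolicies
{
    // Retry transient failures 3 times with exponential backoff (2s, 4s, 8s).
    public static readonly IAsyncPolicy<HttpResponseMessage> Retry =
        Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .OrResult(r => (int)r.StatusCode >= 500)
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    // After 5 consecutive failures, open the circuit and fail fast for 30 s,
    // protecting both the caller and the struggling downstream service.
    public static readonly IAsyncPolicy<HttpResponseMessage> Breaker =
        Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .OrResult(r => (int)r.StatusCode >= 500)
            .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

    // Retry wraps the breaker, so each retry attempt passes through it.
    public static readonly IAsyncPolicy<HttpResponseMessage> Combined =
        Policy.WrapAsync(Retry, Breaker);
}
```

In practice these policies are usually attached to a named `HttpClient` via `IHttpClientFactory` rather than invoked manually.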
In .NET, what is the difference between Task.Run, async/await, and Parallel.ForEach? When would you use each of them?
“In .NET, async/await is used primarily for I/O-bound operations. It allows the thread to be released while waiting for the operation to complete, improving scalability. Task.Run is used to offload CPU-bound work to a thread pool thread, allowing the main thread to remain responsive. Parallel.ForEach is used for parallel execution of CPU-bound operations across multiple threads, especially when processing large collections. For example, I would use async/await for HTTP API calls, Task.Run for CPU-intensive calculations, and Parallel.ForEach to process large datasets efficiently.”
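The three approaches side by side in a small sketch (the workload, summing squares, is just a stand-in for CPU-bound work):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public static class WorkloadExamples
{
    // I/O-bound: await releases the thread while waiting (HTTP, DB, files).
    public static async Task<string> FetchAsync(Func<Task<string>> ioCall)
        => await ioCall();

    // CPU-bound, off the caller's thread: Task.Run queues work on the pool.
    public static Task<long> SumSquaresAsync(int n)
        => Task.Run(() => Enumerable.Range(1, n).Select(i => (long)i * i).Sum());

    // CPU-bound over a large collection: Parallel.ForEach fans out across cores.
    public static long ParallelSumSquares(int n)
    {
        var results = new ConcurrentBag<long>();   // thread-safe accumulator
        Parallel.ForEach(Enumerable.Range(1, n), i => results.Add((long)i * i));
        return results.Sum();
    }
}
```

Note that `Task.Run` inside ASP.NET Core request handlers is rarely beneficial, since it just moves work between pool threads; it shines in desktop/UI apps or for isolating heavy computation.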
Can you explain how garbage collection works in .NET? How would you identify and fix memory issues in a .NET application?
In .NET, garbage collection (GC) automatically frees memory by cleaning up objects that are no longer referenced. GC uses a generational approach: Gen 0 for short-lived objects, Gen 1 for medium-lived, and Gen 2 for long-lived objects like cached data or singletons. The GC runs periodically to reclaim memory, and it also handles the Large Object Heap for objects over 85 KB. To identify memory issues, I use memory profilers such as dotMemory or Visual Studio Diagnostics, check for objects not being released, and look for potential memory leaks. Fixes often involve properly implementing IDisposable, using using statements, and ensuring resources are released promptly.
What are action filters?
Filters are executed inside the MVC pipeline, after middleware has routed the request to the controller.
They are used to add cross-cutting concerns at the controller or action level.
Types of filters:
Authorization filters – run first, before any other filter.
Resource filters – run before model binding; good for caching.
Action filters – run before and after controller action execution.
Exception filters – handle exceptions thrown by the action.
Result filters – run before and after the action result executes.
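A minimal action filter illustrating the before/after hooks from the list above; the timing-header behavior and names are illustrative:

```csharp
using System.Diagnostics;
using Microsoft.AspNetCore.Mvc.Filters;

// Runs before and after the controller action; applied per action,
// per controller, or globally.
public class TimingActionFilter : IActionFilter
{
    private Stopwatch? _stopwatch;

    public void OnActionExecuting(ActionExecutingContext context)
        => _stopwatch = Stopwatch.StartNew();          // before the action

    public void OnActionExecuted(ActionExecutedContext context)
    {
        _stopwatch?.Stop();                            // after the action
        context.HttpContext.Response.Headers["X-Elapsed-Ms"] =
            _stopwatch?.ElapsedMilliseconds.ToString();
    }
}
```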
What is middleware in ASP.NET Core?
Middleware are components that handle requests and responses.
Each middleware has the option to:
Process the incoming request.
Call the next middleware in the pipeline (await next()).
Optionally handle the response on the way back.
Common middleware:
Authentication – validate user tokens.
Authorization – check access permissions.
Logging – log requests/responses.
Exception handling – handle errors globally.
Static files – serve files from wwwroot.
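The three options above map directly onto a custom middleware class; this logging example is a hedged sketch (names and log format are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestLoggingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // 1. Process the incoming request.
        Console.WriteLine($"--> {context.Request.Method} {context.Request.Path}");

        // 2. Call the next middleware in the pipeline.
        await _next(context);

        // 3. Optionally handle the response on the way back.
        Console.WriteLine($"<-- {context.Response.StatusCode}");
    }
}

// Registered in Program.cs with:
// app.UseMiddleware<RequestLoggingMiddleware>();
```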
Can you explain how the ASP.NET Core request pipeline works? What is the role of middleware and filters, and in what order are they executed?
In ASP.NET Core, every HTTP request flows through a request pipeline made up of middleware components. Middleware can inspect, modify, or short-circuit requests and responses, and the order they are registered determines their execution. Common examples include authentication, authorization, logging, and exception handling. After middleware, the request reaches the MVC pipeline, where filters provide more granular control. Filters include authorization, resource, action, exception, and result filters, and they allow us to execute logic before or after controller actions. Middleware applies globally, while filters can be applied per controller or action, and both are used for cross-cutting concerns like security, caching, and error handling.
How does an HTTP request travel to the server and back to the user?
A user interacts with a browser or client app (e.g., Angular, Postman).
The client sends an HTTP request (GET, POST, PUT, etc.) to the server’s URL.
The request first hits a web server (like Kestrel, IIS, or Nginx).
The web server handles TCP/IP communication, HTTPS termination (if needed), and forwards the request to the ASP.NET Core app.
ASP.NET Core receives the request through Kestrel (built-in web server).
The request flows through middleware in the order they are registered:
Authentication middleware – validates user credentials.
Authorization middleware – checks user permissions.
Logging middleware – logs request info.
Exception handling middleware – catches unhandled exceptions.
Routing middleware – determines which controller/action handles the request.
Once routed, the request enters the MVC pipeline:
Filters execute (authorization, resource, action, exception, result).
Controller action executes, performing business logic.
Data access layer or services are called (e.g., Entity Framework querying the database).
Action returns a response (e.g., JSON, HTML).
The response flows back through the filters and middleware in reverse order:
Result filters may modify the response.
Middleware may log response details or apply compression.
ASP.NET Core passes the response to the web server.
The web server sends the HTTP response over TCP/IP to the client.
The client receives the response and renders it (browser) or processes it (API client).
Client → Web Server → Middleware → MVC/Filters → Controller → DB/Services → Controller → Filters → Middleware → Web Server → Client
Middleware handles cross-cutting concerns.
Filters are more granular, specific to actions or controllers.
Controllers handle the core business logic.
In a .NET application using Entity Framework Core, how would you optimize queries and improve performance when dealing with large datasets?
To optimize queries in EF Core when working with large datasets, I follow several strategies. First, I avoid excessive .Include() statements that create heavy joins and only load the data I need. Second, I use IQueryable for deferred execution, so queries are not sent to the database until necessary. Third, I implement pagination using Skip() and Take() to load subsets of data instead of the full dataset. I also ensure that proper clustered and non-clustered indexes exist on frequently queried columns to avoid full table scans. Additionally, for read-only queries, I use AsNoTracking() to reduce overhead, and I project only the needed columns to reduce payload. These practices improve query performance and reduce database load.
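Several of these strategies combine naturally in one query. A hedged sketch (the `AppDbContext`, `Products` set, `IsActive` column, and DTO shape are all assumptions for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;   // NuGet: Microsoft.EntityFrameworkCore

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public static class ProductQueries
{
    public static async Task<List<ProductDto>> GetPageAsync(
        AppDbContext db, int page, int pageSize)
    {
        return await db.Products
            .AsNoTracking()                  // read-only: skip change tracking
            .Where(p => p.IsActive)
            .OrderBy(p => p.Id)              // stable order required for paging
            .Skip((page - 1) * pageSize)     // pagination instead of full load
            .Take(pageSize)
            .Select(p => new ProductDto      // project only the needed columns
            {
                Id = p.Id,
                Name = p.Name
            })
            .ToListAsync();                  // deferred query executes here
    }
}
```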
Caching strategies?
Read-through cache: Cache is checked before DB read; if miss, DB queried and cache updated.
Write-through cache: DB write automatically updates cache.
Write-behind / write-back cache: Updates are written to DB asynchronously after caching.
Cache invalidation techniques?
Time-based expiration (TTL)
Explicit cache invalidation after updates
Event-driven updates using pub/sub or messaging
In a high-traffic .NET application, how would you implement caching to improve performance? What types of caching would you use, and how would you ensure data consistency?
In a high-traffic .NET application, I would cache frequently accessed data that doesn’t change often. For single-instance apps, I could use in-memory caching, and for multi-instance or distributed systems, I would use a distributed cache like Redis. To ensure data consistency, I would implement cache expiration with a TTL for infrequently changing data, or explicitly invalidate the cache when the underlying data is updated. I might also use read-through or write-through caching patterns depending on the use case. This approach reduces database load, improves response times, and helps the system scale efficiently.
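A cache-aside sketch against `IDistributedCache` (the Redis-backed abstraction), combining the TTL and explicit-invalidation ideas above. The service shape and the `Func` standing in for a repository are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;   // e.g. backed by Redis

public class CatalogService
{
    private readonly IDistributedCache _cache;
    private readonly Func<int, Task<string>> _loadFromDb;   // stands in for a repository

    public CatalogService(IDistributedCache cache, Func<int, Task<string>> loadFromDb)
    {
        _cache = cache;
        _loadFromDb = loadFromDb;
    }

    public async Task<string> GetProductNameAsync(int id)
    {
        string key = $"product:{id}";
        string? cached = await _cache.GetStringAsync(key);
        if (cached != null) return cached;              // cache hit

        string name = await _loadFromDb(id);            // miss: go to the DB
        await _cache.SetStringAsync(key, name, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)   // TTL
        });
        return name;
    }

    // Explicit invalidation after an update keeps readers consistent.
    public Task InvalidateAsync(int id) => _cache.RemoveAsync($"product:{id}");
}
```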
How would you handle concurrency issues in a .NET application — for example, when multiple users try to update the same record at the same time?
In .NET, concurrency can be handled using either optimistic or pessimistic strategies. In most scalable applications, I prefer optimistic concurrency. It relies on a version or timestamp column — before saving, EF Core checks if the version matches. If someone else updated the record, it throws a DbUpdateConcurrencyException, which I can handle by retrying or returning a user-friendly error. In EF Core, I can mark a property with [Timestamp] to enable this automatically.
Pessimistic concurrency, on the other hand, locks the record in the database to prevent others from modifying it during a transaction, but it can limit scalability. So generally, I use optimistic concurrency for performance and simplicity.
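A sketch of the `[Timestamp]`-based optimistic concurrency flow described above; the `Order` entity and update method are illustrative:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "Pending";

    [Timestamp]   // EF Core treats this rowversion as a concurrency token
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

public static class OrderUpdater
{
    public static async Task<bool> TryShipAsync(DbContext db, int orderId)
    {
        var order = await db.Set<Order>().FindAsync(orderId);
        if (order is null) return false;

        order.Status = "Shipped";
        try
        {
            // The generated UPDATE includes RowVersion in its WHERE clause;
            // zero rows affected means someone else changed the record.
            await db.SaveChangesAsync();
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            // Conflict: reload and retry, or surface a friendly error.
            return false;
        }
    }
}
```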
Imagine you have a microservices architecture with separate Order, Payment, and Inventory services. A user places an order and pays successfully, but the Inventory service fails to update due to a temporary issue. How would you ensure data consistency across these services?
In a distributed system with Order, Payment, and Inventory services, I’d ensure data consistency using patterns designed for eventual consistency.
First, I’d implement the Outbox pattern. When a service (like Payment) completes a local transaction, it records both the transaction data and the outbound event in the same database transaction — the Outbox table. A background process or message relay then reliably publishes the event to a message broker. This guarantees at-least-once delivery and prevents lost messages if the service crashes before publishing.
Second, I’d use the Saga or Compensating Transaction pattern. Each service performs its own local transaction, and if one service fails, compensating messages are sent to undo the previous steps — for example, refunding the payment or releasing the reserved inventory. This approach provides eventual consistency across services without relying on distributed transactions.
Together, these patterns make the system fault-tolerant and ensure that even partial failures eventually reach a consistent state.
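The core of the Outbox pattern is that the business row and the event row commit in one local transaction. A hedged sketch with illustrative entity names:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Payment
{
    public int Id { get; set; }
    public int OrderId { get; set; }
    public string Status { get; set; } = "";
}

public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Type { get; set; } = "";
    public string Payload { get; set; } = "";
    public bool Published { get; set; }
}

public static class PaymentHandler
{
    public static async Task CompletePaymentAsync(DbContext db, int orderId)
    {
        await using var tx = await db.Database.BeginTransactionAsync();

        db.Add(new Payment { OrderId = orderId, Status = "Completed" });
        db.Add(new OutboxMessage
        {
            Type = "PaymentCompleted",
            Payload = JsonSerializer.Serialize(new { orderId })
        });

        await db.SaveChangesAsync();   // both rows persist, or neither does
        await tx.CommitAsync();

        // A separate worker polls unpublished OutboxMessages, pushes them
        // to the broker (RabbitMQ/Kafka), and marks Published = true.
    }
}
```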
How would you design retries and backoff strategies in distributed communication between microservices?
I handle retries and backoff using a layered approach. First, I classify errors as transient (network timeouts, HTTP 5xx, 429) or permanent (4xx), and only retry transient errors. I apply per-call timeouts and a retry policy that uses exponential backoff with jitter (to avoid synchronized retry storms), limited to a small number of attempts—typically 3–5—with a reasonable max delay. If failures persist, I use a circuit breaker to stop hammering the downstream service and fail fast until it recovers. For high-concurrency scenarios I add bulkhead isolation so failures in one area don’t exhaust resources.
For operations with side effects (like payments), I require idempotency (idempotency keys) so retries are safe. For message-based workflows I prefer broker-level retries or staged retry queues with a dead-letter queue for poison messages. Finally, I monitor retry metrics and circuit-breaker state so we can tune policies and detect systemic issues. In .NET I typically implement these patterns with HttpClientFactory + Polly for retries and circuit breakers, and Redis or a persistent store for idempotency keys.
What is the difference between IEnumerable, IQueryable, ICollection, and List<T> in C#?
When would you use each one, and how do they affect performance and memory usage?
IEnumerable is the most basic interface, providing iteration support through foreach. It works well for in-memory collections but does not support modification. IQueryable extends IEnumerable and is intended for querying external data sources like databases. It supports deferred execution and translates LINQ expressions into SQL, which is efficient for large datasets.
ICollection also implements IEnumerable but adds methods like Add, Remove, and the Count property, so it’s useful when you need to modify a collection without caring about its concrete type.
List<T> is a concrete implementation of ICollection that supports index-based access, inserting, and removing elements. I typically use List<T> when I need random access or frequent modifications, ICollection when I need abstraction, IEnumerable for simple iteration, and IQueryable for database queries to optimize performance.
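The four types side by side in a small in-memory sketch. Note the `IQueryable` here runs in memory; only against a provider like EF Core would the `Where` translate to SQL:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class CollectionDemo
{
    public static (string Evens, string Filtered) Run()
    {
        List<int> list = new() { 3, 1, 2 };

        // List<T>: index access and mutation.
        list.Add(4);

        // ICollection<T>: Add/Remove/Count without committing to List<T>.
        ICollection<int> collection = list;
        collection.Remove(1);                 // list is now 3, 2, 4

        // IEnumerable<T>: iteration only; lazy, re-evaluated per enumeration.
        IEnumerable<int> evens = list.Where(n => n % 2 == 0);

        // IQueryable<T>: holds an expression tree; a database provider
        // would translate this to SQL instead of filtering in memory.
        IQueryable<int> query = list.AsQueryable().Where(n => n > 2);

        return (string.Join(",", evens), string.Join(",", query));
    }
}
```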
Can you explain the difference between a delegate, an event, and an Action/Func in C#? When would you use each one?
“A delegate in C# is a type-safe reference to a method. It can be passed as a parameter, stored in variables, and invoked dynamically.
An event is a specialized delegate used to implement the publisher-subscriber pattern. Only the class that declares the event can raise it, but other classes can subscribe or unsubscribe. Events are commonly used for notifications, like Button.Click in a UI.
Action and Func are predefined generic delegates that simplify delegate usage. Action represents a method that returns void and can take up to 16 parameters. Func represents a method that returns a value and can take up to 16 parameters as input. Both are often used with lambda expressions, for example in LINQ queries or inline callbacks.
I typically use delegates for callbacks, events for observer-like notifications, and Action/Func for concise inline methods or LINQ operations.”
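All three in one compact sketch; the `Transformer` delegate, `Publisher` class, and log messages are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A named delegate type: a type-safe reference to a method.
public delegate int Transformer(int x);

public class Publisher
{
    // An event: only Publisher can raise it; others subscribe/unsubscribe.
    public event Action<string>? MessageSent;

    public void Send(string msg) => MessageSent?.Invoke(msg);
}

public static class DelegateDemo
{
    public static List<string> Run()
    {
        var log = new List<string>();

        Transformer square = x => x * x;                 // delegate instance
        log.Add($"square(4)={square(4)}");

        Action<string> print = s => log.Add($"action:{s}");  // returns void
        Func<int, int, int> add = (a, b) => a + b;           // returns a value
        print("hi");
        log.Add($"add={add(2, 3)}");

        var pub = new Publisher();
        pub.MessageSent += m => log.Add($"event:{m}");       // subscribe
        pub.Send("order-created");

        // Func<int, bool> in LINQ:
        log.Add($"evens={new[] { 1, 2, 3, 4 }.Count(n => n % 2 == 0)}");

        return log;
    }
}
```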
Can you explain the Strategy pattern and how it differs from the Decorator pattern? Can you give an example of when you would use it in C#?
The Strategy pattern is a behavioral design pattern that allows you to define a family of algorithms or behaviors, encapsulate each one in a class, and make them interchangeable at runtime. It leverages dependency inversion and composition: the context class depends on an interface or abstract class, and you can inject different implementations as needed.
For example, in a payment system, you could have a IPaymentStrategy interface with implementations like CreditCardPayment, PayPalPayment, or CryptoPayment. The order processor class can accept any strategy at runtime and execute the appropriate payment logic.
The main difference from the Decorator pattern is that Strategy selects one behavior dynamically, whereas Decorator adds or layers additional behaviors onto an object.
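The payment example above sketched in code; the interface name follows the answer, while the method shapes are illustrative:

```csharp
// Strategy: interchangeable payment behaviors behind one interface.
public interface IPaymentStrategy
{
    string Pay(decimal amount);
}

public class CreditCardPayment : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Paid {amount} by credit card";
}

public class PayPalPayment : IPaymentStrategy
{
    public string Pay(decimal amount) => $"Paid {amount} via PayPal";
}

// The context depends only on the interface (dependency inversion);
// the concrete strategy is injected and can be swapped at runtime.
public class OrderProcessor
{
    private readonly IPaymentStrategy _payment;
    public OrderProcessor(IPaymentStrategy payment) => _payment = payment;
    public string Checkout(decimal total) => _payment.Pay(total);
}
```

Contrast with the Decorator examples earlier: here `OrderProcessor` picks one behavior; a decorator would wrap an `IPaymentStrategy` to layer extra behavior on top of it.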