Software Development & Testing Flashcards

(674 cards)

1
Q

The Six Lean Principles

A
  1. Value
  2. Value Stream
  3. Flow
  4. Pull
  5. Continuous Improvement
  6. Respect for People
2
Q

Lean Principle of Value in Software Engineering

A
  1. Identify and prioritize features and functionalities that provide the most value to users and stakeholders.
  2. Focus on delivering valuable software increments in each development iteration.
3
Q

Lean Principle of Value Stream in Software Engineering

A
  1. Understand the end-to-end software development process, from requirements gathering to deployment and maintenance.
  2. Identify and eliminate non-value-adding activities and bottlenecks that hinder the delivery of value.
4
Q

Lean Principle of Flow in Software Engineering

A
  1. Optimize the flow of work by maintaining a continuous delivery pipeline.
  2. Reduce batch sizes and cycle times to achieve a steady flow of high-quality features.
5
Q

Lean Principle of Pull in Software Design

A
  1. Adopt a pull-based approach to work, where new work is pulled into the development process based on the team’s capacity and customer demand.
  2. Avoid overloading team members with excessive work items.
6
Q

Lean Principle of Continuous Improvement in Software Design

A
  1. Foster a culture of continuous improvement by encouraging feedback and retrospectives.
  2. Regularly review processes, tools, and practices to identify areas for optimization.
7
Q

Lean Concept of Respect for People in Software Design

A
  1. Recognize the importance of collaboration and communication within the development team and with stakeholders.
  2. Empower team members to make decisions and contribute to the success of the project.
8
Q

The Six Kanban Principles

A

The core principles of Kanban include:
1. Visualize the Workflow
2. Limit Work in Progress (WIP)
3. Manage Flow
4. Make Process Policies Explicit
5. Feedback and Improvement
6. Collaborative Approach

9
Q

Kanban

A

Kanban is a specific implementation of Lean principles, initially developed by Toyota for inventory management. In the context of software development and project management, Kanban is a visual management method that helps teams manage their work and optimize workflow.
Kanban boards often use visual cues like cards to represent work items, with each card progressing through the different stages of the workflow. This visual representation makes it easy for teams to understand the status of work and identify potential areas for improvement.

10
Q

Lean Philosophy

A

Lean is a philosophy and a set of principles aimed at maximizing value while minimizing waste in a process. It was first developed by Toyota in the 1950s and revolutionized the manufacturing industry. Lean principles have since been applied to various domains, including software development.

11
Q

What is the Software Development Life Cycle?

A

A series of general steps in software development (the steps will not always be exactly the same from one methodology to the next).

12
Q

12 Steps of the Software Development Life Cycle

A
  1. Requirements Gathering and Analysis
  2. System Design
  3. Detailed Design
  4. Implementation (Coding)
  5. Testing
  6. Deployment
  7. Maintenance and Support
  8. Documentation
  9. Project Management
  10. Quality Assurance
  11. Version Control and Configuration Management
  12. Deployment and Release Management
13
Q

Requirements Gathering and Analysis in Software Engineering

A

The very first step in the SDLC.
1. Understand the needs and requirements of stakeholders and users.
2. Analyze and document the functional and non-functional requirements of the software.
Example: Conduct interviews and surveys with stakeholders to understand their needs and preferences for a new e-commerce website. Document the required features, such as user registration, product catalog, shopping cart, and payment options.

14
Q

System Design for Software Development

A
  1. Create a high-level system design that outlines the architecture and components of the software.
  2. Break down the system into smaller modules and define their interactions.
    Example: Design the architecture of a mobile application. Plan to use a three-tier architecture with a front-end for the user interface, a middle-tier for business logic, and a back-end for data storage and retrieval.
15
Q

Creating detailed designs from system designs in Software Engineering

A
  1. Design each module in detail, specifying algorithms, data structures, and interfaces.
  2. Create detailed design documents or diagrams to guide the development.
    Example: For the mobile application, design the login module in detail. Specify the algorithms for password hashing and user authentication, as well as the data structures for storing user credentials.
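The password-hashing part of that detailed design can be sketched with Python's standard library. This is a minimal illustration, not a production design; the function names and salt handling are assumptions made for the example.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a salted hash from a password using PBKDF2 from the stdlib."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive the hash with the stored salt and compare digests."""
    _, digest = hash_password(password, salt)
    return digest == expected
```

A detailed design document would also specify where the salt and digest are stored (the data structures for user credentials mentioned above).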
16
Q

“Implementation” in Software Design

A
  1. Write the actual code for the software based on the detailed design.
  2. Use programming languages and tools to develop the functionality.
    Example: Write the code for the login module using a programming language like Java or Python. Develop the necessary functions and classes for handling user authentication and data storage.
17
Q

Testing in Software Design

A
  1. Conduct various types of testing, such as unit testing, integration testing, system testing, and user acceptance testing (UAT)
  2. Identify and fix defects to ensure the software meets the requirements.
    Example: Perform unit testing on the login module to verify that individual functions work correctly. Conduct integration testing to ensure that the module interacts seamlessly with other components.
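A unit test for the login module might look like the sketch below; `authenticate` and its credential rule are hypothetical stand-ins used only to show the shape of a test.

```python
import unittest

def authenticate(username, password):
    """Toy authentication rule used only for this sketch."""
    return username == "alice" and password == "correct-horse"

class TestAuthenticate(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(authenticate("alice", "correct-horse"))

    def test_invalid_password(self):
        self.assertFalse(authenticate("alice", "wrong"))
```

Such tests are typically run with `python -m unittest` as part of the build.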
18
Q

Software Deployment

A
  1. Package the software and prepare it for installation on the target environment.
  2. Deploy the software to production or a testing environment for final validation.
    Example: Package the mobile application and make it available for download on app stores like Google Play or the Apple App Store.
19
Q

Maintenance and Support for Software

A
  1. Provide ongoing support and maintenance for the software.
  2. Address bug fixes, enhancements, and updates as needed.
    Example: After deployment, provide ongoing support for the mobile application, address bug reports, and release updates with new features and improvements.
20
Q

Software Documentation

A

Create comprehensive documentation throughout the development process, including design documents, user manuals, and technical guides.
Example: Create user manuals that explain how to use the e-commerce website or the mobile application. Prepare technical documentation detailing the system architecture and API specifications.

21
Q

Project Management for Software

A
  1. Plan and monitor the project, including resource allocation, timeframes, and risk management.
  2. Collaborate with stakeholders to manage expectations and communicate progress.
    Example: Use project management tools like Jira or Trello to track progress, allocate tasks to team members, and monitor deadlines.
22
Q

Quality Assurance in Software

A

Implement quality assurance practices to ensure that the software meets the required standards and quality criteria.

23
Q

Version Control and Configuration Management in Software

A
  1. Use version control systems to track changes and manage different versions of the software.
  2. Apply configuration management practices to control changes and maintain consistency.
    Example: Use Git as the version control system to manage code changes for the software. Ensure that every code change is committed and tracked with appropriate comments.
24
Q

Deployment and Release Management for Software

A
  1. Plan and manage the release of software updates and new features to users.
  2. Ensure smooth deployment and minimize downtime during releases.
    Example: Plan a controlled deployment of a new version of a web application during off-peak hours to minimize the impact on users. Have a rollback plan in case of any unexpected issues during the release.
25
Object-Oriented Design (OOD)
OOD is one of the most widely used methodologies. It focuses on designing software using objects, which encapsulate data and behavior. OOD emphasizes principles like inheritance, encapsulation, and polymorphism to create modular and reusable software components. Example: Designing a banking software system using OOD principles, where classes like "Account," "Transaction," and "Customer" are designed to encapsulate relevant data and behavior.
26
Model-Driven Architecture (MDA)
MDA is an approach that emphasizes modeling software at a higher level of abstraction. It uses models to represent system requirements, design, and implementation. Model transformations are applied to automatically generate code from the models. Example: Using UML models to represent the structure and behavior of a web application, and then automatically generating code from these models using model transformation tools.
27
Domain-Driven Design (DDD)
DDD focuses on modeling software based on the domain or business context it serves. It involves close collaboration between domain experts and developers to create a rich and expressive domain model. Example: Employing DDD to build an e-commerce platform, where the domain model represents concepts like "Product," "ShoppingCart," and "Order" in a way that closely aligns with the business domain.
28
Service-Oriented Design (SOD)
SOD is an architectural approach that designs software as a collection of services. Services are loosely coupled, autonomous, and communicate through well-defined interfaces. This approach supports interoperability and scalability. Example: Designing a distributed system using a microservices architecture, where each microservice represents a specific service with its well-defined API and responsibilities.
29
Component-Based Design (CBD)
CBD involves designing software by assembling pre-built, reusable components. Components encapsulate specific functionality and can be composed to create larger systems. Example: Developing a content management system (CMS) using pre-built components for user authentication, content editing, and user interface elements that can be assembled to create the complete CMS.
30
Data-Driven Design
Data-driven design focuses on designing software by understanding the data requirements and modeling the data structures and relationships first Example: Designing a data analytics platform, where the structure and flow of data are at the center of the design to ensure efficient data processing and analysis.
31
Event-Driven Design (EDD)
EDD involves designing software based on the handling of events. Components respond to events and messages asynchronously, enabling systems to be more reactive and loosely coupled. Example: Implementing a real-time notification system that relies on event-driven architecture, where events like "NewMessage" or "PaymentProcessed" trigger relevant actions and notifications.
32
Structured Design
Structured design uses a systematic approach to divide the software into smaller, manageable modules. It employs techniques like data flow diagrams and structure charts to organize the system. Example: Creating a software solution using structured programming techniques, with a clear hierarchy of functions and a top-down approach to problem-solving.
33
Rapid Application Development (RAD)
RAD is a methodology that prioritizes rapid prototyping and iterative development. It involves close collaboration between developers and stakeholders and focuses on quickly delivering a functional product. Example: Prototyping and iterating on the design of a mobile app to quickly incorporate user feedback and deliver a minimum viable product (MVP) in a short time frame.
34
Agile Design
Agile design methodologies, like Scrum and Extreme Programming (XP), emphasize flexibility and adaptability. They promote incremental and iterative development with a focus on customer collaboration and feedback.
35
User-Centered Design (UCD)
UCD focuses on designing software with a strong emphasis on understanding user needs, preferences, and behaviors. It involves user research, usability testing, and user feedback throughout the design process. Example: Building a user-friendly interface for a video conferencing application, where designers conduct usability tests and user interviews to inform the design decisions.
36
Aspect-Oriented Design (AOD)
AOD is an extension of object-oriented design that focuses on separating cross-cutting concerns, such as logging, security, and error handling, from the core business logic. It allows developers to address concerns that cut across multiple modules or components in a more modular and maintainable way. Example: Applying aspect-oriented design to a web application to separate cross-cutting concerns, such as logging, security, and error handling, from the core business logic.
37
Data-Flow Design
Data-Flow Design focuses on designing software by modeling the flow of data between different components or modules. It emphasizes how data moves through the system and how it is processed at various stages.
38
Responsibility-Driven Design (RDD)
RDD is a design methodology that focuses on identifying responsibilities for each module or component in a software system. It emphasizes designing modules based on their responsibilities and interactions with other modules.
39
Architectural Design Patterns in software engineering
Architectural design patterns, such as MVC (Model-View-Controller), MVVM (Model-View-ViewModel), and Hexagonal Architecture, provide reusable solutions to common architectural challenges. They guide the overall structure and organization of software systems.
40
Key Benefits of the Model-View-Controller Architecture
  1. Separation of Concerns: Each component has a specific role, making the codebase easier to maintain and understand.
  2. Reusability: The Model and View can be reused in different parts of the application or even in different applications altogether.
  3. Flexibility: Changes to one component can be made without affecting the others, facilitating code modifications and updates.
  4. Testability: Isolated components allow for easier unit testing of the application's logic.
41
"Flow of interaction" in the MVC architecture
  1. A user interacts with the View by providing input, such as clicking a button or entering data into a form.
  2. The View forwards the user input to the Controller.
  3. The Controller processes the input and, if necessary, updates the Model.
  4. The Model notifies the View of any changes in the data.
  5. The View updates its display based on the new data from the Model.
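The interaction flow above can be sketched minimally in Python. The class and method names here are illustrative assumptions; real MVC frameworks wire these pieces for you.

```python
class Model:
    """Holds the data and notifies observers when it changes."""
    def __init__(self):
        self._observers = []
        self.value = ""

    def attach(self, observer):
        self._observers.append(observer)

    def set_value(self, value):
        self.value = value
        for obs in self._observers:  # Model notifies the View of changes
            obs.refresh()

class View:
    """Displays Model data; forwards user input to the Controller."""
    def __init__(self, model, controller):
        self.model = model
        self.controller = controller
        self.displayed = ""
        model.attach(self)

    def user_input(self, text):
        self.controller.handle(text)  # View forwards input to the Controller

    def refresh(self):
        self.displayed = self.model.value  # View re-reads the Model

class Controller:
    """Processes input and updates the Model."""
    def __init__(self, model):
        self.model = model

    def handle(self, text):
        self.model.set_value(text.strip())  # Controller updates the Model
```

Wiring `Model`, `Controller`, and `View` together and calling `view.user_input("  hello  ")` propagates the change back to `view.displayed` via the Model notification.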
42
"Model" in Architectual design patterns
The Model represents the application's data and business logic. It encapsulates the data and provides methods to access, modify, and manipulate that data. The Model is independent of the user interface and user input. It notifies the View of any changes to the data so that the View can update itself accordingly.
43
"View" in MVC
The View is responsible for presenting the data to the user and displaying the user interface. It represents the visual representation of the Model's data. Views observe the Model for changes and update themselves whenever the data changes. Views do not contain any business logic; they only display information provided by the Model.
44
"Controller" in Architectual design patterns
The Controller acts as an intermediary between the Model and the View. It handles user input and updates the Model or View accordingly. When a user interacts with the user interface, the Controller processes the input, modifies the Model if needed, and updates the View to reflect any changes in the data. The Controller facilitates communication between the Model and the View without the two components being directly aware of each other.
45
What is Model-View-Controller (MVC) in Software engineering?
A software architectural pattern commonly used to design user interfaces and organize the interaction between components in an application. It separates the concerns of data management, user interface, and user input into distinct components, allowing for more maintainable and flexible designs. The MVC pattern is widely used in web development frameworks, desktop applications, and other software systems.
46
Model-View-ViewModel (MVVM) architectural pattern
An evolution of the Model-View-Controller (MVC) pattern. MVVM is commonly used in software engineering, especially in the context of developing user interfaces for modern applications. It was first introduced by Microsoft as part of its development framework, Windows Presentation Foundation (WPF), and is now widely used in other frameworks like Xamarin and Angular. MVVM separates the concerns of data, user interface, and user interaction into distinct components. However, MVVM introduces a more explicit and systematic approach to handling data binding and user interactions, making it particularly suitable for applications with graphical user interfaces (GUIs) and data-driven views.
47
"View" in MVC v. MVVM
The View in MVVM corresponds to the user interface. It is responsible for rendering the visual representation of the data provided by the ViewModel and handling user interactions. Unlike the traditional View in MVC, the View in MVVM has no direct dependency on the Model.
48
ViewModel in MVVM
The ViewModel is a new component introduced in MVVM, and it serves as an intermediary between the Model and the View. The ViewModel exposes data and commands to the View, allowing the View to bind and interact with the data without being aware of the Model's details. The ViewModel is responsible for preparing and shaping the data from the Model into a form suitable for the View to present.
49
The flow of interaction in the MVVM pattern
  1. The ViewModel retrieves data from the Model or other data sources and formats it for presentation.
  2. The View binds to the data provided by the ViewModel and displays it on the user interface.
  3. The user interacts with the View (e.g., clicks a button, enters data).
  4. The View communicates the user’s interactions to the ViewModel.
  5. The ViewModel processes the user input, updates the Model if necessary, and prepares any changes to be reflected in the View.
  6. The View updates its display based on the changes made by the ViewModel.
50
The key benefits of using the MVVM pattern
  1. Separation of Concerns: MVVM cleanly separates the responsibilities of data presentation, user interaction, and data management.
  2. Testability: The ViewModel can be easily tested in isolation, independent of the View and the Model, allowing for comprehensive unit testing.
  3. Data Binding: The MVVM pattern leverages data binding techniques to establish a dynamic connection between the ViewModel and the View, ensuring that UI elements automatically update when underlying data changes.
  4. Code Reusability: The ViewModel can be reused in multiple Views, promoting code reusability.
51
Hexagonal Architecture
Software design pattern that emphasizes the separation of concerns and the independence of the application core from external systems, frameworks, and interfaces. The architecture gets its name from its characteristic hexagonal shape, with the core application logic at the center and various ports and adapters around it.
52
Key principles of Hexagonal Architecture
  1. Core Application Logic: The core application logic represents the business rules and domain-specific functionality. It is at the heart of the architecture and is completely decoupled from external dependencies.
  2. Ports: Ports define interfaces through which the core application communicates with the external world. These interfaces act as entry and exit points for data and interactions.
  3. Adapters: Adapters are the implementations of the ports. They provide the necessary conversions and mappings to connect the core application with external systems, such as databases, user interfaces, or third-party services.
  4. Dependency Inversion Principle: Hexagonal Architecture adheres to the Dependency Inversion Principle, where higher-level modules (the core application) do not depend on lower-level modules (adapters and frameworks). Instead, both depend on abstractions (ports).
53
The flow of communication in Hexagonal Architecture
  1. External systems or actors interact with the application through the defined ports.
  2. The core application logic processes the incoming data and business rules without being aware of the specific external sources.
  3. When necessary, the core application interacts with external systems (e.g., persisting data to a database or sending notifications) through the defined ports.
  4. The adapters, which implement the ports, handle the communication between the core application and external systems. These adapters handle the specifics of integration and communication protocols.
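This port/adapter split can be sketched as follows; the `OrderRepository` port and `PlaceOrder` use case are hypothetical names chosen for the example.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Port: the interface the core application depends on."""
    @abstractmethod
    def save(self, order): ...

class PlaceOrder:
    """Core application logic: talks only to the port, never to a concrete adapter."""
    def __init__(self, repository):
        self.repository = repository

    def execute(self, order):
        # Business rule lives in the core, independent of any infrastructure.
        if not order.get("items"):
            raise ValueError("order must contain at least one item")
        self.repository.save(order)
        return order

class InMemoryOrderRepository(OrderRepository):
    """Adapter: one concrete implementation of the port (e.g., for tests)."""
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)
```

Swapping the in-memory adapter for a database-backed one requires no change to `PlaceOrder`, which is the point of the architecture.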
54
Benefits of Hexagonal Architecture
  1. Separation of Concerns: The clear separation between the core application and external systems makes the codebase easier to maintain and test.
  2. Testability: The core application can be extensively unit-tested in isolation since it is independent of the external dependencies.
  3. Flexibility and Adaptability: The architecture allows for easy swapping or modification of adapters to integrate with different external systems without affecting the core application.
  4. Clean Code: Hexagonal Architecture encourages a clean and structured design, promoting maintainability and readability.
  5. Domain Focus: The core application can focus solely on domain logic without being cluttered with technical concerns.
55
Design by Contract (DbC)
DbC is a methodology that emphasizes specifying explicit contracts between components. Contracts define the expectations and responsibilities of components, providing a clear and enforceable set of rules for interaction.
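In languages without native contract support, contracts are often approximated with assertions, as in this sketch (the bank-account rule is a made-up example):

```python
def withdraw(balance, amount):
    """Contract-style checks: preconditions on entry, a postcondition on exit."""
    # Preconditions: the caller's obligations.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"
    new_balance = balance - amount
    # Postcondition: the supplier's guarantee to the caller.
    assert new_balance >= 0, "postcondition: balance stays non-negative"
    return new_balance
```

Violating a precondition signals a bug in the caller; violating a postcondition signals a bug in the implementation.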
56
User Story Mapping
User Story Mapping is a visual technique that helps in understanding and organizing the features and functionalities of a software system from a user's perspective. It allows teams to prioritize and plan development based on user needs.
57
Data-Centric Design
Data-Centric Design focuses on designing software systems around data and databases. It ensures that data is the central focus and all functionalities are designed to efficiently handle and process data.
58
Test-Driven Design (TDD)
TDD is a methodology where developers write automated tests before writing the actual code. The design of the software evolves through test creation, which leads to better testability and maintainability.
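The test-first rhythm looks like this sketch: the test is written before the function it exercises (`slugify` is a hypothetical example function).

```python
import unittest

# Step 1 in TDD: the test exists before the implementation and fails first.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write the simplest code that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")
```

Step 3 is refactoring with the test as a safety net, then repeating the cycle for the next behavior.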
59
Iterative and Incremental Design
This approach involves designing and developing the software in small increments, with each iteration adding new features or enhancements based on user feedback.
60
Formal Methods/Formal Design
Formal methods involve using mathematical techniques for the specification, validation, and verification of software. It ensures high levels of correctness and reliability.
61
Adaptive Software Development (ASD)
ASD is a flexible methodology that adapts to changing requirements and priorities. It promotes collaboration and continuous learning to deliver the most valuable features.
62
Service-Oriented Architecture (SOA)
SOA is an architectural design approach that focuses on building software as a collection of loosely coupled services that communicate through standardized interfaces. It promotes reusability and interoperability.
63
Event-Driven Architecture (EDA)
EDA is an architectural approach that emphasizes the production, detection, and consumption of events in a software system. It enables loosely coupled and reactive systems.
64
Feature-Driven Development (FDD)
FDD is an iterative and incremental design methodology that organizes software development around feature-driven activities. It emphasizes modeling, design inspection, and regular builds.
65
Cognitive Walkthrough
Cognitive Walkthrough is a design methodology that involves simulating user interactions with the software to identify potential usability issues and improvements.
66
Six Sigma for Software Design
Six Sigma is a data-driven approach that aims to improve the quality of software design by reducing defects and variations in the development process.
67
SOLID Principles
SOLID is an acronym for five principles (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion) that guide object-oriented design to promote maintainable and flexible software.
68
Concurrent Design
Concurrent Design emphasizes designing software with concurrent or parallel processing in mind. It is used for applications that require multiple threads or processes to run simultaneously.
69
Aspect-Oriented Modeling (AOM)
AOM is a modeling methodology that focuses on capturing cross-cutting concerns or aspects in software systems, allowing for separate management of concerns such as security, logging, and error handling
70
Contextual Design
Contextual Design is a user-centered design methodology that emphasizes understanding the context in which users will interact with the software. It involves observation and analysis of users in their work environment to inform the design process.
71
Rational Unified Process (RUP)
RUP is an iterative software development process that follows the Unified Modeling Language (UML) and emphasizes iterative development, use cases, and architecture-centric design.
72
Dynamic Systems Development Method (DSDM)
DSDM is an agile methodology that focuses on delivering software in a fixed time frame and budget while prioritizing the most critical features.
73
Data Modeling
Data Modeling is a methodology that involves creating a conceptual, logical, and physical representation of data in a software system. It helps ensure data integrity and consistency.
74
User-Centered Analysis (UCA)
UCA is an analysis methodology that focuses on understanding user needs and goals to inform the design and development of software.
75
Domain-Specific Modeling (DSM)
DSM is an approach that involves creating models and languages specifically tailored to a particular domain, making it easier to express domain concepts in software.
76
Hierarchical Input Process Output (HIPO)
HIPO is a top-down design methodology that uses a hierarchical structure to represent the modules or components of a software system.
77
Use Case-Driven Development
Use Case-Driven Development focuses on designing software functionality based on specific use cases or user scenarios.
78
Software Architecture Patterns
Software Architecture Patterns, such as Client-Server, Peer-to-Peer, and Microservices, provide high-level structures and guidelines for organizing software systems.
79
Information Hiding
Information Hiding is a design principle that emphasizes encapsulating implementation details within modules to reduce complexity and improve maintainability.
80
Single Responsibility Principle (SRP)
A class or module should have only one reason to change. It should be responsible for only one specific functionality or behavior.
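A common SRP illustration is splitting report generation from report persistence, so each class has exactly one reason to change (the class names here are hypothetical):

```python
class ReportGenerator:
    """One responsibility: produce the report's content."""
    def generate(self, data):
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class ReportSaver:
    """A separate responsibility: persist content somewhere."""
    def __init__(self):
        self.store = {}

    def save(self, name, content):
        self.store[name] = content
```

A formatting change now touches only `ReportGenerator`; a storage change touches only `ReportSaver`.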
81
Open/Closed Principle (OCP)
Software entities (classes, modules, functions) should be open for extension but closed for modification. New functionality should be added through extension, not by changing existing code.
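One way to sketch OCP is a discount hierarchy: new discount types are added as subclasses, and the checkout code never changes (the classes here are invented for the example):

```python
class Discount:
    """Base behavior: no discount."""
    def apply(self, price):
        return price

class PercentageDiscount(Discount):
    """New behavior added by extension, not by editing existing code."""
    def __init__(self, percent):
        self.percent = percent

    def apply(self, price):
        return price * (1 - self.percent / 100)

def checkout(price, discount):
    # Closed for modification: this function works with any Discount subtype.
    return discount.apply(price)
```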
82
Liskov Substitution Principle (LSP)
Subtypes should be substitutable for their base types. Objects of derived classes should be able to replace objects of the base class without affecting the correctness of the program
83
Interface Segregation Principle (ISP)
Clients should not be forced to depend on interfaces they do not use. Keep interfaces focused and specific to the needs of their clients.
84
Dependency Inversion Principle (DIP)
High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details, but details should depend on abstractions.
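The inversion can be sketched with an abstract `MessageSender` that both the high-level service and the low-level implementation depend on (all names here are illustrative):

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """Abstraction that both levels depend on."""
    @abstractmethod
    def send(self, text): ...

class ConsoleSender(MessageSender):
    """Low-level detail, depending on the abstraction."""
    def __init__(self):
        self.sent = []

    def send(self, text):
        self.sent.append(text)

class AlertService:
    """High-level module: depends on MessageSender, not on ConsoleSender."""
    def __init__(self, sender):
        self.sender = sender

    def alert(self, text):
        self.sender.send(f"ALERT: {text}")
```

Replacing `ConsoleSender` with an email or SMS implementation requires no change to `AlertService`.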
85
Overengineering in Software Design
Unintentionally adding unnecessary complexity to your design. Overengineering can make the software difficult to maintain, understand, and modify in the future.
86
11 Benefits of Modular Code
  1. Ease of Maintenance
  2. Code Reusability
  3. Scalability
  4. Parallel Development
  5. Testing and Debugging
  6. Encapsulation and Information Hiding
  7. Flexibility
  8. Reduced Complexity
  9. Code Organization
  10. Improved Collaboration
  11. Domain Understanding (by dividing the software into modules that reflect the domain or functional areas, developers can better align the software design with the real-world problem it solves)
87
Modular Code for Improving Software Maintenance
Want to design your software using a modular approach, where each module represents a specific functionality or feature. Encapsulate implementation details within modules, exposing only essential interfaces to other parts of the system.
88
"Separation of Concerns" for Software Maintenance
Want to ensure that different concerns (e.g., user interface, business logic, data access) are separated into distinct components. This separation makes it easier to understand and modify specific parts of the codebase without affecting others.
89
Consistent Coding Standards for Software Maintenance
Want to enforce consistent coding standards and best practices across the development team. Consistency in code style and structure improves readability and makes maintenance tasks more predictable.
90
Documentation for Software Maintenance
Want to provide clear and concise comments and documentation within the code. Explain complex algorithms, non-obvious decisions, and the purpose of functions or classes. Well-documented code helps future developers understand the intent behind the design.
91
Version Control for Software Maintenance
Want to use a version control system (e.g., Git) to track changes to the codebase and collaborate effectively with the team. Version control allows you to revert to previous versions and understand the evolution of the software.
92
Automated Testing for Software Maintenance
Implement automated unit tests, integration tests, and regression tests. Tests ensure that changes to the codebase do not introduce unintended bugs and verify that the software functions correctly after modifications.
93
Error Handling for Software Maintenance
Implement robust error handling and logging mechanisms to identify and diagnose issues effectively. Proper error handling helps maintainers understand the system's state and identify the root cause of problems.
94
Refactoring for Software Maintenance
Regularly refactor the code to improve its structure, readability, and maintainability. Refactoring helps eliminate technical debt and ensures the codebase remains clean and organized.
95
Minimizing Dependencies for Software Maintenance
Keep dependencies between modules and components as minimal as possible. Reducing dependencies makes it easier to make changes to individual modules without affecting the entire system.
96
Continuous Integration and Continuous Deployment (CI/CD) for Software Maintenance
Implement CI/CD pipelines to automate the build, testing, and deployment processes. CI/CD helps ensure that changes are quickly validated and delivered to production.
97
Code Reviews for Software Maintenance
Conduct regular code reviews to catch potential issues, share knowledge, and maintain code quality standards.
98
Factory Method Pattern
The Factory Method Pattern allows for the creation of objects without specifying the exact class of the object that will be created. This pattern helps decouple the client code from the concrete implementation, making it easier to introduce new classes without modifying existing code.
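A minimal Python sketch of the idea (the `Exporter` hierarchy and `make_exporter` are invented for illustration): callers ask the factory for an exporter by name and never reference a concrete class, so new formats can be registered without touching client code.

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())

def make_exporter(kind: str) -> Exporter:
    # The factory method hides the concrete class from callers.
    registry = {"json": JsonExporter, "csv": CsvExporter}
    return registry[kind]()
```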
99
Dependency Injection Pattern
Dependency Injection (DI) is a technique used to inject dependencies into a class rather than having the class create them. By injecting dependencies, the code becomes more modular, easier to test, and promotes loose coupling.
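A small sketch, with hypothetical names: `OrderService` receives its `Notifier` through the constructor instead of building one, so a test double can be swapped in without modifying the service.

```python
class Notifier:
    def send(self, msg: str) -> str:
        raise NotImplementedError

class EmailNotifier(Notifier):
    def send(self, msg: str) -> str:
        return f"emailed: {msg}"

class RecordingNotifier(Notifier):
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, msg: str) -> str:
        self.sent.append(msg)
        return "recorded"

class OrderService:
    def __init__(self, notifier: Notifier):
        # The dependency is injected, not constructed here.
        self.notifier = notifier
    def place(self, item: str) -> str:
        return self.notifier.send(f"ordered {item}")
```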
100
Strategy Pattern
The Strategy Pattern defines a family of algorithms and allows them to be interchangeable. It helps to isolate algorithmic logic, making it easier to add or modify algorithms without changing the context using them.
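A sketch under assumed names (`Catalog`, `by_price`, `by_name`): the sort key is an interchangeable strategy, so adding a new ordering never changes the context class.

```python
def by_price(item: dict) -> float:
    return item["price"]

def by_name(item: dict) -> str:
    return item["name"]

class Catalog:
    def __init__(self, items: list):
        self.items = items
    def sorted_by(self, strategy) -> list:
        # The strategy is passed in; Catalog stays unchanged
        # no matter how many orderings are added.
        return sorted(self.items, key=strategy)
```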
101
Observer Pattern
The Observer Pattern establishes a one-to-many dependency between objects, so that when one object (the subject) changes state, all its dependents (observers) are notified and updated automatically. This pattern is helpful for decoupling components and ensuring consistency between related objects.
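A bare-bones sketch (class and method names are illustrative): the subject keeps a list of observer callables and pushes state changes to all of them.

```python
class Subject:
    def __init__(self):
        self._observers = []
    def attach(self, observer) -> None:
        self._observers.append(observer)
    def set_state(self, value) -> None:
        self._value = value
        # Notify every registered observer of the change.
        for observer in self._observers:
            observer(value)
```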
102
Decorator Pattern
The Decorator Pattern allows behavior to be added to individual objects without affecting the behavior of other objects from the same class. It supports the open-closed principle, enabling easy extension of functionality without modifying existing code.
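A sketch of the structural (GoF) decorator, which is distinct from Python's `@` syntax; `Coffee` and `MilkDecorator` are invented names. The wrapper exposes the same interface as the wrapped object and layers behavior on top.

```python
class Coffee:
    def cost(self) -> float:
        return 2.0
    def label(self) -> str:
        return "coffee"

class MilkDecorator:
    """Wraps any object exposing cost()/label(), adding milk on top."""
    def __init__(self, inner):
        self.inner = inner
    def cost(self) -> float:
        return self.inner.cost() + 0.5
    def label(self) -> str:
        return self.inner.label() + "+milk"
```

Decorators nest: `MilkDecorator(MilkDecorator(Coffee()))` adds milk twice, while plain `Coffee` objects are unaffected.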
103
Adapter Pattern
The Adapter Pattern allows incompatible interfaces to work together. It acts as a bridge between two interfaces, making it easier to integrate new components or systems without changing the existing codebase.
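A sketch with hypothetical classes: clients expect `get_celsius()`, the legacy sensor only offers `get_fahrenheit()`, and the adapter bridges the two without changing either side.

```python
class FahrenheitSensor:
    """Existing component whose interface we cannot change."""
    def get_fahrenheit(self) -> float:
        return 212.0

class CelsiusAdapter:
    """Presents the Celsius interface that client code expects."""
    def __init__(self, sensor: FahrenheitSensor):
        self.sensor = sensor
    def get_celsius(self) -> float:
        return (self.sensor.get_fahrenheit() - 32) * 5 / 9
```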
104
Facade Pattern
The Facade Pattern provides a unified interface to a set of interfaces in a subsystem, simplifying the client's interaction with the system. It helps hide complex system structures and provides a clear entry point for clients.
105
Template Method Pattern
The Template Method Pattern defines the skeleton of an algorithm but allows subclasses to override specific steps. This pattern promotes code reuse and consistency across multiple implementations.
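A minimal sketch (names invented): `run` fixes the algorithm's skeleton, `parse` must be supplied by subclasses, and `summarize` is a hook with a default that may be overridden.

```python
from abc import ABC, abstractmethod

class ReportPipeline(ABC):
    def run(self, raw: str) -> int:
        # Template method: the skeleton is fixed here...
        records = self.parse(raw)
        return self.summarize(records)
    @abstractmethod
    def parse(self, raw: str) -> list: ...
    def summarize(self, records: list) -> int:
        # ...while individual steps may be overridden.
        return len(records)

class CsvReport(ReportPipeline):
    def parse(self, raw: str) -> list:
        return raw.split(",")
```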
106
Command Pattern
The Command Pattern encapsulates a request as an object, allowing clients to parameterize objects with queues, undo operations, and log requests. This pattern makes it easier to support undo/redo functionality and to decouple senders and receivers of commands.
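A sketch of the request-as-object idea with invented names: the invoker logs executed commands, which makes undo a matter of replaying the log in reverse.

```python
class AddItem:
    """Encapsulates a request as an object, with an undo."""
    def __init__(self, items: list, value):
        self.items = items
        self.value = value
    def execute(self) -> None:
        self.items.append(self.value)
    def undo(self) -> None:
        self.items.remove(self.value)

class Invoker:
    def __init__(self):
        self.history = []
    def run(self, command) -> None:
        command.execute()
        self.history.append(command)  # logged, so it can be undone
    def undo_last(self) -> None:
        self.history.pop().undo()
```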
107
Composite Pattern
The Composite Pattern treats individual objects and compositions of objects uniformly. It allows you to compose objects into tree-like structures to represent part-whole hierarchies, making it easier to work with complex object structures.
108
Proxy Pattern
The Proxy Pattern provides a surrogate or placeholder object that controls access to another object. It is useful for adding an additional layer of control or caching without altering the underlying object's implementation.
109
State Pattern
The State Pattern allows an object to change its behavior when its internal state changes. It helps manage complex conditional logic by representing each state as a separate class, making it easier to add or modify states without affecting other states.
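A toy traffic-light sketch (all names invented): each state is its own class and knows its successor, so transitions replace an if/elif chain over state names.

```python
class Green:
    name = "green"
    def next(self):
        return Yellow()

class Yellow:
    name = "yellow"
    def next(self):
        return Red()

class Red:
    name = "red"
    def next(self):
        return Green()

class TrafficLight:
    def __init__(self):
        self.state = Green()
    def advance(self) -> None:
        # Behavior is delegated to the current state object.
        self.state = self.state.next()
```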
110
Null Object Pattern
The Null Object Pattern provides an object that represents "null" or "no result" scenarios. It ensures that code can handle null values safely and reduces the need for explicit null checks.
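A small sketch with invented names: `NullLogger` is a do-nothing stand-in that satisfies the logger interface, so `process` never needs an `if logger is not None` check.

```python
class ConsoleLogger:
    def log(self, msg: str) -> None:
        print(msg)

class NullLogger:
    """Stands in for 'no logger'; safely does nothing."""
    def log(self, msg: str) -> None:
        pass

def process(data: list, logger=NullLogger()) -> list:
    # No explicit null check needed anywhere in this function.
    logger.log(f"processing {len(data)} items")
    return [d * 2 for d in data]
```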
111
Command-Query Responsibility Segregation (CQRS)
CQRS separates the read and write operations in a system, using different models for querying data (read) and updating data (write). It helps optimize performance and maintainability by focusing on specific concerns for each type of operation.
112
Event Sourcing
Event Sourcing is a pattern where the state of an application is determined by a sequence of events rather than the current state. This pattern facilitates auditing, versioning, and easy restoration of past states.
113
Mediator Pattern
The Mediator Pattern centralizes communication between objects, reducing direct dependencies between them. It helps to manage complex communication patterns and promotes loose coupling.
114
Chain of Responsibility Pattern
The Chain of Responsibility Pattern allows multiple objects to handle a request without the sender needing to know which object will process it. This pattern is helpful for decoupling sender and receiver and providing flexibility in handling requests.
115
Flyweight Pattern
The Flyweight Pattern is used to minimize memory usage by sharing common data between multiple objects. It is particularly useful when dealing with large numbers of similar objects.
116
Interpreter Pattern
The Interpreter Pattern defines a grammar for interpreting sentences in a language and provides an interpreter for the language. It is helpful for defining a domain-specific language and implementing parsers.
117
Command Dispatcher Pattern
The Command Dispatcher Pattern centralizes command handling and allows for easy extension of command processing. It helps maintain the separation of concerns and facilitates the addition of new commands.
118
Snapshot Pattern
The Snapshot Pattern captures the current state of an object and allows it to be restored to that state later. It is useful for implementing undo/redo functionality or for restoring objects to specific states.
119
Composite View Pattern
The Composite View Pattern allows hierarchical composition of views, making it easier to work with complex user interfaces. Each view can have child views, forming a tree-like structure.
120
Double Dispatch Pattern
The Double Dispatch Pattern resolves method calls at runtime based on both the receiver and argument types. It is helpful for implementing flexible and extensible object interactions.
121
Specification Pattern
The Specification Pattern encapsulates business rules and conditions as separate objects, making it easier to modify or combine them to create complex queries or validations.
122
Mixin Pattern
The Mixin Pattern allows the dynamic addition of new behavior to objects at runtime. It promotes code reuse and flexibility by enabling classes to inherit from multiple sources.
123
Immutable Pattern
The Immutable Pattern ensures that objects cannot be modified after creation, reducing the risk of unintended side effects and promoting thread safety.
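In Python this can be sketched with a frozen dataclass (the `Point` class is illustrative): attributes cannot be reassigned after construction, and "modification" produces a new value instead.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def shifted(self, dx: float, dy: float) -> "Point":
        # "Modification" constructs a new value; the original is untouched.
        return replace(self, x=self.x + dx, y=self.y + dy)
```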
124
Object Pool Pattern
The Object Pool Pattern manages a pool of reusable objects to avoid the overhead of repeated object creation and destruction. It improves performance and reduces allocation overhead.
125
Resource Acquisition Is Initialization (RAII)
RAII is an idiom rather than a design pattern, but it is essential for managing resources (e.g., memory, files) in C++ and other languages. It ensures that resources are properly initialized and released automatically.
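The closest Python analogue to RAII is the context-manager protocol: acquisition in `__enter__`, guaranteed release in `__exit__`, mirroring C++'s constructor/destructor pairing. A sketch with an invented resource class:

```python
class ManagedResource:
    """Python analogue of RAII: acquire in __enter__, release in __exit__."""
    def __init__(self):
        self.open = False
    def __enter__(self):
        self.open = True   # acquire the resource
        return self
    def __exit__(self, exc_type, exc, tb):
        self.open = False  # released even if the body raised
        return False       # do not swallow exceptions
```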
126
Scalability
Designing software that can handle increased loads and data volume without sacrificing performance and responsiveness.
127
Flexibility and Extensibility
Creating software that can easily accommodate future changes and additions to features or functionality.
128
Modularity
Breaking down complex systems into smaller, manageable modules to promote code organization and maintainability.
129
Maintainability
Designing software that is easy to understand, modify, and enhance over its lifecycle.
130
Performance Optimization
Balancing performance considerations and resource usage while ensuring the software meets its performance requirements.
131
Concurrency and Multithreading
Handling multiple threads and concurrent processes without introducing race conditions and synchronization issues.
132
Security
Addressing potential vulnerabilities and ensuring that the software is protected against security threats and attacks.
133
Interoperability
Ensuring that the software can interact and integrate seamlessly with other systems and technologies.
134
User Experience (UX)
Creating intuitive and user-friendly interfaces to enhance user satisfaction and usability.
135
Error Handling and Fault Tolerance
Devising robust error-handling mechanisms to detect, report, and recover from errors gracefully
136
Data Management
Designing efficient data storage, retrieval, and manipulation mechanisms to handle large datasets effectively.
137
Integration of Third-Party Services
Incorporating external APIs, libraries, and services into the system reliably, and handling their failures, rate limits, and version changes without destabilizing the rest of the application.
138
"Cross-Platform Compatibility"
Ensuring that the software functions correctly on different platforms and devices.
139
Adhering to Standards and Regulations
Complying with industry standards, legal regulations, and best practices related to the software's domain.
140
Code Reusability
Maximizing code reuse to minimize duplication and improve maintainability.
141
Version Control and Collaboration
Effectively managing code changes and facilitating collaboration among team members.
142
Abstraction
Encapsulate implementation details and create abstract interfaces to represent the functionality of modules. Abstract classes, interfaces, and inheritance allow different implementations to share common behavior, enhancing code reuse.
143
Design Patterns
Design patterns provide proven solutions to recurring design problems and often encourage code reuse.
144
Dependency Injection
Use dependency injection to inject dependencies into classes rather than creating them within the class itself. This promotes loose coupling and allows for swapping out implementations without modifying the dependent class.
145
"Separation of Concerns" for Reusability
Separate business logic from presentation and data access. This separation ensures that business logic can be reused without being tied to specific user interfaces or data sources.
146
Libraries and Frameworks for reusability
Leverage existing libraries, frameworks, and open-source components that are designed for reusability. Many popular libraries offer reusable functionalities that can save development time and effort.
147
Generic Programming for Reusability
Use generics and templates to create code that works with various data types. This approach allows algorithms and data structures to be reused with different data types, increasing code versatility.
148
Single Responsibility Principle (SRP) for code reusability
Design classes with a single responsibility, making them more focused and reusable. Classes with clear responsibilities are easier to understand and more likely to be reused in different contexts.
149
APIs for Reusability
Design clear and intuitive APIs for reusable components. Well-designed APIs make it easier for other developers to understand and utilize the code effectively.
150
Singleton Pattern
Singleton Pattern ensures that a class has only one instance and provides a global point of access to it. This pattern can be useful when certain resources or objects need to be shared across the application.
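One common (not thread-safe) Python sketch overrides `__new__` so every construction returns the same instance; the `Config` class is invented for illustration.

```python
class Config:
    _instance = None
    def __new__(cls):
        # Lazily create the single shared instance.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance
```

Because every `Config()` call yields the same object, state set through one reference is visible through all others.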
151
Mixin Inheritance Pattern
The Mixin Inheritance Pattern allows a class to inherit from one or more mixins to acquire the behaviors and properties of those mixins. This pattern enables code to combine functionalities from multiple sources.
152
Horizontal Scaling
Design the system to scale horizontally by adding more servers or instances to distribute the workload. This approach allows you to handle increased traffic by adding more resources.
153
Load Balancing
Implement load balancing to evenly distribute incoming requests across multiple servers or instances. Load balancers help prevent bottlenecks and ensure resources are efficiently utilized.
154
Stateless Architecture
Minimize server-side state as much as possible. Stateless architecture allows requests to be handled independently, making it easier to add or remove servers without affecting the application's overall state.
155
Caching
Use caching mechanisms to store frequently accessed data and reduce the need for repeated computations. Caching improves response times and reduces the load on backend services.
156
Asynchronous Processing
Offload time-consuming tasks to asynchronous queues or background jobs. This approach frees up resources to handle incoming requests more efficiently.
157
Database Optimization
Optimize database queries, use indexing, and denormalize data where appropriate. Proper database design and optimization are crucial for handling increased data loads.
158
Microservices Architecture
Divide the application into smaller, independent services that can be deployed and scaled separately. Microservices allow you to scale specific parts of the application as needed.
159
Content Delivery Networks (CDNs) for scalability
Utilize CDNs to cache and distribute static assets, reducing server load and improving content delivery speed for users globally.
160
Auto-Scaling
Implement auto-scaling mechanisms to automatically add or remove resources based on demand. Cloud platforms like AWS and Azure offer auto-scaling features for easy scalability.
161
Performance Monitoring and Profiling
Continuously monitor the application's performance and identify potential bottlenecks or resource-intensive operations. Profiling helps optimize critical sections of code.
162
Distributed Caching
Use distributed caching systems to share cached data across multiple nodes, enabling better utilization of memory resources.
163
Decoupling Components
Decouple components to promote independent scaling. Services should communicate through well-defined APIs, allowing you to scale individual components as needed.
164
"Design for Failure"
Plan for potential failures and implement redundancy and failover mechanisms to ensure high availability and fault tolerance.
165
Cloud Computing for Scalability
Utilize cloud computing platforms that offer scalable infrastructure, allowing you to adjust resources as demand fluctuates.
166
Hotspots
Specific areas or components of a software system that experience a disproportionately high volume of activity or usage compared to other parts of the system. These hotspots can be in the form of specific functions, classes, database tables, or any other resource that is frequently accessed or heavily utilized. Hotspots can significantly hurt scalability.
167
Flexibility
Refers to adapting to changing requirements, integrating new features, and maintaining the system over time
168
Configuration and Externalization
Externalize configuration settings and parameters from the code. This approach allows changes to be made without recompiling the application, making it more flexible and adaptable.
169
Feature Toggles
Use feature toggles or feature flags to enable or disable specific features at runtime. This technique allows for easy experimentation and enables you to roll back or activate features without redeploying the entire application.
170
Configuration Management for Flexibility
Adopt a robust configuration management process to manage variations between different deployments and environments, making it easier to adapt the software for different use cases.
171
Robust
Refers to the quality of a software system to remain stable, reliable, and perform well under various conditions, even when facing unexpected or erroneous inputs or events
172
Input Sanitization
Robust systems validate and sanitize user inputs and external data to prevent security vulnerabilities and unexpected behavior caused by invalid or malicious inputs.
173
Graceful Degradation
In situations where certain features or components fail or become unavailable, a robust system should degrade gracefully, maintaining core functionality and informing users about the degraded state.
174
Compatibility
Robust software is compatible with various platforms, browsers, and operating systems, providing consistent functionality across different environments.
175
Security
Robust software prioritizes security measures to protect against potential threats and attacks, safeguarding sensitive data and ensuring system integrity.
176
Monitoring and Alerting
Robust software incorporates monitoring and alerting mechanisms to promptly detect anomalies and performance degradations, enabling timely responses and proactive maintenance.
177
Progressive Disclosure for UX design
Present complex information in a layered or progressive manner to prevent overwhelming users with too much information at once. Reveal additional details as needed.
178
User Empowerment for UX design
Allow users to have control over their actions and the product's behavior. Avoid forcing users into unwanted interactions or decisions.
179
Accessibility for UX Design
Design the product to be accessible to users with disabilities. Ensure that all users can interact with and understand the content, regardless of their abilities.
180
Unit Testing
Unit testing involves testing individual components or units of code in isolation to ensure they function as expected. Developers typically write unit tests to verify that each unit works correctly and to detect bugs early in the development process.
181
Integration Testing
Integration testing focuses on testing how different components or modules of the software interact with each other. It aims to identify issues that arise when integrating units into a larger system.
182
Functional Testing
Functional testing verifies that the software's features and functions work as intended and align with the specified requirements. Test cases are designed to validate the software's functionality against the business and user requirements.
183
User Interface (UI) Testing
UI testing checks the user interface's usability, responsiveness, and appearance. It ensures that the interface elements are correctly displayed and that users can interact with them effectively.
184
Regression Testing
Regression testing is performed after making changes or enhancements to the software to ensure that existing functionality remains unaffected. It helps detect unintended side effects that might arise due to code modifications.
185
Performance Testing
Performance testing evaluates the software's responsiveness, scalability, and resource usage under different conditions. It assesses the system's speed, stability, and capacity to handle anticipated workloads.
186
Security Testing
Security testing assesses the software's vulnerability to security threats, such as unauthorized access, data breaches, and other potential risks. It ensures that sensitive data and resources are adequately protected.
187
Usability Testing
Usability testing evaluates the software's user-friendliness and measures how easy it is for users to navigate and interact with the application.
188
Load Testing
Load testing determines how the software performs under expected and peak loads. It helps identify performance bottlenecks and ensures the system can handle concurrent user activities.
189
Stress Testing
Stress testing pushes the software beyond its limits to assess its behavior under extreme conditions. It helps identify the software's breaking point and potential failure modes.
190
Acceptance Testing
Acceptance testing involves validating that the software meets the end-users' requirements and is ready for deployment. It typically includes user acceptance testing (UAT) performed by actual end-users
191
Exploratory Testing
Exploratory testing is an informal and unscripted approach where testers explore the software freely to discover defects and assess the overall user experience.
192
Data Migration
Updating the software may require migrating existing data to a new schema or format. Data migration can be challenging, especially when dealing with large datasets or complex data structures.
193
Rollback Plan
Having a well-defined rollback plan is essential in case the update encounters unexpected issues. Knowing how to revert to the previous version swiftly and safely is crucial.
194
Interoperability
Refers to the ability of different software systems, applications, or components to communicate, exchange data, and work together seamlessly. It is a critical aspect of software development, especially in today's interconnected and distributed computing environments
195
Communication Protocols for interoperability
Interoperability requires agreement on communication protocols that define how different systems exchange data and interact. Common communication protocols include HTTP, REST, SOAP, and MQTT
196
Data Formats for interoperability
Systems must agree on data formats to ensure that information can be accurately interpreted and processed by both the sender and receiver. Common data formats include JSON, XML, and CSV
197
Sources of software standards
ISO standards, W3C recommendations, and OASIS standards
198
Middleware
Middleware technologies act as intermediaries between disparate systems, translating and facilitating communication between them to achieve interoperability.
199
Documenting your exceptions
Document every exception that code you write can raise, so that anyone using your code can plan for it in their own exception handling.
200
Exception Handler
A section of program code that is executed when a particular exception occurs
201
Exception
An unusual event, detectable by software or hardware, that requires special processing; informally, an anomaly that disrupts the normal flow of your program.
202
Six options for handling exceptions
1. Assume errors will not occur (not an option) 2. Print a descriptive error message 3. Return an unusual value to indicate an error has occurred 4. Alter a status variable's value 5. Use assertions to block further execution 6. Use exception handlers
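Option 6 above can be sketched in Python (the `parse_age` function is invented for illustration): an exception handler performs the special processing for the anomalous case instead of letting it propagate unexplained.

```python
def parse_age(text: str) -> int:
    try:
        age = int(text)
    except ValueError:
        # Exception handler: special processing for the anomalous case
        # (option 6 from the list above).
        raise ValueError(f"not a valid age: {text!r}")
    if age < 0:
        raise ValueError("age must be non-negative")
    return age
```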
203
Four types of errors in program output
1. A user may accidentally or deliberately (hackers) enter incorrect inputs 2. Hardware may not have the resources you need to execute what you need to execute (disk drives and random-access memory have size limits) 3. Hardware devices may fail or become inaccessible 4. Software components may contain defects (bugs)
204
Robustness
The ability of a system to recover following an error
205
Robustness v. Fault-Tolerance
Robustness Characteristics: A robust system can handle deviations from ideal or expected conditions without catastrophic failures or unacceptable degradation in performance. It is resilient against perturbations, uncertainties, or variations in input, environment, or operating conditions. Robustness helps a system to gracefully degrade its performance, recover, and continue functioning under less-than-ideal circumstances.
Fault Tolerance Characteristics: A fault-tolerant system can detect, isolate, and recover from faults to maintain essential functionality and prevent system-wide failures. It involves designing redundancy, error detection mechanisms, and fault recovery strategies to ensure uninterrupted operation in the face of faults. Fault tolerance aims to minimize downtime and data loss, ensuring the system remains available and operational even during failures.
206
Features of Good Code
It works: delivers required functionality, compatibility, reliability, and security.
It can be modified without excessive time or effort.
It is reusable.
It is complete ON TIME and WITHIN BUDGET: this is what makes the difference between successful software companies and unsuccessful software companies.
207
Exponential cost growth of debugging software
The longer you leave a bug in your software, the more expensive it will be to correct it
208
Cost breakdown in software engineering
Development: 25% Maintenance: 75% Main takeaway: maintenance is three times as expensive as development, so you need to make sure that your code is as maintainable as possible
209
Expected Error rate when writing code
You can expect about one error for every ten lines of code. Example: If you have 200 lines of code, you're looking at roughly 20 errors that you will need to look out for in debugging
210
Six phases of Software Development
1. Requirements derivation phase 2. Requirements Specification phase 3. Design phase 4. Implementation phase 5. Testing and Verification phase 6. Postdelivery Maintenance phase
211
Requirements DERIVATION Phase
First phase in the software development process. These requirements typically come from the customer, in the form of a prototype and/or a high-level description of the product.
212
Requirements SPECIFICATION Phase
Comes immediately after the Requirements Derivation Phase. Software engineers develop a detailed description of functional requirements and non-functional requirements (constraints).
213
Functional Software Requirements
Describe the specific functions or features that the software system must provide to meet the needs of its users. - Functional requirements are typically expressed as specific actions or operations that the software should perform. - They are often described in use cases, user stories, or process flow diagrams. - Functional requirements are directly related to the system's functionality and how it interacts with users, other systems, or hardware.
214
Non-functional Software requirements
Define the quality attributes or constraints that the system must satisfy. They focus on how the system should perform its functions rather than what functions it should perform. - Non-functional requirements are concerned with aspects like performance, reliability, usability, security, maintainability, and scalability. - They define the overall behavior and attributes of the system, impacting its effectiveness, efficiency, and user satisfaction. - Non-functional requirements are often harder to measure and quantify compared to functional requirements.
215
Design Phase
Architectural design (high level design) and detailed design (low-level design) - Often done using UML
216
Implementation Phase
Translation of the design into program code
217
Testing and Verification Phase
Detecting and fixing errors and demonstrating the correctness of the program
218
Postdelivery Maintenance Phase
Correct defects reported by users, and modify or enhance functionality. This phase alone is three times more expensive than all the other phases combined.
219
Software Process
A standard sequence of steps for the development or maintenance of software
220
Waterfall Process
- Development activities are conducted in the order previously presented
- Each activity produces a document or product that is the input for the next activity
221
Agile Process
A family of software development processes that emphasize - Individuals and interactions > processes and tools -Working software > comprehensive documentation -Customer collaboration > contract negotiation -Responding to change > following a plan
222
SCRUM
1. Continue breaking down the problem into smaller problems until you have a list of individual tasks (the set of tasks is called a story) 2. Group tasks into two week time segments called "sprints"
223
Program Specification Process
1. Start with a problem statement 2. Ask questions (inputs/outputs) 3. Describe the interaction between users and the software (use cases) *The specification should be detailed enough that a programmer not familiar with the project can follow it to produce the product
224
Three elements of Program Design
1. Abstraction 2. Information Hiding 3. Step-Wise Refinement
225
Abstraction
Model of a complex system including only key details
226
Information Hiding
Hiding data and function details to limit access to implementation details
227
Step-Wise Refinement
Iterative, incremental approach to problem solving. Comes in two varieties: 1. Top-Down 2. Bottom-Up
228
Top-Down Program Design
Basically, you are deferring the details as long as you possibly can. You start with the most abstract elements and behaviors and decompose them into more and more detail as you go
229
Bottom-Up Program Design
Basically, you are starting with the details and working your way up to the big-picture design (usually so that you can enjoy the advantages of modular programming). EXAMPLE: If you know that your software project will involve a handful of algorithms, and that many of those algorithms will share individual steps but in a different order, then you can write the code for those individual steps first and then combine them into the larger algorithms
230
Classical Design
-Focuses on actions (functions or operations) to be performed -Functional decomposition
231
Functional Decomposition
Functional decomposition is a method used to break down a complex system into smaller, more manageable functional components. It focuses on identifying the major functions or tasks that the system needs to perform and then decomposing these functions into smaller sub-functions or sub-tasks. This decomposition helps in understanding the functionality of the system and the relationships between different functions.
Key Points:
- Begins with identifying the major functions or tasks of the system.
- Decomposes functions into smaller sub-functions or sub-tasks.
- Focuses on understanding the functions and their relationships.
232
Top-Down Design v. Functional Decomposition
Key Differences:
Focus and Starting Point: Top-down design starts with a high-level understanding and gradually decomposes the system into smaller components, focusing on architecture and major features. Functional decomposition starts by identifying major functions or tasks and breaks them down into smaller functional components, focusing on understanding the functions and their relationships.
Level of Detail: Top-down design emphasizes defining the overall architecture and major features before delving into detailed components. Functional decomposition focuses on breaking down functions into smaller sub-functions, providing detailed views of the system's functionality.
Purpose: Top-down design is more concerned with the design and architectural aspects of the system. Functional decomposition is focused on understanding the functionality and tasks that the system needs to perform.
233
Metric-Based Testing
Measurable factors used to evaluate the thoroughness of testing. You need to be able to judge how much, and how complete, the testing you have actually done is. You also need to be able to report this to your boss or customer to give confidence in your work.
234
Extreme programming
A software development methodology known for its iterative and incremental practices, promoting flexibility, collaboration, and adaptability.
235
Continuous Integration (CI)
The practice of automatically integrating code changes into a shared repository multiple times a day. Process: 1. Developers regularly merge their code changes into a shared version control repository (e.g., Git). 2. Automated build and test processes are triggered whenever new code changes are integrated. 3. This ensures early detection of integration issues and helps maintain a consistent and stable codebase.
236
Continuous Deployment
Continuous Deployment is the practice of automatically deploying code changes to production or staging environments after successful integration and testing. Process: 1. After successful integration and testing (CI), the code is automatically deployed to production or staging environments. 2. Automated deployment pipelines ensure that the software is packaged, deployed, and configured consistently across different environments. 3. This enables faster and more reliable releases to end-users.
237
Four Benefits of CI/CD
1. Accelerated Development: CI/CD automates the build, test, and deployment processes, allowing for faster development cycles and quicker feedback on code changes. 2. Enhanced Quality: Automated testing at each integration step helps catch and address issues early in the development process, improving software quality. 3. Consistency and Reliability: Automated deployment ensures a consistent and reliable process, reducing human error and ensuring identical deployments across different environments. 4. Rapid Feedback Loop: Developers receive immediate feedback on the quality and functionality of their code, enabling them to iterate and make improvements quickly.
238
CI/CD Pipeline
Represents the automated workflow from integrating code changes to deploying the software. It typically includes stages such as code build, unit testing, integration testing, packaging, and deployment.
239
Code coverage
Refers to the measure of the extent to which the source code of a software system has been tested. It quantifies the proportion of the code that has been executed during testing, providing insights into the thoroughness and effectiveness of the testing process. Four types of code coverage: 1. Statement Coverage 2. Branch coverage 3. Function Coverage 4. Path Coverage
240
Statement Coverage
Measures the percentage of individual statements in the code that have been executed at least once during testing.
241
Branch coverage
Measures the percentage of decision branches (e.g., if-else, switch) that have been traversed during testing.
242
Function Coverage
Measures the percentage of functions or methods that have been called during testing.
243
Path coverage
Measures the percentage of unique paths through the code that have been executed.
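The difference between statement and branch coverage can be made concrete with a toy tracer built on Python's sys.settrace; the classify() function and the tracer are invented for the example and are not how real coverage tools are implemented:

```python
import sys

def classify(n):
    if n >= 0:
        return "non-negative"
    return "negative"

executed_lines = set()

def tracer(frame, event, arg):
    # Record each source line executed inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)            # exercises only the n >= 0 branch
sys.settrace(None)

# classify() has three executable statements, but only two ran:
# statement coverage is 2/3, and branch coverage is 1/2, because
# the n < 0 outcome of the if was never taken.
print(len(executed_lines))  # prints 2
```

Adding a second test input such as classify(-1) would raise both statement and branch coverage to 100%, which is exactly the kind of gap these metrics are designed to expose.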
244
Three Benefits of Code Coverage
1. Identifies areas of the code that have not been tested, enabling the creation of additional test cases to increase coverage in those areas. 2. Assists in assessing the thoroughness of the testing process and determining the readiness of the software for release. 3. Guides developers and testers to improve the test suite by targeting untested or under-tested parts of the code.
245
15 ways to improve code reliability.
1. Read and obey the standards 2. Use consistent code formatting and style 3. Embrace modular programming (break down the code into smaller, manageable modules or functions) 4. Use version control systems (like Git) 5. Do regular reviews: have your team try to 'break' your code 6. Prioritize error handling (implement robust error handling mechanisms to gracefully handle exceptions, errors, and edge cases) 7. Embrace the "Don't Repeat Yourself" (DRY) principle (reuse common functionality through functions, modules, or libraries to maintain consistency and reduce the risk of errors) 8. Perform code refactoring (refactoring helps in optimizing performance, reducing technical debt, and eliminating potential sources of errors) 9. Manage your dependencies carefully 10. Perform code analysis (utilize static code analysis tools to identify potential bugs, code smells, and adherence to coding standards) 11. Document your code 12. Optimize CAREFULLY (premature optimization can introduce bugs and make the code less reliable) 13. Perform regression testing: after making changes or updates to the codebase, re-run all relevant tests to ensure that the modifications haven't inadvertently introduced new bugs or affected existing functionality 14. Plan for scalability and load testing (design your code to handle increased loads gracefully; conduct load tests to simulate heavy usage and identify potential bottlenecks or failure points) 15. Stay up to date with the latest best practices, tools, and technologies
246
Data Dictionary
The Data Dictionary defines the structure, format, and meaning of data used within the system, ensuring consistency and understanding of data elements across the project.
247
Human-System Interface Design Document
The Human-System Interface Design Document provides guidelines and specifications for designing the user interface, ensuring it is intuitive, efficient, and aligns with user needs.
248
Operational Maintenance Plan
The Operational Maintenance Plan details how routine maintenance, updates, and patches will be applied to the system during its operational phase to ensure optimal performance and security.
249
System Configuration Index (SCI)
The System Configuration Index is a detailed index that correlates configuration items to specific versions or baselines, aiding in tracking changes and updates throughout the system development life cycle.
250
Quality Assurance Plan
The Quality Assurance Plan outlines the processes, procedures, and standards that will be employed to ensure the quality and correctness of the system throughout its development and maintenance.
251
Lessons Learned Report
The Lessons Learned Report summarizes the experiences and insights gained from the software development process. It provides valuable feedback for future projects and helps improve processes.
252
Configuration Management Database
The Configuration Management Database is a repository that tracks the configuration items, their versions, and relationships within the system. It supports configuration management by providing a centralized source of information.
253
Software Lifecycle Plan
The Software Life Cycle Plan outlines the phases, activities, and tasks involved in the system's entire life cycle, from concept to retirement. It helps manage the progression of the project from initiation to completion.
254
RAM Analysis Report
The RAM Analysis Report assesses the system's reliability, availability, and maintainability characteristics. It helps identify potential reliability issues and guides decisions to improve system performance and minimize downtime.
255
System Security Plan
The System Security Plan details the security measures and protocols implemented to protect the system from unauthorized access, data breaches, and cyber threats.
256
User Acceptance Test (UAT) Plan
The User Acceptance Test Plan outlines the procedures for testing the system's functionality from the user's perspective. It involves end-users validating that the system meets their needs and requirements.
257
Configuration Management Plan
The Configuration Management Plan defines how changes to the system's components, documents, and artifacts will be managed throughout the development process. It ensures that changes are controlled, documented, and properly communicated.
258
V&V Plan
The Verification and Validation (V&V) Plan outlines the strategy and approach for testing and validating the system. It defines the testing procedures, methodologies, and acceptance criteria to ensure that the system meets its requirements and functions as intended.
259
Scope
Scope refers to the defined boundaries, features, functionalities, and deliverables of a software development project. It outlines the extent and depth of what will be included or excluded from the project. The scope essentially defines what the project will achieve and what it will not. Includes Six Parameters: 1) Features and Functionalities 2) Requirements Inclusions and Exclusions 3) Constraints and Assumptions 4) Data Inclusions and Exclusions 5) Interfaces 6) Quality Attributes
260
Role of Quality Attributes in software engineering
Describes the non-functional requirements related to performance, usability, reliability, security, and other quality aspects that need to be addressed in the project.
261
Role of Data Interfaces in Software engineering
Specifies the external systems, devices, or applications that the software will interact with or connect to.
262
Role of Data Inclusions and Exclusions in Software engineering
Defines the data that will be used or manipulated by the software, including data types, sources, and data-related functionalities.
263
Role of Constraints and Assumptions in Software engineering
Identifies the limitations or restrictions that will affect the project, such as time, budget, technology constraints, and any assumptions made during project planning.
264
Role of Requirements Inclusions and Exclusions in Software Engineering
Clearly outlines what is part of the project's requirements and what is not. This helps manage stakeholder expectations and prevent scope creep.
265
Role of "Features and Functionalities" in software engineering
Describes the specific capabilities and functions that the software will possess, typically based on the requirements gathered from stakeholders.
266
Defensive Code
Refers to a programming approach and practice that focuses on cautious error handling. Helps ensure that a program behaves robustly and gracefully even in the face of unexpected situations
267
Nine key elements of Defensive Code
1) Error Handling: Anticipating, detecting, and managing errors so that the program responds to exceptional conditions in a controlled and predictable manner. 2) Input Validation: Validating and sanitizing input data from users or external sources to prevent potential security vulnerabilities, buffer overflows, or other harmful effects that could result from malicious or incorrect input. 3) Boundary Checks: Verifying and validating boundaries and constraints for data, arrays, and other data structures to prevent issues like buffer overflows or out-of-bounds access, improving program robustness. 4) Resource Management: Properly managing resources such as memory, file handles, database connections, or network sockets by releasing them when they are no longer needed to prevent memory leaks or resource exhaustion. 5) Fail Safely: Ensuring that when an unexpected error occurs, the system fails in a safe and predictable manner, avoiding data corruption, crashes, or adverse effects on other parts of the system. 6) Testing and Validation: Rigorous testing of the code to identify potential weaknesses, vulnerabilities, or edge cases that might cause errors. Automated testing and manual testing are essential components of defensive coding. 7) Code Modularity: Encouraging modularity and breaking down code into manageable, well-encapsulated units. This improves maintainability and allows for easier identification and isolation of errors. 8) Documentation and Comments: Providing comprehensive and clear documentation, along with meaningful comments in the code, to aid in understanding the purpose and behavior of the code. This helps other developers and future maintainers handle the code effectively. 9) Robust Algorithms: Selecting algorithms and data structures that are robust and efficient, considering worst-case scenarios and handling them gracefully.
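Several of these elements (error handling, resource management via a context manager, and failing safely to known-good defaults) can be combined in one small Python sketch; the load_settings() function, its path, and the default values are all hypothetical, invented for illustration:

```python
import json
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("defensive-demo")

DEFAULT_SETTINGS = {"retries": 3, "timeout_s": 30}  # known-good fallback

def load_settings(path):
    """Load settings defensively: validate input, manage the file resource,
    and fail safely to defaults instead of crashing."""
    if not isinstance(path, str) or not path:
        log.warning("Invalid settings path %r; using defaults", path)
        return dict(DEFAULT_SETTINGS)
    try:
        # "with" guarantees the file handle is released (resource management).
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        # Fail safely: log the problem and fall back to defaults.
        log.warning("Could not load %s (%s); using defaults", path, exc)
        return dict(DEFAULT_SETTINGS)
    if not isinstance(data, dict):
        log.warning("Settings in %s are not an object; using defaults", path)
        return dict(DEFAULT_SETTINGS)
    return {**DEFAULT_SETTINGS, **data}

print(load_settings("/no/such/file.json")["retries"])  # prints 3
```

A missing file, malformed JSON, or a bad path all degrade to the same predictable default behavior rather than an unhandled crash.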
268
12 Elements of effective error handling
1) Detect when and how errors occur 2) Use meaningful error messages 3) Categorize errors based on their severity, origin, or impact, and tailor error handling to the error category 4) Use error codes, error objects, or enums to communicate error states 5) Use exception handling mechanisms (try-catch) to gracefully handle errors and exceptional conditions 6) Implement logging mechanisms to record errors (with contextual information such as timestamp, user, request details, and stack traces) 7) Implement graceful degradation strategies to ensure that the system remains functional even when errors occur 8) Implement retry mechanisms for transient errors, allowing the system to automatically retry the operation after a delay or a specified number of attempts 9) Write unit tests to cover error scenarios, ensuring that error paths are tested to validate correct error handling and recovery mechanisms 10) Continuously monitor the software in production to detect errors and performance issues in real time 11) Establish a feedback loop between users and developers to gather insights on encountered errors and prioritize improvements based on user feedback 12) Define and adhere to error handling standards and guidelines within the development team or organization to ensure consistent and effective error handling practices.
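The exception handling, logging, and retry ideas from this list can be sketched together in Python; TransientError, call_with_retry(), and the flaky operation are all invented for the example:

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("retry-demo")

class TransientError(Exception):
    """Error category that is expected to succeed on retry."""

def call_with_retry(operation, attempts=3, delay_s=0.01):
    """Retry a transient failure a bounded number of times, logging each one."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except TransientError as exc:
            log.warning("Attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # out of retries: propagate to the caller
            time.sleep(delay_s)

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("temporary outage")
    return "ok"

print(call_with_retry(flaky))  # prints ok
```

The category (a dedicated exception class), the structured handling (try-except), the log record with context, and the bounded retry each map to an item on the card.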
269
'Warnings' in software development
Cautionary messages about potential issues. The developer is notified but the program is allowed to proceed
270
'Errors' in software development
Critical issues that prevent normal program execution and require immediate attention and resolution.
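In Python, the warning-versus-error distinction on these two cards can be sketched as follows; mean() is a made-up example function:

```python
import warnings

def mean(values):
    if not values:
        # Warning: the developer is notified, but execution proceeds.
        warnings.warn("mean() called with no values; returning 0.0")
        return 0.0
    # A TypeError here (e.g. mean(["a"])) would be an error: execution
    # stops unless some caller explicitly handles it.
    return sum(values) / len(values)

print(mean([1, 2, 3]))  # prints 2.0
print(mean([]))         # emits a UserWarning, then still prints 0.0
```

The warning surfaces a potential issue without interrupting the program; the error (an uncaught exception) halts it until resolved.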
271
'Exceptions' as a problem
Refers to an unexpected or exceptional condition that occurs during the execution of a program. This could be a divide-by-zero operation, an attempt to access an invalid memory location, or any situation that deviates from the normal flow of the program.
272
'Exceptions' as a mechanism
In some (but not all) programming languages, an "exception" is also a programming construct or mechanism provided by the language to deal with these exceptional conditions. It allows developers to write code that handles these unexpected conditions in a structured way, preventing the program from crashing and enabling recovery or appropriate actions.
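In Python, that mechanism is the try/except/finally construct; a minimal sketch using the divide-by-zero condition mentioned above:

```python
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Structured handling of the exceptional condition: the program
        # recovers with a sentinel value instead of crashing.
        return None
    finally:
        # Runs whether or not the exception occurred (e.g. for cleanup).
        pass

print(safe_divide(10, 2))  # prints 5.0
print(safe_divide(10, 0))  # prints None
```

Without the except clause, the second call would terminate the program with an unhandled ZeroDivisionError.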
273
'Exceptions' in software engineering
An "exception" can refer to an unexpected condition or problem that occurs during program execution. An "exception" can also refer to the mechanism provided by programming languages to handle and recover from these unexpected conditions in a structured and controlled manner. The context in which we use the term "exception" depends on whether we're discussing the problem itself or how the programming language allows us to handle and recover from such problems.
274
Error Handling
Error handling is the process of anticipating, detecting, and managing errors that may occur during the execution of a software program. The primary goal is to ensure that the software can respond to these exceptional conditions in a controlled and predictable manner (preventing undesirable consequences)
275
Things software engineers want to avoid
1) Crashes and application failures 2) Compromised data CIA (data confidentiality, data integrity, and data availability) 3) Corruption of data
276
Input Validation
Involves examining and verifying data provided by a user, a system, or another application before it is processed. The objective of input validation is to ensure that the data meets specified criteria, conforms to expected formats, and is safe for further use within the software system
277
Mechanisms for Input Validation
1) Implementing validation both on the client side AND the server side 2) Detecting and preventing injection attacks, such as SQL injection, XML injection, or command injection 3) Data Sanitization: Cleaning and removing any potentially harmful or unnecessary characters, scripts, or tags from the input to prevent cross-site scripting (XSS) attacks or SQL injection attacks 4) "Whitelisting": allowing only approved characters or patterns 5) "Blacklisting": disallowing specific characters or patterns known to be harmful or potentially problematic 6) "Sanity Checking" the user input: -Correct format? -Correct datatype? -Correct length/size? -Within expected range of values?
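The whitelisting and sanity-checking mechanisms can be sketched in Python; the username rule below is an invented example, not a standard:

```python
import re

# Whitelist pattern: only letters, digits, and underscore, 3-16 characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def validate_username(value):
    """Return (ok, reason), applying the sanity checks from the card."""
    if not isinstance(value, str):                 # correct datatype?
        return False, "not a string"
    if not (3 <= len(value) <= 16):                # correct length/size?
        return False, "length out of range"
    if not USERNAME_RE.fullmatch(value):           # correct format (whitelist)?
        return False, "contains disallowed characters"
    return True, "ok"

print(validate_username("alice_42"))       # (True, 'ok')
print(validate_username("x"))              # (False, 'length out of range')
print(validate_username("bob'; DROP --"))  # (False, 'contains disallowed characters')
```

Note that the quote and space in the last input (typical of SQL injection payloads) are rejected simply because they are not on the whitelist; nothing harmful has to be enumerated.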
278
Boundary Checks
Verifying and validating boundaries and constraints for data, arrays, and other data structures to prevent issues like buffer overflows or out-of-bounds access, improving program robustness.
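A minimal Python sketch (read_sample() is hypothetical). Python already raises IndexError for a positive overrun, but it silently wraps negative indices to the end of the list; an explicit boundary check makes the contract visible and rejects both, which is the discipline languages like C require you to apply by hand:

```python
def read_sample(buffer, index):
    # Explicit bounds check: rejects out-of-range AND negative indices
    # instead of relying on language-specific indexing behavior.
    if not 0 <= index < len(buffer):
        raise IndexError(f"index {index} outside 0..{len(buffer) - 1}")
    return buffer[index]

samples = [10, 20, 30]
print(read_sample(samples, 2))   # prints 30
try:
    read_sample(samples, -1)     # negative index: rejected, not wrapped
except IndexError as exc:
    print(exc)                   # prints: index -1 outside 0..2
```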
279
DevOps
DevOps is a cultural and organizational approach that aims to bridge the gap between development (Dev) and operations (Ops) teams. It promotes collaboration, communication, and integration between these traditionally separate teams to streamline the software development lifecycle, improve efficiency, and accelerate delivery. Focus: DevOps primarily focuses on integrating development and operations, enhancing collaboration, automation, and continuous delivery practices to achieve faster software releases. Goals: Goals of DevOps include faster delivery, more frequent releases, improved quality, and a more efficient and collaborative development process. Security Integration: While security is considered in DevOps, it's not the primary focus. Security measures are integrated as part of the development process but may not be as comprehensive or deeply ingrained compared to DevSecOps.
280
DevSecOps
DevSecOps extends the DevOps philosophy by incorporating security principles and practices directly into the software development lifecycle. It emphasizes "security as code" and shifts security left, integrating it from the very beginning of the development process. Focus: DevSecOps places equal importance on integrating security into the DevOps model, ensuring security measures are integrated early and throughout the development lifecycle. Goals: Goals of DevSecOps include not only faster delivery and improved quality but also enhanced security, reduced vulnerabilities, and more proactive threat detection and mitigation. Security Integration: Security is a core and integral component of the development process. Automated security checks, testing, and secure coding practices are incorporated from the outset.
281
DevSecCompliance
DevSecCompliance expands on DevSecOps by emphasizing compliance with regulatory requirements and industry standards. It ensures that software development not only incorporates security but also adheres to relevant compliance standards. Focus: DevSecCompliance adds a layer of compliance-focused practices to DevSecOps, ensuring that security measures meet regulatory and compliance requirements. Goals: Goals include aligning development and security practices with compliance mandates, reducing legal and regulatory risks, and ensuring that the software complies with industry standards. Security Integration: Security is tightly integrated, addressing both security concerns and compliance requirements throughout the development lifecycle.
282
"Edit time" in software engineering
Phase in which the programmer is writing and editing the source code, designing the architecture, and planning the software.
283
"Build Time" in software engineering
Phase in which the programmer is compiling the source code into machine code or an intermediate form (e.g., bytecode in languages like Java). Linking various code modules, libraries, and dependencies to create the final executable or deployable unit.
284
Eight Steps of "Edit Time" In Software Engineering
1) Requirement Analysis: Gather and analyze software requirements to understand what the software needs to achieve. 2) Specification and Planning: Define specifications based on requirements and plan the development process, including milestones, timelines, and resources. 3) System Design: Design the overall system architecture, including high-level design, module interactions, and technology choices. 4) Detailed Design: Create detailed designs for each component, defining algorithms, data structures, interfaces, and interactions. 5) Coding: Write the actual source code based on the designs, following coding standards and best practices. 6) Unit Testing: Develop and run tests to verify that individual units (e.g., functions, methods) of code function correctly. 7) Integration Testing: Test the integration of multiple units or components to ensure they work together as intended. 8) Code Review: Conduct peer reviews to ensure code quality, adherence to coding standards, and identify potential issues.
285
Four steps of the Build Time process
1) Setup and Configuration: Prepare the development environment, configure build tools, and set up necessary dependencies. 2) Compilation: Use compilers or interpreters to translate the source code into machine-readable instructions (e.g., machine code, bytecode). 3) Linking: Link various compiled modules, libraries, and dependencies to create the final executable or deployable unit. 4) Artifact Generation: Generate deployable artifacts, which could be executable files, libraries, or packages, based on the linked components.
286
Six steps of the Runtime process
1) Loading: Load the compiled code and required data into memory to prepare for execution. 2) Initialization: Initialize necessary variables, data structures, and components before the main execution begins. 3) Execution: Execute the software, allowing users to interact with the application and perform desired tasks. 4) Error Handling: Implement mechanisms to handle errors and unexpected situations that may occur during runtime. 5) Performance Monitoring: Monitor the software's performance, resource usage, and responsiveness during execution. 6) Shutdown and Cleanup: Gracefully shut down the application, release resources, and perform necessary cleanup tasks before exiting.
287
Dynamic Application Security Testing (DAST)
- DAST involves testing an application during RUNTIME (while it's running). It interacts with the live application, sending various requests and inputs to identify vulnerabilities and security flaws and simulate real-world attacks -It can mimic SQL injection, cross-site scripting (XSS), cross-site request forgery (CSRF), and more. -It dynamically analyzes the application's responses to the simulated attacks, looking for signs of vulnerabilities. - The vulnerabilities and security issues identified during DAST provide valuable feedback to developers, enabling them to fix the detected problems and improve the application's security.
288
Continuous Integration (CI)
A software development practice that involves the frequent and automated integration of code changes into a shared repository, typically several times a day. Each integration triggers an automated build and a suite of automated tests to ensure that the code changes don't break the application. The primary goal of continuous integration is to enable early detection of integration issues and to ensure that the software remains functional and maintainable throughout the development process.
289
Continuous Integration/Continuous Deployment (CI/CD) pipeline
An automated process that facilitates the integration of code changes, automated testing, and continuous deployment of software to production. It's a set of principles and practices aimed at automating and streamlining the software delivery process, from development to production, to achieve faster, more reliable releases.
290
Continuous Integration/Continuous Deployment (CI/CD) pipelines stages
1) Code Integration (Continuous Integration): Developers integrate their code changes into a shared version control repository, triggering an automated build process. 2) Automated Build and Testing: The CI server automatically builds the code, compiles it, and runs automated tests to ensure that the changes didn't introduce any errors or regressions. 3) Artifact Generation: The CI process generates deployable artifacts, which could be executable files, binaries, container images, or any other deployable package. 4) Deployment to Staging/Testing Environment (Continuous Deployment): The generated artifacts are deployed to a staging or testing environment, allowing further testing in an environment that closely resembles the production setup. 5) Automated Testing (Continuous Testing): The application undergoes various automated tests, including unit tests, integration tests, performance tests, security tests, and other types of checks to validate its functionality and quality. 6) Deployment to Production (Continuous Deployment): If all tests pass successfully and the code is deemed ready, the CI/CD pipeline automatically deploys the artifacts to the production environment. 7) Monitoring and Feedback: Once in production, the application is closely monitored to ensure it performs as expected. Monitoring data and user feedback are collected to guide future development and improvements.
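The stages above can be sketched as a toy pipeline runner in Python. All stage functions here are stand-ins that just print; a real pipeline would be declared in a CI system's configuration, not code like this. The key property shown is that a failing stage halts everything downstream, so nothing reaches production unless every earlier stage passed:

```python
# Hypothetical stage functions: each returns normally on success and
# raises on failure.
def integrate():
    print("merged code changes into the shared repository")

def build_and_test():
    print("compiled code and ran automated unit tests")

def deploy_staging():
    print("deployed artifacts to the staging environment")

def continuous_testing():
    print("ran integration, performance, and security tests")

def deploy_production():
    print("deployed artifacts to production")

PIPELINE = [integrate, build_and_test, deploy_staging,
            continuous_testing, deploy_production]

def run_pipeline(stages):
    for stage in stages:
        try:
            stage()
        except Exception as exc:
            # Stop the pipeline: no stage past the failure runs.
            print(f"pipeline stopped at {stage.__name__}: {exc}")
            return False
    return True

run_pipeline(PIPELINE)  # runs all five stages in order
```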
291
Key benefits of a CI/CD pipeline
1) Faster Delivery: Automation reduces manual intervention, allowing for quicker and more frequent releases. 2) Early Bug Detection: Automated tests catch bugs early in the development cycle, reducing the cost of fixing issues. 3) Consistency and Reliability: Automation ensures a consistent and reliable deployment process every time. 4) Increased Collaboration: Teams collaborate more effectively since everyone works on a shared and consistent integration process. 5) Rapid Feedback: Developers receive immediate feedback on their code changes, encouraging continuous improvement. 6) Efficiency and Cost Reduction: Automation and streamlined processes reduce manual effort and operational costs.
292
Static Application Security Testing (SAST)
Testing from the "inside out": looking at the source code and predicting the output or behavior. It's an analysis of computer software performed without actually executing anything; you are literally just looking at the source code. -Examines the code files for security vulnerabilities and coding errors. The analysis includes reviewing the syntax, structure, and logic of the code. -Uses predefined security patterns, rules, and signatures to identify known security issues. These patterns encompass common coding mistakes, insecure coding practices, and vulnerabilities like SQL injection, cross-site scripting, etc. -Analyzes the paths that the application's control flow can take, searching for vulnerabilities related to incorrect or insecure control flow. -Traces the flow of data within the application to identify potential security risks related to how sensitive information is handled and manipulated.
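A toy illustration of the idea, using Python's ast module to flag risky calls without ever executing the code under analysis. The rule set is deliberately simplistic and invented for the sketch; real SAST tools apply far richer patterns plus control-flow and data-flow analysis:

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # simplistic rule set for the sketch

def scan_source(source):
    """Flag calls to risky builtins by inspecting the parse tree only."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

code = "x = input()\nresult = eval(x)\n"
print(scan_source(code))  # prints [(2, 'eval')]
```

Note that the eval of user input is detected purely from the source text; the scanned program never runs.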
293
Dynamic Application Security Testing (DAST)
Testing from the "outside in": looking at the outputs and behavior and drawing conclusions about the source code. It's an analysis of computer software that tests the application by executing it, with NO knowledge of the code/technologies/frameworks. -The DAST tool navigates through the application, simulating how a user would interact with it, injecting various inputs to test for vulnerabilities like SQL injection, brute force attacks, cross-site scripting (XSS), and other security weaknesses. -It analyzes the application's responses, looking for signs of vulnerabilities based on deviations from expected behavior.
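A toy black-box illustration in Python: the probe only sends inputs and observes responses, never reading the endpoint's source. Both the endpoint and the payloads are invented for the sketch; a real DAST tool would send HTTP requests to a running application:

```python
# Hypothetical system under test, treated as a black box.
def search_endpoint(query):
    # Naive "application" that chokes on a quote character, the kind of
    # bug a DAST probe surfaces without ever seeing this source.
    if "'" in query:
        raise RuntimeError("database syntax error")
    return f"results for {query}"

PROBES = ["widgets", "' OR 1=1 --", "<script>alert(1)</script>"]

def probe(endpoint, inputs):
    """Send payloads and record deviations from expected behavior."""
    findings = []
    for payload in inputs:
        try:
            endpoint(payload)
        except Exception as exc:
            # A crash or anomalous response marks a potential vulnerability.
            findings.append((payload, str(exc)))
    return findings

print(probe(search_endpoint, PROBES))  # flags the SQL-injection-style payload
```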
294
Docker Image
A Docker image is an immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run. Key components of a Docker image: Filesystem Snapshot: An image is essentially a filesystem snapshot, capturing all the files and configurations needed to run an application. Metadata and Configuration: It includes metadata and configuration settings specifying how the application should run, what processes to start, which ports to expose, and other runtime settings. Layers: Docker images are composed of layers. Each layer represents a specific change or addition to the filesystem. Layers are additive and can be shared between multiple images, improving storage and download efficiency. Due to their read-only quality, these images are sometimes referred to as snapshots. They represent an application and its virtual environment at a specific point in time. This consistency is one of the great features of Docker. It allows developers to test and experiment with software in stable, uniform conditions. Since images are, in a way, just templates, you cannot start or run them. What you can do is use that template as a base to build a container.
295
Benefits of Using a Docker Image
Portability: Docker images are portable and can be run on any system that supports Docker, ensuring consistency across different environments. Reproducibility: Docker images ensure that the application runs in the same way regardless of where it's deployed. Efficiency: Images are built using layers, allowing for efficient storage, caching, and distribution of common layers. Isolation: Containers created from images are isolated from the host system and other containers, enhancing security and minimizing conflicts.
296
Docker Container v. Docker Image
Definition and Purpose: A Docker image is a lightweight, standalone, and executable software package that includes the application code, runtime, system libraries, dependencies, and configurations. It's a static and read-only snapshot of an application and its environment at a specific point in time. But a Docker container is a runnable instance of a Docker image. It's a lightweight and portable executable that contains the application and its dependencies, running in an isolated and consistent environment. Containers are dynamic and can be started, stopped, deleted, and managed. Immutability: Docker images are immutable and cannot be changed after creation. Any changes result in the creation of a new image. Containers on the other hand can be modified during their runtime; however, these modifications are not persisted by default. If you want to keep the changes, you can commit the container to create a new image. Lifecycle: Images serve as a blueprint for containers. They are used to create and run containers, which are the runtime instances of images. Containers are created from images and can be started, stopped, deleted, or restarted. Usage: Images are used for distribution, sharing, and deployment. Developers create and share images, and these images serve as a foundation for creating containers, which are the actual executable units that run applications and services. Containers encapsulate the runtime environment and allow applications to run consistently across different environments. Persistence: Docker images are read-only and cannot be modified. Changes in the image result in the creation of a new image layer. Containers however can write data, modify the file system, and make changes during their runtime. However, these changes are lost by default when the container is removed, unless data persistence mechanisms are used (e.g., volumes).
297
Docker Container
A Docker container is a runnable instance of a Docker image. It's a lightweight and portable executable that contains the application and its dependencies, running in an isolated and consistent environment. Containers are dynamic and can be started, stopped, deleted, and managed. This means that containers can write data, modify the file system, and make changes during their runtime. However, these changes are lost by default when the container is removed, unless data persistence mechanisms are used (e.g., volumes). Containers can be modified during their runtime; however, these modifications are not persisted by default. If you want to keep the changes, you can commit the container to create a new image. Containers are the actual executable units that run applications and services. They encapsulate the runtime environment and allow applications to run consistently across different environments.
298
Docker
Docker is an open-source platform and a set of tools designed to automate the deployment, scaling, and management of applications using containers. It allows you to package an application and its dependencies into a standardized unit called a container, ensuring that the application runs consistently across various environments. Docker is widely used in software development, testing, and production environments. It simplifies the process of setting up development environments, accelerates continuous integration and continuous deployment (CI/CD), and enhances the scalability and reliability of applications.
299
Dockerfile
A Dockerfile is a simple text-based configuration file used to define the instructions and steps needed to create a Docker image. It specifies the base image, dependencies, environment setup, and how the application should run.
300
Docker Engine
The Docker Engine is the core of the Docker platform. It's a client-server application that manages and orchestrates containers. It includes a daemon (server) that manages the container lifecycle and a command-line interface (CLI) that allows users to interact with Docker.
301
Docker Hub
Docker Hub is a cloud-based registry that hosts a vast collection of pre-built Docker images. It allows users to share, distribute, and collaborate on Docker images. Users can also publish their own images for public or private use.
302
Continuous Deployment
Continuous Deployment is a software delivery practice where every code change that passes automated testing is automatically deployed to production or a production-like environment without manual intervention. The aim of CD is to release reliable and deployable software increments to production at any point, enabling a streamlined and automated release process.
303
Continuous Integration (CI) v. Continuous Deployment (CD)
1) Scope: CI focuses on integrating code changes and running automated tests to ensure the changes don't break the application, but it does not automatically deploy the code to production. CD includes CI but goes further by automating the deployment of code changes that pass testing directly to production or production-like environments. 2) Automated Deployment: CI does not involve automated deployment to production. It's primarily concerned with integration and testing. CD automates the deployment process, ensuring that every successful integration can potentially be released to production. 3) Deployment Decision: In CI, the decision to deploy to production is manual and separate from the integration process. In CD, once the code passes tests, the decision to deploy is automated, aiming for immediate or near-immediate production release. 4) Frequency of Deployment: CI does not dictate how often deployments should happen; it's primarily about integrating code frequently. CD advocates for automating deployments as frequently as possible, often after every successful integration.
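The distinction shows up concretely in pipeline definitions. In this illustrative GitHub Actions-style workflow (the job layout and the `run_tests.sh`/`deploy.sh` scripts are hypothetical), the `deploy` job gated on `needs: test` is what turns plain CI into CD:

```yaml
# Illustrative workflow -- scripts and job names are hypothetical.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  test:                             # CI: integrate and verify every change
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run_tests.sh         # hypothetical test script
  deploy:                           # CD: ship automatically once tests pass
    needs: test                     # gated on CI success, no manual step
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh production # hypothetical deploy script
```

Removing the `deploy` job leaves a pure CI pipeline; adding it, with no manual approval in between, is what CD advocates.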
304
"Secure Defaults"
One of the main elements of the Zero-Trust security principle, where systems, applications, or devices are designed and configured with the most secure settings by default. These defaults are established to provide a strong baseline level of security from the moment a system or application is deployed, minimizing potential vulnerabilities and risks.
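Sketched in Python, secure defaults mean the zero-configuration path is the safest one (the `ServerConfig` fields are illustrative assumptions, not a real API):

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    # Secure defaults: specifying nothing yields the safest configuration.
    tls_enabled: bool = True           # encrypt traffic unless explicitly disabled
    debug_mode: bool = False           # never ship verbose internals by default
    session_timeout_minutes: int = 15  # short sessions limit stolen-token windows
    allow_anonymous: bool = False      # require authentication by default

cfg = ServerConfig()  # a forgotten setting falls back to the safe choice
print(cfg)
```

An operator must explicitly opt out of security (e.g., `ServerConfig(debug_mode=True)`) rather than opt in to it.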
305
"Fail Securely"
Principle in system design that emphasizes ensuring that when failures or errors occur within a system, they do so in a manner that minimizes potential harm, damage, or security risks. The objective is to have a system fail in a predictable, controlled, and safe manner to protect users, data, and the overall system's integrity.
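A minimal Python sketch of failing closed (the `is_authorized` helper and role table are hypothetical): any exception during the permission check results in denial, never accidental access.

```python
def is_authorized(user_id, acl_lookup):
    """Fail securely (fail closed): errors in the check deny access."""
    try:
        return acl_lookup(user_id) == "admin"
    except Exception:
        # A crashed or unreachable lookup must never grant access by accident.
        return False

roles = {"alice": "admin"}
print(is_authorized("alice", lambda u: roles[u]))    # known admin -> True
print(is_authorized("mallory", lambda u: roles[u]))  # lookup raises KeyError -> denied
```

The insecure alternative would be to let the exception propagate or, worse, default to `True` when the ACL service is unreachable.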
306
"Secured environment" in software development
Refers to a controlled and protected computing environment where security measures are implemented to safeguard data. The primary goal of a secured environment is to ensure the confidentiality, integrity, and availability of information and services while minimizing the risk of security breaches. When a failure occurs, the impact on users should be minimized: critical services should still function, and users should be provided with clear and informative messages about the issue.
307
"Graceful degradation" in software development
The system should degrade gracefully in the face of failure, allowing essential functionalities to continue operating even if non-critical components or features are unavailable.
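A minimal Python sketch (the product page and recommendation service are hypothetical): the critical content still renders when a non-critical dependency throws.

```python
def product_page(get_recommendations):
    """Render core content even when a non-critical service fails."""
    page = {"title": "Blue Widget", "price": 19.99}   # critical content
    try:
        page["recommended"] = get_recommendations()   # non-critical extra
    except Exception:
        page["recommended"] = []  # degrade gracefully: drop the feature only
    return page

def broken_service():
    raise RuntimeError("recommendation service is down")

print(product_page(broken_service))  # page renders without recommendations
```

The essential functionality (title, price) survives; only the optional feature disappears.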
308
"Defense in Depth"
Involves implementing multiple layers of redundant security measures to protect an organization's data
309
Perimeter Security
An antiquated cybersecurity strategy built around the concept of a trusted internal network protected by a strong perimeter defense (often referred to as the "castle and moat" approach). With the evolution of remote work, cloud computing, and the rise of sophisticated cyber threats, the perimeter-based security model proved inadequate. Attackers found ways to bypass perimeter defenses through various means like phishing, social engineering, and exploiting vulnerabilities.
310
Zero-Trust v. Perimeter Security
Perimeter security relies heavily on fortifying the network boundary, assuming that threats are external. Once inside the network, users and devices are often given more trust, potentially creating vulnerabilities if a threat penetrates the perimeter. Zero Trust Security: - Assumes that threats can come from both external and internal sources - Emphasizes strict access controls and verification for every user/device/application (regardless of their location) - Trust is never assumed, even for those inside the network
311
Benefits of the K.I.S.S. principle in software design
1) Fewer Interfaces/Attack Surfaces 2) Easier/Faster Code Review and Analysis 3) Reduced potential for mistakes/errors 4) Easier to maintain/update 5) Faster Incident Response/Mitigation 6) Easier Compliance with standards/regulations 7) Easier to use (easier to train on) 8) Reduced need for third party solutions
312
Principle of Least Astonishment
Suggests that a system, particularly its user interface, should behave in a way that minimizes surprise or astonishment among users when they interact with it. Indicates that systems should: - Be Consistent and Predictable - Have an Intuitive Design - Avoid Misleading Elements - Help Minimize Cognitive Load
313
K.I.S.S. principle to simplify outsourcing in software development
Find a versatile vendor that can do many of the things you are looking for instead of insisting on the best vendor for each individual need (best-in-suite over best-in-breed). The fewer third-party vendors, the better: each additional third-party vendor introduces more attack surfaces into your system.
314
Security Information Event Management (SIEM)
A comprehensive approach to security management that involves collecting, correlating, and analyzing security-related data from various sources across an organization's IT infrastructure. SIEM systems help in detecting and responding to security incidents by providing real-time insights into security events and incidents occurring within the network.
315
Spiral Process
Software development process that allows for multiple iterations of the waterfall process. Each "loop" in the process represents the development of a new prototype. Solves a major criticism of the waterfall model: it allows developers to regularly return to the planning stage so that they can adapt to changing requirements. Key word here is "iterative"
316
Capability Maturity Model (CMM)
A framework used in software engineering and development to evaluate/improve the processes and practices of an organization. The ultimate goal is enhancing the quality of software products and the efficiency of development. The CMM consists of five maturity levels, each representing a different stage in the organization's software development process improvement journey:
Initial (Level 1): Processes are ad hoc and often chaotic. There is no defined process and success depends on individual efforts.
Repeatable (Level 2): Basic project management processes are established to track cost, schedule, and functionality. Processes are defined enough to repeat past successes.
Defined (Level 3): Processes are well characterized and understood. Detailed guidelines and procedures are established to standardize the software development lifecycle.
Managed (Level 4): Quantitative metrics are used to manage the software development process. Process performance is measured and controlled.
Optimizing (Level 5): Continuous process improvement is enabled by quantitative feedback and process change management. The focus is on optimizing processes based on quantitative understanding.
317
Level 1 of CMM
INITIAL: Processes are ad hoc and often chaotic. There is no defined process and success depends on individual efforts.
318
Level 2 of CMM
REPEATABLE: Basic project management processes are established to track cost, schedule, and functionality. Processes are defined enough to repeat past successes.
319
Level 3 of CMM
DEFINED: Processes are well characterized and understood. Detailed guidelines and procedures are established to standardize the software development lifecycle.
320
Level 4 of CMM
MANAGED: Quantitative metrics are used to manage the software development process. Process performance is measured and controlled.
321
Level 5 of CMM
OPTIMIZING: Continuous process improvement is enabled by quantitative feedback and process change management. The focus is on optimizing processes based on quantitative understanding.
322
IDEAL Model of Software Development
A framework used in software development for continuous process improvement. It stands for Initiating, Diagnosing, Establishing, Acting, and Learning. The IDEAL model guides organizations through a cycle of steps to drive improvement in their software development processes.
323
The "Initiate" step of the IDEAL model
Objective: Identify and initiate the need for process improvement. Activities: 1) Recognize Improvement Opportunities: Identify areas where processes can be enhanced or streamlined. 2) Create a Vision for Improvement: Define a clear vision of what the improved processes should look like. 3) Obtain Management Support: Secure commitment and support from management for the improvement initiative. Outcome: A clear understanding of the need for improvement and the initiation of the improvement process.
324
The "Diagnosing" step of the IDEAL model
Objective: Assess the current state of the processes and performance. Activities: 1) Analyze Current Processes: Evaluate existing processes to identify strengths, weaknesses, and areas for improvement. 2) Collect Data and Feedback: Gather data, feedback, and insights from relevant stakeholders. 3) Benchmarking: Compare the organization's processes with industry standards or best practices. Outcome: A thorough diagnosis of the current state of processes, identifying areas for improvement.
325
The "Establish" step of the IDEAL model
Objective: Establish specific goals and targets for process improvement. Activities: 1) Set Improvement Goals: Define achievable and measurable improvement goals that align with the organization's objectives. 2) Plan Improvement Activities: Develop detailed plans for implementing changes and achieving the defined goals. 3) Define Metrics and Measurement Plans: Establish metrics to measure progress and success in achieving the improvement goals. Outcome: Clear, defined improvement goals and a structured plan to achieve them.
326
The "Act" step of the IDEAL model
Objective: Implement the planned improvements and measure their effectiveness. Activities: 1) Implement Changes: Execute the planned process improvements and changes. 2) Collect Data and Measure Performance: Gather data during and after the changes to measure process performance and improvements. 3) Analyze Results: Evaluate the outcomes of the improvements and assess their impact on processes and product quality. Outcome: Implemented improvements and measured impact, providing valuable insights
327
The "Learn" step of the IDEAL model
Objective: Learn from the improvements and experiences to refine the process and plan for the next cycle. Activities: 1) Review and Evaluate: Review the results of the improvement efforts and evaluate the overall process changes. 2) Document Lessons Learned: Document lessons learned and best practices for future reference and improvement. 3) Update Processes and Plans: Use the knowledge gained to update processes and plans for the next improvement cycle. Outcome: Insights and knowledge to further refine processes and plan for the next round of improvements.
328
Request Control
Objective: Managing incoming requests for changes or enhancements to the web application. Scenario: 1) Request for Feature Addition: A stakeholder submits a request for adding a new feature to the web application, such as a real-time chat feature for improved user engagement. 2) Request Evaluation: The development team evaluates the request in terms of feasibility, impact on existing functionality, resource availability, and alignment with business objectives. 3) Decision: Based on the evaluation, the team decides whether to accept, decline, or defer the request. Outcome: Effective management and assessment of requests, ensuring alignment with project goals.
329
Change Control
Objective: Managing changes to the software and ensuring they are controlled, tested, and documented. Scenario: 1) Change Request Approval: After evaluating the request, if approved, it is considered for implementation. 2) Change Implementation: The development team implements the approved change in a controlled manner, following established procedures and guidelines. 3) Testing and Verification: The change undergoes thorough testing to ensure it doesn't introduce bugs or adversely affect existing functionality. 4) Documentation: Detailed documentation of the change, including code modifications, is recorded for future reference and audits. Outcome: Controlled and structured application of changes, maintaining system stability and reliability.
330
Release Control
Objective: Managing the release of new versions or updates of the software to production environments. Scenario: 1) Release Planning: The team plans the release, considering factors like feature completeness, bug fixes, and stakeholder expectations. 2) Versioning and Tagging: The release is assigned a version number, and codebase is tagged to ensure a specific set of changes is bundled together for deployment. 3) Deployment and Monitoring: The new version is deployed to the production environment, and the team closely monitors its performance and user feedback. 4) Rollback Plan: A rollback plan is prepared in case any critical issues arise post-release, ensuring a swift return to the previous stable version if needed. Outcome: Controlled and successful deployment of software updates with minimal disruptions to users.
331
Four main propagation techniques of viruses
1) File Infection 2) Service Injection 3) Boot Sector Infection 4) Macro Infection
332
Best way for a software application to defend itself from malware
Build virus-detecting algorithms into the software. These algorithms work by detecting the ways in which viruses spread through the system: 1) File Infection 2) Service Injection 3) Boot Sector Infection 4) Macro Infection
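As a toy illustration of signature-based detection (real anti-malware engines combine signatures with heuristics and behavioral analysis; the "known-bad" hash here is fabricated for the example):

```python
import hashlib

# Toy signature database: hashes of content known to be malicious.
# The "evil-payload" signature below is fabricated for illustration.
KNOWN_BAD_HASHES = {hashlib.sha256(b"evil-payload").hexdigest()}

def is_infected(content: bytes) -> bool:
    """Flag content whose SHA-256 matches a known-bad signature."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES

print(is_infected(b"evil-payload"))  # matches a signature -> True
print(is_infected(b"hello world"))   # clean content -> False
```

Static hash matching only catches exact known samples; detecting the four propagation techniques above requires watching file writes, process injection, boot sectors, and document macros at runtime.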
333
Virus propagation via File Infection
In this method, the virus attaches itself to executable files or program files. When a user runs an infected program, the virus activates and replicates by attaching itself to other executable files on the system. The infected files then spread the virus to other devices when shared or transferred.
334
Virus propagation via Service Injection
Service injection involves injecting malicious code into system processes or services running in the background. The virus modifies system executables or libraries, allowing it to execute whenever the affected service is initiated. This method is harder to detect and remove since the virus is integrated into critical system components.
335
Virus Propagation via Boot Sector Infection
Viruses that use boot sector infection target the boot sector of storage devices like hard drives, SSDs, or USB drives. When an infected device is booted, the virus loads into memory and runs before the operating system starts. The virus can spread to other devices when bootable media (like a USB drive) is connected to an infected system.
336
Virus Propagation via Macro Infection
Macro viruses utilize macros, which are sequences of instructions in a document or file format like Microsoft Word or Excel. When a user opens an infected document containing macros, the virus executes and can infect the system. These viruses can replicate by attaching to other documents or spreadsheets and spreading when shared or sent via email.
337
Importance of threat modeling in software development
This is the first step in building malware protection into any software project. Five major frameworks: 1) PASTA 2) STRIDE 3) VAST 4) DREAD 5) TRIKE
338
PASTA Threat modeling framework
One of the strategies for planning built-in malware protection Stage 1) Definition of Objectives Stage 2) Definition of Technical Scope Stage 3) App Decomposition & Analysis Stage 4) Threat Analysis Stage 5) Weakness and Vulnerability Analysis Stage 6) Attack Modeling & Simulation Stage 7) Risk Analysis and Management
339
STRIDE Threat modeling framework
Helps you remember the categories of attacks you need to be aware of: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege
340
In many companies, **“software engineer”** and **“software developer”** are basically the same job title. True or False?
TRUE ## Footnote The distinction often lies in scope and systems thinking rather than in the nature of the work.
341
What are some job titles that are often used interchangeably with **“Software Engineer”**?
* Software Developer * Backend Developer * Full-stack Engineer ## Footnote HR and recruiters focus more on experience, tech stack, and seniority than on title distinctions.
342
When distinguishing between **“Software Developer”** and **“Software Engineer”**, what is the core focus of a Software Developer?
Writing features, fixing bugs ## Footnote This is a narrower focus compared to the broader responsibilities of a Software Engineer.
343
When distinguishing between **“Software Developer”** and **“Software Engineer”**, what is the core focus of a Software Engineer?
Designing & building systems end-to-end ## Footnote This includes a broader scope than just writing code.
344
Fill in the blank: A **Software Developer** focuses on individual components or apps, while a **Software Engineer** focuses on the whole system: ______.
APIs, data flows, scalability, failure modes ## Footnote This highlights the broader responsibilities of a Software Engineer.
345
What are some aspects that a **Software Engineer** explicitly focuses on?
* Tradeoffs * Reliability * Testing * Architecture ## Footnote These aspects are often not explicitly referenced by Software Developers.
346
What are the typical titles in the **career ladder** for software professionals?
* Software Engineer I/II/III * Senior Software Engineer * Staff/Principal Engineer ## Footnote These titles represent the standard individual contributor ladder in most tech companies.
347
In smaller shops or agencies, which title is more commonly used?
Developer ## Footnote This contrasts with larger tech companies where 'Engineer' is more prevalent.
348
What should you focus on when **reading job posts** instead of the titles?
* Ownership: bugs only or also architecture? * Non-functional requirements: scalability, reliability, latency, security? * Process: design docs, code reviews, SRE/DevOps interactions? ## Footnote This approach helps in understanding the actual responsibilities of the role.
349
When describing yourself, what should you consider?
* Industry/region you’re targeting * Roles you want ## Footnote This ensures that your title aligns with your career goals.
350
If your day consists mainly of implementing features and merging code, you are functioning as a ______.
developer ## Footnote This indicates a narrower focus on coding tasks.
351
If your day includes defining interfaces, capacity planning, and writing design docs, you are functioning as a ______.
software engineer ## Footnote This reflects a broader scope of responsibilities.
352
What does **SDLC** stand for?
Software Development Life Cycle ## Footnote It’s the structured way a team takes software from *idea* to *working system in production*.
353
What is the core idea of **SDLC**?
To reliably turn vague business needs into working, maintainable software with minimal chaos ## Footnote This is achieved by breaking work into stages.
354
List the **classic SDLC phases**.
* Planning / Requirements * Analysis * Design (Architecture & Detailed Design) * Implementation (Coding) * Testing / Verification * Deployment / Release * Operations & Maintenance ## Footnote These phases help structure the software development process.
355
What is the output of the **Planning / Requirements** phase?
High-level goals, requirements, initial scope ## Footnote This phase addresses the problem being solved, user identification, and success criteria.
356
In the **Analysis** phase, what is refined?
Requirements into detailed specifications ## Footnote This includes use cases, user flows, and identifying risks.
357
What does the **Design** phase focus on?
* High-level architecture * Low-level design * Scalability * Security * Fault tolerance * Observability ## Footnote Outputs include design docs, diagrams, and tickets.
358
What is the main task during the **Implementation (Coding)** phase?
Translate the design into actual code ## Footnote This includes following coding standards and conducting unit tests.
359
What types of tests are included in the **Testing / Verification** phase?
* Unit tests * Integration tests * End-to-end / UI tests * Performance & load testing * Security testing ## Footnote The goal is to ensure the software meets requirements and does not break existing functionality.
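The fastest of these layers, unit tests, can be as simple as plain assertions against one function. A minimal sketch (the `apply_discount` helper is hypothetical):

```python
def apply_discount(price: float, percent: float) -> float:
    """Illustrative pricing helper -- the function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: exercise one function in isolation, including edge cases.
def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")

test_typical_discount()
test_zero_discount()
test_invalid_percent_is_rejected()
print("all unit tests passed")
```

Integration and end-to-end tests follow the same assertion pattern but wire in real dependencies (databases, HTTP endpoints) instead of a single pure function.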
360
What happens during the **Deployment / Release** phase?
Move software to production or to users ## Footnote This may involve CI/CD pipelines, canary releases, and setting up monitoring.
361
What is the focus of the **Operations & Maintenance** phase?
* Fix bugs * Patch security issues * Improve performance * Handle incidents and outages * Refactor to manage tech debt ## Footnote User feedback is also taken into account for the next cycle.
362
True or false: In **Waterfall**, phases are mostly sequential.
TRUE ## Footnote This approach is harder to change once mid-stream and is good when requirements are stable.
363
In **Agile**, how are the SDLC phases organized?
Compressed into iterations (sprints) ## Footnote Each sprint includes planning, design, building, testing, and shipping a small increment.
364
What does understanding **SDLC** help you do?
* Write better design docs * Identify process gaps * Communicate effectively with PMs and leadership ## Footnote It helps in recognizing where the current team’s process may be broken.
365
What is the **10-second TL;DR** of SDLC?
Plan → Analyze → Design → Implement → Test → Deploy → Operate & Maintain → repeat ## Footnote This summarizes the lifecycle of software development.
366
What does SDLC stand for?
Software Development Life Cycle ## Footnote It represents the process of planning, creating, testing, and deploying software.
367
During which decades did the **ad-hoc era** and the **software crisis** occur?
1950s–1960s ## Footnote Software was developed in a chaotic manner, leading to significant issues in project management.
368
What model was introduced in the late 1960s that drew software development into **sequential stages**?
Waterfall model ## Footnote This model was influenced by systems and hardware engineering practices.
369
Name two models that emerged in the **1980s–1990s** as alternatives to the Waterfall model.
* Spiral model * Incremental / iterative development ## Footnote These models addressed the rigidity of the Waterfall approach.
370
What significant document was published in 2001 that reframed SDLC?
Agile Manifesto ## Footnote Agile methods compressed and repeated the SDLC process.
371
What does CI/CD stand for in the context of modern SDLC?
Continuous Integration / Continuous Deployment ## Footnote These practices allow for frequent releases and tight feedback loops.
372
List three benefits of having a deliberate **SDLC**.
* Predictability & planning * Better alignment with business/user needs * Higher quality & fewer production disasters ## Footnote These benefits help streamline the software development process.
373
What is one key benefit of having explicit **testing phases** in SDLC?
Bugs are caught earlier and cheaper ## Footnote This leads to more stable releases and fewer production issues.
374
True or false: A real SDLC helps in identifying **technical risks** and **project risks**.
TRUE ## Footnote It allows for early prototyping and risk mitigation strategies.
375
What does SDLC clarify regarding team roles?
Who does what when ## Footnote It reduces chaos by defining responsibilities in the software development process.
376
What is a benefit of having an explicit SDLC for **onboarding** new team members?
New people can learn how a feature goes from idea to production ## Footnote This creates a repeatable pattern for development processes.
377
In which domains is compliance and audit particularly important?
* Aerospace * Automotive * Medical devices * Defense * Finance ## Footnote These domains often require a clear paper trail for regulatory purposes.
378
Fill in the blank: SDLC is just the software analogue of a __________.
systems engineering lifecycle ## Footnote This analogy highlights the structured approach to software development.
379
What is one question to ask regarding the **lifecycle** of a software project?
Where are we in the lifecycle? ## Footnote This helps clarify the current phase of development.
380
What is a lightweight version of a requirements phase that still provides benefits?
One-page requirements doc ## Footnote This approach avoids excessive documentation while still capturing essential information.
381
What is the main concept of an **iterative SDLC**?
Running **mini–software projects in loops** ## Footnote Each loop passes through the same core phases on a small slice of scope.
382
What is the **goal** of the **iteration framing** phase?
* Define the time box * Define the cadence ## Footnote Example: 2-week iterations with code freeze on Day 9, release on Day 10.
383
What are the **key decisions** made during **iteration framing**?
* Length of iteration * Definition of Done (DoD) * Environments: dev → test → staging → prod ## Footnote These decisions help establish a stable planning and release schedule.
384
What are the **inputs** for the **Iteration Planning & Requirements** phase?
* Product roadmap / high-level requirements * Backlog of user stories / features / bugs * Capacity of the team for this iteration ## Footnote These inputs guide the selection of stories and tasks for the iteration.
385
What is produced as an **output** of the **Iteration Planning & Requirements** phase?
* Iteration goal * Selected stories/tasks with acceptance criteria * Rough estimates ## Footnote The iteration goal is a concise statement of success for the iteration.
386
What is the **focus** of the **Analysis & Iteration-Level Design** phase?
Determining the **simplest technical approach** to achieve iteration goals ## Footnote This includes clarifying edge cases and identifying impacted components.
387
What are the **activities** involved in the **Implementation (Build)** phase?
* Write code, configs, migrations, and tests * Refactor as needed * Pair programming / code review ## Footnote This phase focuses on turning the plan into working, reviewable code.
388
What is the purpose of the **Testing & Integration** phase?
To verify if the iteration’s increment **works as a system** ## Footnote This includes running automated tests and checking for regressions.
389
What are the **activities** during the **Deployment / Release** phase?
* Promote build from staging to production * Use safe release strategies * Monitor logs/metrics after release ## Footnote Ensures that the new version is safely delivered to users.
390
What is the main question addressed in the **Feedback, Review & Retrospective** phase?
* What did we learn from this increment and from how we worked? ## Footnote This phase emphasizes continuous improvement and learning.
391
What is the **loop back** process in the iterative SDLC?
Returning to **Iteration Planning** with new information and insights ## Footnote This ensures continuous improvement and adaptation of the process.
392
Fill in the blank: The iterative SDLC is: **Plan → Analyze/Design → Build → Test → Deploy → Get feedback → Improve → _______**.
repeat ## Footnote This cycle is performed on small slices continuously.
393
What is the **objective** of Phase 1 in an **iterative SDLC**?
* Choose the right work for this iteration * Clarify requirements * Set a realistic target for the team’s capacity ## Footnote Leaving Phase 1 fuzzy can lead to confusion in later phases.
394
Who typically participates in Phase 1 of the iterative SDLC?
* Product Owner / PM * Tech Lead / Architect * Developers * QA / Test / SRE ## Footnote At least one representative from each perspective is needed to avoid later surprises.
395
What are the **inputs** into Phase 1?
* Product vision / roadmap * Backlog (user stories, bugs, spikes, tech-debt items) * Constraints for this iteration * Feedback & metrics from last iteration ## Footnote These inputs help determine what the system needs most in the next iteration.
396
What is a **good** iteration goal statement?
Users can reset their passwords via email without admin intervention ## Footnote A good goal is outcome-oriented and small enough to feel successful if accomplished.
397
What types of work items are selected in Phase 1?
* Feature stories for the iteration goal * Supporting tasks (DB migrations, refactors) * Critical bugs / risk items ## Footnote Rough prioritization is done to determine must-do and nice-to-have items.
398
What does refining requirements into implementable stories involve?
* Identifying the user * Defining what they want to do * Explaining why it matters * Establishing acceptance criteria ## Footnote Clear acceptance criteria are essential for readiness for implementation.
399
What is the purpose of **slicing work** in Phase 1?
* Deliver user-visible or system-visible value * Complete within one iteration ## Footnote Slicing helps manage the scope and ensures that work can be demoed.
400
What should teams do to **estimate and balance** against capacity?
* Assign story points or size labels * Compare planned work to past iteration throughput * Cut scope if total exceeds capacity ## Footnote This discipline helps avoid overloading the team.
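The capacity check above can be expressed in a few lines of code. A simplified illustration (story names and point values are made up), not a real planning tool:

```python
def plan_iteration(stories, capacity_points):
    """Greedy capacity check: commit stories in priority order until the
    team's point budget is spent. Lower-priority small items may still fit
    after a large story is skipped."""
    committed, total = [], 0
    for name, points in stories:  # assumed pre-sorted by priority
        if total + points <= capacity_points:
            committed.append(name)
            total += points
    return committed, total

backlog = [("password reset", 5), ("audit log", 3),
           ("dark mode", 8), ("bug #42", 2)]
print(plan_iteration(backlog, capacity_points=10))
```

Here "dark mode" (8 points) is cut because it would exceed the 10-point budget, enforcing the "cut scope if total exceeds capacity" discipline automatically.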
401
What are some **iteration-level risks and dependencies** to identify?
* External dependencies * Story dependencies * High-risk technical unknowns ## Footnote Scheduling spikes can help de-risk future iterations.
402
What are the **outputs/artifacts** from Phase 1?
* Iteration goal * Committed story list with acceptance criteria * Estimates & capacity check * Known risks & mitigation plan * Definition of Done reminder ## Footnote These artifacts provide clarity for the team and any new members.
403
What does a **good Phase 1** look like?
* Clear iteration focus * Crisp acceptance criteria * Slack for unplanned issues * Explicitly surfaced risks ## Footnote A good Phase 1 avoids vague plans and overcommitment.
404
What are the **5 questions** to check at the end of Phase 1?
* Can we state the iteration goal in one sentence? * Can each story be tested by someone not in the planning meeting? * Is the total load consistent with previous iterations? * Have we acknowledged risky stories and converted some to spikes? * Would we be proud to demo the result if only committed stories were completed? ## Footnote If any answer is “no”, Phase 1 is not complete.
405
What is the **purpose of Phase 2** in the development process?
* Eliminate ambiguity in feature behavior * Choose a technical approach that fits existing architecture * Reduce risk before coding ## Footnote Phase 2 focuses on converting stories and acceptance criteria into concrete technical decisions.
406
What are the **inputs to Phase 2** from Phase 1?
* Iteration goal * Selected stories with user-centric description and acceptance criteria * Known constraints (performance, security, legacy systems) * Existing architecture (services, data models, APIs) ## Footnote These inputs help in deciding how the iteration's changes fit into the existing system.
407
What is involved in **deepening the functional understanding** during Phase 2?
* Walk through scenario flows * Main flow * Alternate flows * Error conditions ## Footnote This process helps clarify how features should behave in different situations.
408
What are the key activities in Phase 2?
* Deepen functional understanding * Identify impacted components and boundaries * Choose technical approaches and patterns * Define iteration-level non-functional requirements * Plan tests and validation strategy * Create or update simple design artifacts * Handle spikes and risks ## Footnote Each activity contributes to a clearer understanding of the iteration's requirements and design.
409
What should be included in the **technical notes** for each story by the end of Phase 2?
* Modules that will change * Data model impact * New APIs/endpoints/events ## Footnote These notes ensure clarity on what needs to be modified or created.
410
What are some examples of **non-functional requirements** for an iteration?
* Performance (e.g., p95 < 250 ms) * Reliability (e.g., helpful messages on failure) * Security (e.g., auth token required) * Observability (e.g., metrics for reset requests) ## Footnote Non-functional requirements make vague expectations concrete for the iteration.
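A performance target like "p95 &lt; 250 ms" can be checked directly from latency samples. The sketch below uses the nearest-rank percentile method; the sample data is invented.

```python
# Verify a p95 latency target from raw samples (nearest-rank method).
def percentile(samples, pct):
    """Smallest value with at least pct% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(1, -(-(len(ordered) * pct) // 100))  # integer ceiling
    return ordered[rank - 1]

# 19 fast samples plus one 400 ms outlier.
latencies_ms = [100 + 5 * i for i in range(19)] + [400]
p95 = percentile(latencies_ms, 95)
print(p95)        # → 190
print(p95 < 250)  # → True: the target holds despite the outlier
```

This is why p95 is a useful iteration-level target: one slow request does not fail the check, but a systemic slowdown does.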
411
What is a **spike task** in Phase 2?
* A narrow question to explore high-uncertainty areas * Time-boxed (e.g., 1–2 days) * Expected output: evidence and recommendation ## Footnote Spikes help inform design decisions before coding begins.
412
What does a **good Phase 2** look like?
* Clear, agreed technical approaches * No surprises mid-coding * Planned tests * Short design notes ## Footnote A well-executed Phase 2 minimizes confusion and technical debt.
413
What does a **bad Phase 2** look like?
* Jumping straight to code from vague stories * Mid-implementation disagreements * Ad-hoc DB schema changes * No consideration for performance/observability ## Footnote A poorly executed Phase 2 can lead to significant issues during implementation.
414
Fill in the blank: The main goal of Phase 2 is to **________** in how features should behave.
eliminate ambiguity ## Footnote This goal is crucial for ensuring that the development team has a clear understanding of the requirements.
415
What should be captured in **design artifacts** during Phase 2?
* Design note or ADR * Diagrams (e.g., sequence, data model) ## Footnote Artifacts should be lightweight but explicit to aid understanding for other engineers.
416
What is the main goal of **Phase 3** in the implementation process?
* Turn stories + designs + acceptance criteria into working software * Keep the codebase healthy * Ensure everything is ready to integrate and test ## Footnote Implementation is controlled change to a long-lived system, not just typing code until it works.
417
What are the **inputs** to Phase 3?
* Iteration goal * Stories with user-facing and non-functional criteria * Technical design from Phase 2 * Test plan sketch ## Footnote If Phase 1 & 2 were done well, coding is mostly execution of a decision, not guessing.
418
What is the purpose of breaking work into **small coding tasks** in Phase 3?
* Allows parallelization * Provides clear, small completion targets ## Footnote Example tasks for “password reset” include adding tables, implementing logic, and adding REST endpoints.
419
In Phase 3, why is it important to follow **coding standards and architecture**?
* Match existing style and conventions * Respect architecture boundaries ## Footnote This protects the architecture from quick hacks that could spread.
420
What should be included when writing **code + unit tests together**?
* Production code * Unit tests that validate core logic and cover key edge cases ## Footnote Don’t leave test creation for later; ensure tests are created alongside implementation.
421
What are the common patterns for **incremental commits** in Phase 3?
* Feature branches with small, frequent commits * Trunk-based with feature flags ## Footnote Key principle: keep changes reviewable and revertable.
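The trunk-based pattern relies on feature flags to merge incomplete work safely. A minimal sketch, with invented names rather than a real flag library:

```python
# Minimal feature-flag sketch: code for the new path is merged to trunk
# but stays dark until the flag is flipped.
FLAGS = {"password_reset_v2": False}

def flag_enabled(name, flags=FLAGS):
    return flags.get(name, False)

def reset_password(email):
    if flag_enabled("password_reset_v2"):
        return f"v2 token flow for {email}"   # new path, dark in production
    return f"legacy email link for {email}"   # current behavior

print(reset_password("a@example.com"))  # legacy path while the flag is off
```

Flipping the flag in config (not in code) is what makes the change revertable without a redeploy.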
422
What should every significant change go through in Phase 3?
Code review ## Footnote Reviewers check for correctness, style, clarity, tests, and architecture compliance.
423
What is the role of **refactoring** during implementation?
* Extract helpers * Rename for clarity * Split big functions/classes ## Footnote Refactoring is about cleaning the area around what you’re touching to make future work easier.
424
What non-functional requirements should be kept in mind during implementation?
* Performance * Security * Observability ## Footnote These are integral to implementation, not an afterthought.
425
What does it mean for a story to be **'done'** in Phase 3?
* Code is merged or ready to merge * Tests exist and pass * Code review is complete * Integration readiness is achieved ## Footnote If it runs on a local machine but isn’t reviewed, tested, or mergeable, it’s not complete.
426
What is a common failure mode in Phase 3 related to **vague stories**?
* Devs disagree on behavior mid-implementation * PRs are huge and inconsistent ## Footnote This often occurs when Phase 1 & 2 were skipped or weak.
427
What symptoms indicate **no tests or minimal tests** during implementation?
* PRs have lots of code, no tests * Bugs appear immediately after integration ## Footnote This results from treating testing as optional.
428
What are symptoms of **massive, unreviewable PRs**?
* PRs with thousands of lines * Reviewers skim and approve or stall the review ## Footnote This happens when work is not broken into small steps.
429
What is a symptom of **architecture drift** during implementation?
* Quick hacks that violate boundaries * System grows harder to reason about ## Footnote This occurs when implementation ignores the design and long-term system view.
430
What checklist items should be considered for each story in Phase 3?
* Is there a branch/PR that maps clearly to this story? * Does the new code follow our architecture & conventions? * Do we have meaningful unit tests? * Is the change small enough to review in one sitting? * Is there any refactor I should do now? * Are non-functional requirements addressed? * Can this be integrated and deployed without manual heroics? ## Footnote Answering “yes” to these indicates Phase 3 is in good shape.
431
What is the **purpose of Phase 4** in the SDLC?
* Verify functional and non-functional requirements * Catch defects and regressions * Validate integration ## Footnote Phase 4 ensures that the system works correctly and does not break other components.
432
What are the **inputs** to Phase 4?
* Merged / ready-to-merge code * Acceptance criteria * Technical constraints * Test plan sketch * Test/staging environment ## Footnote These inputs are essential for effective testing and integration.
433
What are the **main activities** in Phase 4?
* Run and extend automated tests * Manual / exploratory testing * Environment and integration checks * Validate non-functional requirements * Bug triage and fixes * Test reporting and sign-off ## Footnote These activities ensure comprehensive testing and integration of the system.
434
What types of **automated tests** are run in Phase 4?
* Unit tests * Integration tests * End-to-end (E2E) / UI tests * Regression test suite ## Footnote Each type of test serves a specific purpose in validating different aspects of the system.
435
True or false: **Exploratory testing** is unnecessary if automated tests are in place.
FALSE ## Footnote Exploratory testing helps find issues not covered by automated tests and validates usability.
436
What checks are performed during **environment and integration checks**?
* Config correctness * Database migrations * External dependencies ## Footnote These checks ensure that the environment is set up correctly and that the system integrates well with external components.
437
What **non-functional requirements** are validated in Phase 4?
* Performance * Reliability / fault tolerance * Security basics * Observability ## Footnote Validating these requirements ensures the system meets quality standards beyond just functionality.
438
What is the **bug triage and fixes** process in Phase 4?
* Log bugs clearly * Prioritize bugs * Fix and retest loop ## Footnote This process ensures that all critical issues are addressed before release.
439
What should be included in **test reporting and sign-off**?
* What was tested * What passed/failed * Known issues accepted for now ## Footnote Clear reporting helps determine if the build is ready for deployment.
440
What are the outputs of Phase 4?
* A tested build * Updated test suites * Bug tickets * Evidence / reports * Go/no-go decision for release ## Footnote These outputs indicate the readiness of the system for deployment.
441
What does a **good Phase 4** look like?
* Tests run automatically * Stable integration environment * Clear, reproducible failures * Non-functional requirements measured * New features come with tests ## Footnote A good Phase 4 ensures quality and readiness for release.
442
What does a **bad Phase 4** look like?
* Manual testing last-minute * Unstable test environment * Common regression bugs * No time for performance/security checks * Decisions based on vibes ## Footnote A bad Phase 4 can lead to poor quality and unexpected issues post-release.
443
Fill in the blank: In Phase 4, you should check if all stories pass their acceptance criteria in a __________.
realistic environment ## Footnote This ensures that the system behaves as expected in conditions similar to production.
444
What is the **quick Phase 4 checklist**?
* Run unit, integration, and E2E tests * Check acceptance criteria * Validate performance, security, observability * Fix critical bugs * Record tested items and known issues * Assess readiness for production ## Footnote This checklist helps ensure thorough testing and integration before deployment.
445
What is the **main goal** of Phase 5?
* Move the approved build into production * Do it in a repeatable, low-risk, and observable way * Be ready to roll back or mitigate if something goes wrong ## Footnote This phase involves deploying the tested increment safely to real users.
446
What do you enter Phase 5 with?
* A tested build from Phase 4 * Release decision / approval * Deployment artifacts * Runbooks / procedures * Environment details ## Footnote These inputs ensure a structured and safe deployment process.
447
What is the purpose of **building and packaging the release artifact** in Phase 5?
* Select the build by commit SHA, tag, or Docker image * Ensure the artifact is immutable * Build is traceable back to code version, tests, and stories included ## Footnote This step ensures clarity about what version is being shipped.
448
What should be confirmed before changing anything in the **environment**?
* Environment parity * Dependencies availability * Infrastructure changes staged * Operational prerequisites validated ## Footnote This preparation is crucial for a smooth deployment.
449
What are typical patterns for **database & schema migrations**?
* Backward-compatible first * Apply migrations in a controlled order * Use background jobs for large data changes ## Footnote These patterns help evolve the data layer without breaking running code.
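The "backward-compatible first" pattern (often called expand/contract) can be sketched with SQLite. The schema and backfill below are illustrative:

```python
import sqlite3

# Sketch of the "expand" half of an expand/contract migration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# Step 1: backward-compatible schema change — old code that ignores the
# new nullable column keeps running unmodified.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# Step 2: backfill in a controlled batch (a background job in real systems).
conn.execute("UPDATE users SET email = name || '@example.com' WHERE email IS NULL")

rows = conn.execute("SELECT name, email FROM users ORDER BY id").fetchall()
print(rows)
# The "contract" step (dropping old columns) ships only after every
# running version reads the new column.
```

The key property: at every point in the sequence, both the old and new code versions can run against the same schema.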
450
What are common **rollout strategies** in Phase 5?
* Big-bang deployment * Blue-green deployment * Canary release * Feature flags / toggles ## Footnote These strategies help manage the risk associated with new releases.
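The canary strategy is often implemented by hashing a stable user ID into buckets so a fixed percentage of traffic sees the new version. A sketch with assumed names, not a specific routing tool:

```python
import hashlib

# Canary routing sketch: hash user IDs into 100 buckets and send a small,
# stable slice of them to the new version.
def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def serve_version(user_id: str, canary_pct: int = 5) -> str:
    return "v2-canary" if bucket(user_id) < canary_pct else "v1-stable"

# The same user always lands in the same bucket, so their experience is
# consistent while canary error rates are compared against the stable fleet.
print(serve_version("user-42"))
```

Raising `canary_pct` gradually (5 → 25 → 100) is the rollout; setting it to 0 is the rollback.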
451
What is the purpose of **post-deploy smoke testing**?
* Check app starts and responds to basic requests * Ensure key flows work * Monitor error rates ## Footnote Smoke tests help confirm the deployment was successful before finalizing the release.
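A smoke test boils down to probing a few key endpoints and failing fast. In this sketch the HTTP call is injected so the logic runs standalone; the endpoint names are invented.

```python
# Post-deploy smoke check sketch: probe key endpoints, report any failures.
# `fetch` is injected so the logic is testable; in production it would wrap
# a real HTTP client.
def smoke_test(fetch, endpoints=("/health", "/login", "/api/v1/ping")):
    failures = [ep for ep in endpoints if fetch(ep) != 200]
    return (len(failures) == 0, failures)

# Simulated responses standing in for the freshly deployed service.
fake_responses = {"/health": 200, "/login": 200, "/api/v1/ping": 500}
ok, failed = smoke_test(fake_responses.get)
print(ok, failed)  # → False ['/api/v1/ping']
```

A failed smoke test should trigger the rollback rule from the later cards, not a debugging session on production.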
452
What should be monitored after the initial smoke test?
* Technical metrics (error rates, latency, resource use) * Business metrics (user actions, failed payments) ## Footnote Monitoring ensures the system performs well under real traffic.
453
What should a **rollback rule** include?
* Clear criteria for rolling back * Known, tested process for rollback ## Footnote This ensures a quick response to issues that arise post-deployment.
454
What are the outputs of Phase 5 when it is 'done'?
* New version running in the target environment * Deployment record * Updated documentation * Known issues list * Rollback readiness ## Footnote These outputs provide a comprehensive overview of the deployment status.
455
What does a **good Phase 5** look like?
* Deployments are boring and repeatable * Everyone knows the running version and rollback process * Minimal or zero downtime ## Footnote A good Phase 5 indicates a mature deployment process.
456
What does a **bad Phase 5** look like?
* Manual commands on production servers * No clear rollback path * No monitoring during rollout ## Footnote A bad Phase 5 can lead to deployment failures and team burnout.
457
Before and during release, what should you confirm?
* Exact build/version being deployed * Infra changes and DB migrations planned * Rollout strategy in place * Deployment process is automated * Post-deploy smoke tests are running * Dashboards and alerts are set up * Clear rollback path exists ## Footnote This checklist helps ensure a robust Phase 5.
458
What is the purpose of **Phase 6** in the software development lifecycle?
* Keep the system running reliably * Handle incidents and problems * Maintain and evolve the system * Feed real-world feedback back into Phase 1 ## Footnote Phase 6 focuses on the in-service life of the software after release.
459
What are the **inputs** to Phase 6?
* A live system in production * Monitoring and logging set up * Runbooks & playbooks for common issues * Support channels (user tickets, internal requests) * SLAs / SLOs / SLIs ## Footnote These inputs are essential for managing the system effectively.
460
What are the core activities in **Phase 6**?
* Monitoring & Observability * Incident Management * Problem Management & Root Cause Analysis * Routine Maintenance & Bug Fixes * Performance Tuning & Capacity Planning * Security Operations * Support & User Feedback * Cost Management ## Footnote These activities ensure the system remains healthy and responsive.
461
What does **Monitoring & Observability** involve in Phase 6?
* Metrics (availability, latency, error rates, resource usage) * Logs (structured logs, audit logs) * Traces (distributed tracing) * Alerts for critical issues ## Footnote The goal is to notice problems before users do.
462
What are the steps in **Incident Management** when something goes wrong?
* Detect * Triage * Respond * Communicate * Resolve * Post-incident review ## Footnote This process helps restore service and improve future responses.
463
What is the purpose of **Root Cause Analysis (RCA)** in Phase 6?
* Grouping related incidents into problems * Identifying design flaws, missing tests, monitoring gaps, process issues ## Footnote RCA aims for systemic fixes rather than temporary solutions.
464
What activities are included in **Routine Maintenance & Bug Fixes**?
* Fixing non-critical bugs * Updating dependencies * Cleaning up feature flags * Small UX tweaks * Refactoring hotspots ## Footnote This work is essential for keeping the system healthy.
465
What does **Performance Tuning & Capacity Planning** involve?
* Analyzing performance metrics * Profiling and optimizing * Capacity planning for load growth ## Footnote These activities help avoid resource limits as traffic increases.
466
What are the key components of **Security Operations** in Phase 6?
* Patch management * Vulnerability scanning * Access reviews * Security monitoring ## Footnote Security is an ongoing process, not a one-time task.
467
What types of feedback are handled in **Support & User Feedback**?
* Support tickets (bugs, usability issues, feature requests) * Feedback from customer success/account managers * Analytics (drop-offs, usage patterns) ## Footnote This feedback informs future iterations and changes to the roadmap.
468
What are the outputs of **Phase 6**?
* Operational metrics & incident history * Improvement backlog * Updated runbooks & documentation * Refined requirements ## Footnote These outputs feed into Phase 1 of the next iteration.
469
What does a **good Phase 6** look like?
* System health is visible * Low MTTR for incidents * Active management of tech debt * Feedback loops are established ## Footnote Good Phase 6 practices lead to continuous improvement.
470
What does a **bad Phase 6** look like?
* No monitoring until users complain * Recurring incidents without root cause fixes * Tech debt piling up * Siloed operations and development ## Footnote Bad practices can lead to system instability and inefficiency.
471
Fill in the blank: **Phase 6** is a __________ for the whole SDLC.
learning engine ## Footnote It emphasizes continuous improvement and adaptation based on operational data.
472
What checklist items indicate if **Phase 6** is functioning well?
* Clear SLIs/SLOs * Near real-time monitoring with alerts * Structured post-incident reviews * Dedicated capacity for maintenance * Regular review of user feedback * Security updates as normal work * Concrete changes based on operational data ## Footnote Affirmative responses to these items suggest effective operations.
473
What does **TDD** stand for?
Test-Driven Development ## Footnote TDD is a software development process where tests are written before the code.
474
What is the first phase of the **core TDD loop**?
Red: Write a *failing* test ## Footnote This phase involves writing a test that expresses a desired behavior which fails because the implementation does not exist yet.
475
What is the second phase of the **core TDD loop**?
Green: Write the *simplest* code to make the test pass ## Footnote Focus on implementing just enough code to satisfy the test without worrying about elegance.
476
What is the third phase of the **core TDD loop**?
Refactor: Clean up code *and* tests with all tests green ## Footnote This phase involves improving the code and tests while ensuring that all tests continue to pass.
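The three phases above can be compressed into one small worked example. The `slug()` helper is made up for illustration:

```python
import unittest

# A compressed Red–Green–Refactor pass on a hypothetical slug() helper.
# Red: the tests below are written first and fail while slug() doesn't exist.
# Green: this is the simplest implementation that makes them pass.
# Refactor: with tests green, names and structure can be cleaned up safely.
def slug(title: str) -> str:
    words = title.strip().lower().split()
    return "-".join(words)

class TestSlug(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slug("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slug("  Many   spaces "), "many-spaces")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlug)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each loop iteration is minutes, not hours: one failing test, one small change, one cleanup.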
477
True or false: TDD is about writing a giant test suite before writing any production code.
FALSE ## Footnote TDD focuses on writing tests for small behaviors before implementing the corresponding code.
478
What types of tests are typically focused on in a **TDD context**?
* Unit tests * Integration tests * End-to-end / system tests ## Footnote TDD primarily focuses on unit tests but exists within a larger testing ecosystem.
479
What is a key benefit of TDD that forces you to clarify behavior?
Specification via examples ## Footnote Writing tests first requires you to define what the code should do, including inputs, outputs, and edge cases.
480
How does TDD drive better design, especially in terms of **API design**?
* Smaller, more focused functions and classes * Clearer boundaries and interfaces * Fewer hidden dependencies ## Footnote Tests act as a client, pushing for better design to facilitate testing.
481
What is the **safety net** provided by TDD?
Regression safety net ## Footnote Tests help ensure that refactoring does not break existing behavior, as failing tests indicate where issues arise.
482
What is a common misconception about TDD?
It’s **not** a guarantee of zero bugs ## Footnote While TDD helps reduce bugs, it does not eliminate them entirely.
483
What should you avoid when writing tests in TDD?
* Testing trivial getters/setters * Writing huge mega-tests * Over-mocking internal collaboration ## Footnote Focus on meaningful behavior and keep tests small and focused.
484
What is the first step in a broader development process that includes TDD?
Understand requirements ## Footnote This involves gathering stories and acceptance criteria before breaking down behaviors into small chunks.
485
What is a key habit to maintain in TDD?
Keep tests small and focused ## Footnote Each test should assert one behavior and have a clear name to improve readability and maintainability.
486
What does the **Red** state in TDD indicate?
A failing test ## Footnote This means the test is meaningful and indicates that the code does not support the desired behavior.
487
What is the purpose of **refactoring** in TDD?
* Remove duplication * Simplify control flow * Improve names, structure, interfaces ## Footnote Refactoring occurs after tests pass to enhance code quality while maintaining functionality.
488
What is an example of a **unit test** in TDD?
Testing a single function/class/module in isolation ## Footnote Unit tests should not involve external services and should run quickly.
489
What is a common anti-pattern in TDD?
Writing tests after code ## Footnote This practice loses the design feedback loop that TDD provides.
490
What is a good practice regarding **mocks** in TDD?
Use mocks/fakes sensibly ## Footnote Mocks simulate collaborators, while fakes provide simplified alternatives to keep tests fast and deterministic.
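"Sensible" mocking usually means mocking at system boundaries (external services), not internal details. A sketch using `unittest.mock`, with an invented payment-gateway collaborator:

```python
from unittest.mock import Mock

# Mock an external collaborator (a payment gateway) at the boundary,
# rather than mocking the function's internals.
def checkout(cart_total, gateway):
    if cart_total <= 0:
        return "nothing to charge"
    return "paid" if gateway.charge(cart_total) else "declined"

gateway = Mock()
gateway.charge.return_value = True

print(checkout(25.0, gateway))  # → paid
gateway.charge.assert_called_once_with(25.0)

# A fake would instead be a tiny real implementation (e.g. an in-memory
# gateway) — preferable when behavior across many calls matters.
```

The test stays fast and deterministic because no real payment network is involved, yet it still verifies the interaction contract.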
491
What is a **fragile test** in TDD?
A test that breaks with simple refactors ## Footnote Fragile tests often indicate that they are testing implementation details rather than external behavior.
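The contrast between fragile and robust tests can be shown with a hypothetical formatter:

```python
# Robust vs fragile tests for a hypothetical price formatter.
def format_price(cents: int) -> str:
    dollars, rem = divmod(cents, 100)
    return f"${dollars}.{rem:02d}"

# Robust: asserts observable behavior, so it survives internal refactors.
assert format_price(1999) == "$19.99"
assert format_price(5) == "$0.05"

# Fragile (anti-pattern): asserting implementation details — e.g. that the
# function calls divmod, or builds the string in a particular order — breaks
# the moment the internals change even though behavior is identical.
print("behavior tests passed")
```

If a refactor that preserves behavior breaks a test, the test (not the refactor) is usually what needs fixing.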
492
What should a solid engineer be able to do with TDD?
* Explain Red–Green–Refactor simply * Write a test first, then code until it passes * Use a unit testing framework ## Footnote These skills ensure effective use of TDD in software development.
493
What are the **two overlapping axes** of testing?
* Levels * Types ## Footnote These axes help categorize how much of the system is being tested and what aspect of behavior or quality is being assessed.
494
What is the **smallest level** of testing?
Unit tests ## Footnote Unit tests verify the correctness of small units of logic in isolation.
495
What is the **goal** of unit tests?
* Verify correctness of small units of logic * Fast feedback for developers ## Footnote Unit tests are crucial for ensuring that individual components function correctly.
496
What are the **characteristics** of unit tests?
* No network, no database, no filesystem (or all mocked) * Millisecond-level runtime * Very high volume ## Footnote Unit tests are designed to be quick and numerous to facilitate rapid development.
497
What do **integration tests** verify?
* Interfaces * Configurations * Wiring ## Footnote Integration tests check how multiple components work together and catch bugs in interactions.
498
What is the **goal** of integration tests?
Verify that interfaces, configurations, and wiring are correct ## Footnote Integration tests are essential for ensuring that components interact as expected.
499
What are the **characteristics** of integration tests?
* Touch real or realistic infrastructure * Slower than unit tests * Fewer in number ## Footnote Integration tests often require more setup and resources compared to unit tests.
500
What do **system / end-to-end (E2E) tests** validate?
Real user flows work from end to end ## Footnote E2E tests assess the entire system's functionality as a black box.
501
What are the **characteristics** of E2E tests?
* Slow, brittle, and expensive * Cover critical 'happy paths' and key error flows ## Footnote E2E tests are comprehensive but can be challenging to maintain.
502
What are **acceptance / user-acceptance tests (UAT)** designed to confirm?
The system does what stakeholders expect ## Footnote UATs are often written from a user perspective and validate business requirements.
503
What is the **test pyramid**?
* Base: lots of unit tests * Middle: fewer integration tests * Top: very few E2E / acceptance tests ## Footnote The test pyramid illustrates the ideal distribution of different types of tests.
504
What do **functional tests** check?
Whether the system does the correct thing functionally ## Footnote Functional tests can exist at various levels, including unit, integration, and E2E.
505
What are **non-functional tests** focused on?
* Performance * Reliability * Scalability * Security * Usability * Compliance ## Footnote Non-functional tests assess qualities beyond just correctness.
506
What are **regression tests** used for?
Ensure bugs don’t come back ## Footnote Regression tests are written to reproduce bugs and confirm they are fixed.
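Turning a bug into a regression test can be as small as this sketch; the bug, function, and ticket ID are all invented for illustration:

```python
# Sketch: a discovered bug becomes a permanent regression test.
# Suppose the reported bug was that parse_qty("") raised instead of
# returning 0. The fix and its pinned test:
def parse_qty(raw: str) -> int:
    """Parse a quantity field; blank input means zero (the fixed behavior)."""
    raw = raw.strip()
    return int(raw) if raw else 0

def test_blank_quantity_is_zero():  # regression test for BUG-123 (illustrative)
    assert parse_qty("") == 0
    assert parse_qty("  ") == 0
    assert parse_qty("3") == 3

test_blank_quantity_is_zero()
print("regression test passed")
```

Once this test is in the suite, the bug cannot silently reappear in a later refactor.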
507
What are **smoke tests**?
Quick checks to see if a build/environment is basically not broken ## Footnote Smoke tests provide an initial verification that the system is functional.
508
What is the purpose of **exploratory / manual testing**?
Human testers exploring the app for edge cases and UX issues ## Footnote Exploratory testing is valuable for discovering issues not easily captured by automated tests.
509
What is the role of **TDD** during design & implementation?
Drives implementation of pure logic and domain services ## Footnote TDD emphasizes writing tests before code to shape the development process.
510
What is the typical order of tests run in **Continuous Integration (CI)**?
* Static analysis / linters * All unit tests * Key integration tests * Some smoke tests / minimal E2E tests ## Footnote This order ensures that the fastest tests are run first to catch issues early.
511
What checks are performed during **pre-release / staging**?
* Larger integration test suites * More end-to-end/acceptance tests * Performance tests * Security tests ## Footnote These checks ensure that the system is ready for production deployment.
512
What is the focus during **production / maintenance**?
* Monitoring & observability * Post-incident loop ## Footnote Continuous testing and monitoring are essential for maintaining system reliability.
513
What should every discovered bug turn into?
A test ## Footnote This practice helps prevent the reoccurrence of bugs in the future.
514
What are the **three core Scrum roles**?
* Product Owner (PO) * Scrum Master * Developers/Development Team ## Footnote These roles provide a mental model for understanding responsibilities in agile teams.
515
The **Product Owner (PO)** is primarily focused on maximizing what?
Product value ## Footnote Responsibilities include owning the Product Backlog and defining acceptance criteria.
516
Key responsibilities of the **Scrum Master** include facilitating which events?
* Daily standups * Sprint Planning * Review * Retrospective ## Footnote The Scrum Master acts as a servant-leader for the team.
517
In a healthy agile team, the **Development Team** includes which roles?
* Backend engineers * Frontend engineers * QA/test engineers * UX/UI designers * DevOps/SRE ## Footnote This group collectively owns the delivery of sprint goals.
518
True or false: The **Development Team** is strictly composed of coders.
FALSE ## Footnote The Development Team includes various roles that contribute to turning backlog items into a Done increment.
519
In agile, requirements evolve how?
Sprint by sprint based on feedback and learning ## Footnote This contrasts with traditional SDLC where requirements are often frozen up front.
520
What does **TDD** stand for in the context of agile implementation?
Test-Driven Development ## Footnote This practice involves writing tests before implementing features.
521
In agile, testing is considered to be __________.
Continuous ## Footnote QA is embedded in the team, designing tests from day one of the story.
522
What is the role of **DevOps/SRE** in an agile context?
* Build and maintain infrastructure * Provide tools for CI/CD * Participate in story slicing ## Footnote They are part of the cross-functional team rather than a separate gate between development and production.
523
What does the **shift-left** security approach entail in agile?
Considering security early, every sprint ## Footnote This contrasts with traditional methods that focus on security reviews only at the end.
524
During sprint planning, who presents the top stories and clarifies value?
Product Owner (PO) ## Footnote The PO plays a crucial role in ensuring the team understands the priorities.
525
What is the main focus of the **Scrum Master** in an agile team?
Flow, process, and people ## Footnote The Scrum Master facilitates events and helps the team improve.
526
What is meant by **cross-functional teams** in agile?
All skills needed to deliver value are present in the team ## Footnote This reduces external dependencies and increases autonomy.
527
What is the **Definition of Done** in agile?
A story isn’t Done until it’s deployable, monitored, and supportable ## Footnote This includes operational responsibilities as part of the development process.
528
What is the role of **UX** in an agile context?
* Work ahead of the current sprint * Participate in backlog refinement * Collaborate with developers during implementation ## Footnote UX is considered a first-class citizen in the backlog.
529
What are the **key principles** that matter most about roles in agile?
* Cross-functional teams * Shared responsibility * Short feedback loops * Accountability, not bureaucracy ## Footnote These principles guide how teams operate and collaborate.
530
What is Jira described as for your agile team?
**shared brain + history + scoreboard** ## Footnote It provides a single place to see what is being built, why, who is doing what, where things are stuck, and whether improvements are being made over time.
531
In an agile SDLC, what does every slice of work become?
**Jira issue** ## Footnote Each issue can be a user story, bug, spike, or task.
532
What are the **types** of Jira issues?
* Story * Task * Bug * Epic * Spike ## Footnote Each type serves a different purpose in the workflow.
533
What are the **statuses** that a Jira issue can have?
* To Do * In Progress * In Review * Testing * Done ## Footnote These statuses are customizable to fit the team's process.
534
What key features does Jira provide for the **Product Owner / Product Manager**?
* Backlog management * Roadmapping (Epics/Releases) * Visibility into progress ## Footnote These features help the PO prioritize and track work effectively.
535
What does Jira help the **Scrum Master / Agile Coach** to monitor?
* Team flow * Blockages ## Footnote It provides tools like boards, workflows, and reports to visualize the process.
536
What does Jira provide for **Developers**?
* Clear, prioritized queue of work * Issue details with context * Integration with code tools * Definition of Done baked into workflow ## Footnote This helps developers understand their tasks and the context around them.
537
What is the QA question that Jira helps answer?
“What should I test, how, and what’s the status?” ## Footnote Jira assists QA with test scope, status tracking, bug tracking, and regression visibility.
538
How does Jira assist **UX / Design** roles?
* Attach/link design artifacts * Design tasks as first-class issues * Participate in backlog refinement * Track implementation ## Footnote This ensures designs are integrated into the development flow.
539
What does Jira help **DevOps / SRE** track?
* Infrastructure stories/tasks * Release versions * CI/CD integration * Change management ## Footnote This visibility helps prioritize infrastructure work alongside feature work.
540
What does Jira allow for **Security / Compliance** tracking?
* Track security work as a separate issue type * Link vulnerabilities to epics/components * Prioritize security fixes in sprints ## Footnote This keeps security items integrated with other work.
541
What does Jira support in **Sprint Planning**?
* Select issues from the backlog * See estimates * Check team capacity * Make sprint goal visible ## Footnote This helps create a committed set of issues for the sprint.
542
What is the purpose of Jira during the **Daily Standup**?
* Pull up the board * Walk the columns * Discuss active issues ## Footnote This provides a shared view for the team to discuss progress.
543
What does Jira provide for the **Sprint Review**?
* Completed issues view * Walk through behavior in the app * Confirm acceptance criteria are met ## Footnote This ties working software back to backlog items.
544
What data does Jira provide for the **Retrospective**?
* Sprint burndown * Blocked issues * Reopened bugs ## Footnote This data helps identify areas for process improvement.
545
What does Jira allow for the **lifecycle** of each piece of work?
* Statuses * Transitions * Workflows per issue type ## Footnote This captures the flow of each issue through mini-phases.
546
What does Jira help with regarding **TDD and testing strategy**?
* Stories & acceptance criteria inform tests * Definition of Done includes testing requirements * Linking test runs to issues ## Footnote This connects planning and tracking with execution.
547
What does Jira **not** do?
* Write requirements * Make you agile * Enforce good testing * Replace direct communication ## Footnote It does make work visible and provides structure for continuous improvement.
548
AI is changing *how* you do **TDD** and *which parts* you still do manually. True or false?
TRUE ## Footnote The skills behind TDD become even more important when an LLM is generating and editing your code.
549
What are the **three shifts** people refer to when they say AI makes TDD obsolete?
* AI can write code from natural-language specs * AI can generate tests from code or specs * AI can auto-refactor and auto-fix failing tests ## Footnote These shifts lead to the perception that formal TDD loops are unnecessary.
550
What are the **three big things** TDD provides?
* Specification * Design * Regression safety net ## Footnote TDD is not just about writing tests first; it encompasses broader benefits.
551
In AI-assisted TDD, the loop shifts from **Human**: Red (write failing test) → Green (write code) → Refactor to what?
* Human: Describe behavior & edge cases * AI: Generate tests + stub code * Human: Run tests, inspect failures, steer design * AI: Patch code / refactor * Human: Curate and tighten tests ## Footnote The mechanics of TDD become more collaborative with AI.
552
AI makes **test-after** workflows cheap but also more dangerous. True or false?
TRUE ## Footnote AI-generated tests can lock in existing bugs as 'correct' behavior, losing the design-pressure aspect of TDD.
553
What does AI allow in the **refactoring** phase of TDD?
* Large refactors can be requested * Tests ensure behavior stays consistent ## Footnote AI reduces the cost of refactoring but increases the importance of having a solid test suite.
554
In what scenarios can AI replace traditional TDD practices?
* Simple / boilerplate code * Legacy systems and characterization tests ## Footnote AI can generate basic tests for trivial code and capture current behavior in legacy systems.
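A characterization test for legacy code can be sketched in a few lines. This is a hypothetical example (the function `legacy_slugify` stands in for untested inherited code); the point is that the assertions record *observed* behavior rather than a spec, so later AI-driven edits can be checked against it.

```python
# Characterization test sketch: pin down what legacy code *currently* does,
# so later (AI-driven) refactors can be checked against it.

def legacy_slugify(title):
    # Stand-in for code inherited without tests or docs.
    return title.strip().lower().replace(" ", "-")

def test_characterize_slugify():
    # These assertions record observed behavior, not a spec: even if an
    # output looks wrong, the test locks it in on purpose.
    assert legacy_slugify("Hello World") == "hello-world"
    assert legacy_slugify("  Mixed CASE  ") == "mixed-case"
    assert legacy_slugify("") == ""

test_characterize_slugify()
```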
555
In which areas is TDD still absolutely necessary?
* Complex domain logic & invariants * API and system design * Regression defense in an AI-edit-heavy world ## Footnote TDD remains crucial for correctness and design pressure in complex scenarios.
556
How is TDD evolving in the context of AI?
* TDD becomes 'Test-Described Development' * TDD becomes more selective * AI makes higher-level checks more viable ## Footnote The intent of TDD remains, but the authoring process is shared with AI.
557
AI is making **manual, line-by-line test writing** less necessary for what types of code?
* Simple behavior * Boilerplate code * Legacy characterization ## Footnote AI reduces the need for strict adherence to TDD rituals in certain areas.
558
What remains essential despite AI's impact on TDD?
* Defining correct behavior clearly * Designing for testability and decoupling * Having a fast, trustworthy regression suite ## Footnote The core ideas of TDD are more valuable with AI editing codebases aggressively.
559
What is the **first step** in the classic TDD loop?
Write a failing test (Red) ## Footnote This involves creating a test that checks the expected behavior before any implementation is done.
560
In classic TDD, what does the **Green** step involve?
Implement minimal code to make tests pass ## Footnote This step focuses on writing just enough code to satisfy the failing tests.
561
What is the **final step** in the classic TDD loop?
Refactor with tests staying green (Refactor) ## Footnote This step involves cleaning up the code while ensuring all tests still pass.
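The three steps above can be shown in miniature. This is a toy sketch using plain asserts instead of a test runner, and `apply_discount` is a hypothetical example function (prices in cents to avoid float-comparison pitfalls):

```python
# Red: the test is written first — with no implementation yet, it would
# fail with a NameError, which counts as "red".
def test_apply_discount():
    assert apply_discount(10000, 20) == 8000   # 20% off $100.00
    assert apply_discount(10000, 0) == 10000   # no discount

# Green: minimal code to make the test pass.
def apply_discount(price_cents, percent):
    return price_cents * (100 - percent) // 100

# Refactor: rename internals, add validation, etc., then re-run the
# same test to confirm behavior is unchanged.
test_apply_discount()
```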
562
What is the **AI-assisted flow** for TDD?
Involves using AI to generate tests and implementation based on behavior descriptions ## Footnote This approach leverages AI to reduce manual coding effort.
563
What does the AI-assisted **Red** step involve?
Describing the behavior instead of hand-writing the first test ## Footnote This allows for faster test creation through AI assistance.
564
In AI-assisted TDD, what is the purpose of asking AI to generate tests?
To create unit tests based on specified behavior ## Footnote This saves time on boilerplate code and allows for a focus on behavior.
565
What is the role of the developer during the AI-assisted **Green** step?
Review AI-generated implementation and verify logic ## Footnote The developer ensures that the AI's output meets the specified requirements.
566
What is a key difference between classic TDD and AI-assisted TDD regarding test writing?
AI generates tests instead of hand-writing them ## Footnote This reduces the manual effort involved in creating test cases.
567
What can AI *replace* in the TDD process?
* Boilerplate test authoring * Simple implementation coding * Mechanical refactoring ## Footnote AI can handle repetitive tasks, allowing developers to focus on more complex issues.
568
What are some aspects that AI *cannot* replace in TDD?
* You as the oracle of correctness * Design judgment * Deciding where tests belong in the architecture ## Footnote Human oversight is crucial for ensuring correctness and design quality.
569
What is the first step in the AI-accelerated TDD workflow?
Start with English, not code ## Footnote Write a short spec of behavior and edge cases to guide the development process.
570
What should you do after generating a test suite with AI?
Review and fix any wrong expectations or missing edge cases ## Footnote This ensures the tests accurately reflect the desired behavior.
571
What is the purpose of running tests after implementing code in AI-assisted TDD?
To verify that the implementation satisfies the tests ## Footnote This step checks for correctness and identifies any issues early.
572
What is the last step in the AI-accelerated TDD process?
Iterate for new features by adding/modifying tests first ## Footnote This maintains the cycle of development and ensures continuous improvement.
573
What is the main difference in control between a **Library** and a **Framework**?
Library: You control the flow; Framework: It controls the flow ## Footnote In a library, you decide when and how to call its functions, while in a framework, you plug your code into it, and it decides when and how your code runs.
574
What does a **Library** provide?
* Reusable functions or components * Control over the overall flow of the program * Ability to mix and match libraries ## Footnote Examples include math libraries, UI component libraries, and HTTP client libraries.
575
Fill in the blank: A **Framework** provides a ______ for your entire app.
skeleton / structure ## Footnote You fill in certain pieces, and the framework orchestrates the flow.
576
What is a key feature of a **Framework** regarding conventions?
Enforces conventions like file structure, naming, lifecycle hooks ## Footnote This helps maintain consistency across applications built with the framework.
577
What does **Inversion of Control (IoC)** mean in the context of a framework?
The framework calls your code instead of you calling the framework ## Footnote This is a fundamental principle that differentiates frameworks from libraries.
578
Name two examples of **Web frameworks**.
* Django * Ruby on Rails ## Footnote These frameworks provide a structured way to build web applications.
579
True or false: React is classified strictly as a **library**.
FALSE ## Footnote While React calls itself a library, it often behaves like a framework due to its ecosystem and usage.
580
What is a quick rule of thumb to determine if you are using a **Library** or a **Framework**?
Who controls the main flow of the program? ## Footnote If you control it, you are dealing with libraries; if the tool controls it, you are inside a framework.
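The rule of thumb can be made concrete with a toy sketch (not a real framework — `ToyFramework` and its methods are invented for illustration). With a library, you call the function when you choose; with a framework, you register a handler and the framework's loop decides when your code runs:

```python
import json

# Library style: YOU drive the flow and call the function when you choose.
payload = json.dumps({"user": "ada"})

# Framework style: you plug handlers in; the "framework" owns the main
# loop and calls YOUR code (inversion of control).
class ToyFramework:
    def __init__(self):
        self.handlers = {}

    def route(self, path):                  # decorator to register a handler
        def register(fn):
            self.handlers[path] = fn
            return fn
        return register

    def run(self, incoming_paths):
        # The framework decides when (and whether) handlers run.
        return [self.handlers[p]() for p in incoming_paths if p in self.handlers]

app = ToyFramework()

@app.route("/hello")
def hello():            # you never call this directly
    return "hi"

print(app.run(["/hello", "/missing"]))   # prints ['hi']
```

The `@app.route` decorator is the "plug your code in" moment: after that, control of when `hello` executes belongs to the framework.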
581
What are the two main categories of **developers** discussed?
* Front-end developers * Back-end developers ## Footnote Each category uses libraries and frameworks differently based on their specific needs.
582
Typical **front-end libraries** include which of the following?
* UI/component libraries: Material UI, Chakra UI, Bootstrap, Tailwind * Utility libraries: Lodash, date-fns, axios, charting libs, form libs ## Footnote These libraries help with specific tasks like styling and HTTP calls.
583
How do front-end developers typically use **libraries**?
* Import specific pieces * Stay in charge of routing, state management, app lifecycle ## Footnote Libraries are like *lego bricks* that plug into existing app architecture.
584
Typical **front-end frameworks** include which of the following?
* React ecosystem with Next.js / Remix * Angular * Nuxt (Vue) * SvelteKit ## Footnote Frameworks provide a structured approach to building applications.
585
How do front-end developers typically use **frameworks**?
* Framework decides routing, data fetching, build pipeline * Adopt framework conventions: folder layout, file names ## Footnote Frameworks serve as the *skeleton* of the entire UI app.
586
Common **back-end libraries** include which of the following?
* Database drivers: pg, MongoDB driver * ORMs: Prisma, TypeORM, SQLAlchemy * Auth & security: JWT libs, bcrypt * Utilities: logging, metrics, parsing ## Footnote Libraries solve specific problems within the back-end architecture.
587
How do back-end developers typically use **libraries**?
* Call library functions explicitly * Manage DB connections, password hashing, logging ## Footnote Libraries provide *low-level tools* for specific tasks.
588
Common **back-end frameworks** include which of the following?
* Node: Express, NestJS * Python: Django, Flask/FastAPI * Java: Spring Boot * C#: ASP.NET Core * Ruby: Rails ## Footnote Frameworks define the structure and behavior of back-end applications.
589
How do back-end developers typically use **frameworks**?
* Frameworks define HTTP handling, routing, and project architecture * Write handlers/controllers for HTTP requests ## Footnote Frameworks provide the *structure* of the entire server application.
590
What are the key differences in how **front-end** and **back-end** developers use libraries?
* Front-end: UI & UX focused, tied to rendering tech * Back-end: Infrastructure focused, less opinionated about app structure ## Footnote Front-end libraries are often swapped more frequently than back-end libraries.
591
What are the key differences in how **front-end** and **back-end** developers use frameworks?
* Front-end: Handle routing, rendering, data fetching * Back-end: Handle HTTP layer, project architecture, patterns ## Footnote Frameworks enforce structure and provide built-in features for back-end applications.
592
On the **front end**, frameworks focus on what aspect of the application?
How the app is loaded, routed, and rendered in the browser ## Footnote This includes considerations for server-side rendering as well.
593
On the **back end**, frameworks focus on what aspect of the application?
How HTTP requests flow through the system and app structure ## Footnote This includes routing and middleware management.
594
Fill in the blank: Libraries are just **_______** you pull off the shelf, while frameworks are the **house you agree to move into**.
tools ## Footnote This analogy highlights the difference in flexibility and structure between libraries and frameworks.
595
What is **inversion of control** (IoC)?
**The moment your code stops being the boss and something else takes over the flow** ## Footnote IoC signifies a shift where the framework or library calls your code instead of the other way around.
596
In a **library**, who is in control?
* You are in control * You call the library when needed ## Footnote The library acts as a toolbox that you utilize as required.
597
In a **framework**, who is in control?
* The framework is in control * It calls your functions/classes when needed ## Footnote This represents the essence of inversion of control.
598
Without IoC, who decides when the program starts and exits?
* You decide when the program starts * You decide when to call library functions * You decide when the program exits ## Footnote Your code is the main driver in this scenario.
599
What does **IoC** promise when you hand control to the framework?
It promises to call your code at the right time ## Footnote This allows you to focus on providing components or handlers.
600
In a **front-end example** with a framework, what does the framework do?
* Renders components * Wires up DOM events * Calls your callback when needed ## Footnote You do not manage the event loop directly.
601
In a **back-end example** with Express, what does the framework handle?
* Accepting TCP connections * Parsing HTTP headers * Routing URLs ## Footnote The framework calls your handler when a request matches a route.
602
What is the **Hollywood Principle** in the context of IoC?
**“Don’t call us, we’ll call you.”** ## Footnote This principle emphasizes that in a framework, you implement hooks/handlers that the framework will call.
603
What are some **pros** of using IoC?
* Consistent structure (routing, lifecycle, rendering) * Less boilerplate code * Centralized caching, logging, error handling ## Footnote These advantages streamline development and maintenance.
604
What are some **cons** of using IoC?
* Must play by the framework’s rules * Harder debugging * Harder to swap frameworks than libraries ## Footnote These drawbacks can complicate development and transition between frameworks.
605
Quick definition: **Inversion of control** is when your code stops directing the program and instead _______.
plugs into a larger system (a framework) that decides when and how your code runs ## Footnote This highlights the shift in control from the developer to the framework.
606
What is an **endpoint** in web/dev terms?
A specific URL + HTTP method combination that your app responds to ## Footnote Examples include: *GET /users*, *POST /users*, *GET /users/123*, *DELETE /users/123*.
607
Name an example of an **endpoint**.
* GET /users * POST /users * GET /users/123 * DELETE /users/123 ## Footnote Each of these is a different endpoint, even though some share the same URL path.
608
Endpoints usually live on a **________** or an **API gateway**.
server ## Footnote They accept requests and return responses (JSON, HTML, files, etc.).
609
What is **routing**?
How the framework decides which handler function to call based on an incoming request ## Footnote It acts as the decision layer between the outside world and your code.
610
List the **pieces** you’ll see in routing.
* Path * HTTP method * Path parameters * Query parameters * Body ## Footnote These components help define how requests are handled.
611
What does the **path** represent in routing?
The URL structure, e.g., /users, /users/:id, /orders/:orderId/items/:itemId ## Footnote It indicates the specific resource being accessed.
612
What are **path parameters**?
Variable segments in the URL, e.g., /users/:id ## Footnote Available in code as req.params.id (Express-style).
613
What are **query parameters**?
Parameters added to the URL, e.g., /users?role=admin&page=2 ## Footnote Available as req.query.role, req.query.page.
614
In routing, what does the **body** refer to?
Data that comes in the request body for methods like POST/PUT ## Footnote This can include JSON, form data, etc.
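The pieces above (method, path, path parameters, query parameters) can be sketched without any web framework. This toy dispatcher does by hand what Express or Flask does for you; the route table and handlers are hypothetical:

```python
from urllib.parse import urlsplit, parse_qs

# (method, path pattern) -> handler; ":id" marks a path parameter.
ROUTES = {
    ("GET", ("users", ":id")): lambda params, query: f"user {params['id']}",
    ("GET", ("users",)):       lambda params, query: f"role={query.get('role', ['any'])[0]}",
}

def match(pattern, segments):
    """Return extracted path params if pattern matches, else None."""
    params = {}
    for p, s in zip(pattern, segments):
        if p.startswith(":"):
            params[p[1:]] = s       # variable segment, e.g. :id -> "123"
        elif p != s:
            return None             # literal segment mismatch
    return params

def dispatch(method, url):
    parts = urlsplit(url)
    segments = tuple(s for s in parts.path.split("/") if s)
    query = parse_qs(parts.query)   # /users?role=admin -> {"role": ["admin"]}
    for (m, pattern), handler in ROUTES.items():
        if m == method and len(pattern) == len(segments):
            params = match(pattern, segments)
            if params is not None:
                return handler(params, query)
    return "404"
```

For example, `dispatch("GET", "/users/123")` extracts the path parameter, while `dispatch("GET", "/users?role=admin")` parses the query string; an unmatched method/path pair falls through to `"404"`.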
615
What does **front end routing** usually mean?
Changing what component/view is shown based on the URL without a full page reload ## Footnote This is often implemented with libraries like React Router.
616
In front end routing, what happens when the URL changes?
The router library chooses the right component to render ## Footnote No network request is required for the route change itself.
617
Summarize **endpoint** in one sentence.
A specific URL + HTTP method that your app responds to (e.g., GET /users/:id) ## Footnote This defines how your application interacts with clients.
618
Summarize **routing** in one sentence.
The rules/mechanism that take an incoming request’s URL/method and choose which code or component should handle it ## Footnote This is essential for directing traffic in web applications.
619
What does **API** stand for?
Application Programming Interface ## Footnote An API defines what you can ask a system to do, what you must send, and what you’ll get back.
620
An API is similar to a **menu at a restaurant** because it tells you what?
* What dishes (operations) are available * What inputs each needs (options, sizes, sides) * What you can expect in return (the dish) ## Footnote The kitchen (implementation) is hidden; you don’t care how they cook it.
621
What is a **library API**?
Function signatures, classes, methods you can call ## Footnote A web API typically refers to a set of HTTP-based operations.
622
Name an example of a **web API** operation.
* `GET /users` * `POST /login` * `GET /orders/:id` ## Footnote Usually returning JSON or similar.
623
What is an **endpoint** in the context of a web API?
One 'item' on the API menu ## Footnote Each endpoint corresponds to a specific operation like `GET /users`.
624
True or false: An **API specification** describes all the endpoints and how to use them.
TRUE ## Footnote It details the URL, method, required parameters, request body, and response.
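An API specification entry might look like the following OpenAPI-style sketch (an illustrative fragment, not a complete document): it names the URL, the method, the required parameters, and the possible responses.

```yaml
paths:
  /users/{id}:
    get:
      summary: Fetch a single user
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The user as JSON
        "404":
          description: No user with that id
```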
625
What happens during the **routing** process when a request hits an endpoint?
The server matches the request to a route definition ## Footnote This involves checking the HTTP method and path.
626
Fill in the blank: **Routing** is the internal switchboard that decides how to handle incoming requests based on the _______.
API endpoint ## Footnote It maps requests to the appropriate handler function.
627
What are the three components that define how an API operates?
* API: The public contract * Endpoints: The addresses implementing that contract * Routing: The internal logic that maps requests ## Footnote This structure helps in understanding how APIs function.
628
What does the **API** provide regarding operations?
It defines what operations exist, how to call them, and what they return ## Footnote This is essential for developers to interact with the application.
629
What are the two important jobs that tools do in the **vibecoding** world?
* Hide plumbing (auth, routing, streaming, RAG, agents) * Give AI structured ways to act (tools, workflows, data access) ## Footnote These tools simplify the development process by managing complex backend functionalities.
630
What do **LangChain** and **LangGraph** provide?
* Open-source frameworks for building LLM-powered agents * Tools, workflows, data access, and integrations with models ## Footnote They help describe workflows in high-level terms and manage orchestration.
631
What is **LlamaIndex** used for?
* Data ingestion * Indexing * Querying * RAG over documents and data sources ## Footnote It provides composable pieces for connecting various data sources.
632
What is the purpose of **Semantic Kernel**?
* Model-agnostic SDK for building AI agents * Orchestrates complex behaviors with planning and skills ## Footnote It is designed to integrate into enterprise applications.
633
What are the main features of **React + Next.js**?
* Server components & API routes * Easy integration with streaming responses * File-based routing ## Footnote They are widely used for building modern web applications.
634
What does the **Vercel AI SDK** provide?
* Toolkit for building AI-driven UIs * Hooks like `useChat` and server utilities for streaming text ## Footnote It simplifies the development of chat interfaces and generative UIs.
635
What functionalities does **LangSmith** offer?
* Tracing and evaluating agents * Full observability of chains, prompts, and metrics ## Footnote It is essential for monitoring AI agent performance.
636
What are the roles of **FastAPI / Django / Rails / Spring / NestJS** in AI applications?
* Clean boundaries for AI endpoints * Hosting routing for tools/agents * Connecting to existing databases ## Footnote These frameworks provide structure and maintainability for AI applications.
637
What is the significance of **task/workflow orchestrators** like Temporal or Dagster?
* Manage long-running, multi-step workflows * Provide reliability, retries, and visibility ## Footnote They are crucial for orchestrating complex AI tasks.
638
In vibecoding, what do **frameworks** and **libraries/SDKs** represent?
* Frameworks: Structure of AI app (agents, tools, workflows) * Libraries/SDKs: Provide components for frameworks and glue code ## Footnote This distinction helps in organizing the development process.
639
What does **Continuous Integration (CI)** focus on?
Integrating code changes quickly and automatically checking for bugs ## Footnote CI helps prevent bugs from piling up by ensuring that every change is validated.
640
List the typical **CI pipeline steps** when you push a commit or open a PR.
* Trigger * Checkout + build * Automated tests * Static checks * Report ## Footnote These steps ensure that code changes are validated before merging.
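The pipeline steps above map onto a CI config file. Below is a sketch of a minimal GitHub Actions workflow (job and step commands are illustrative and assume a Python project):

```yaml
name: ci
on: [push, pull_request]                        # trigger
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4               # checkout
      - run: pip install -r requirements.txt    # build / install deps
      - run: pytest                             # automated tests
      - run: ruff check .                       # static checks
      # the commit status (green/red) on the PR is the report
```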
641
What happens if the CI pipeline fails?
Build is **red**, and you fix it *before* merging ## Footnote This indicates that there are issues that need to be resolved.
642
What does a **green** build indicate in CI?
The change is safe to integrate ## Footnote A green build means all tests and checks have passed.
643
Name some **tools** used for Continuous Integration.
* GitHub Actions * GitLab CI * CircleCI * Jenkins * Azure DevOps ## Footnote These tools help automate the CI process.
644
True or false: Without CI, developers work on short-lived branches.
FALSE ## Footnote Without CI, developers tend to work on long-lived branches, leading to integration challenges.
645
What is the main benefit of **Continuous Delivery (CD)**?
Every change that passes CI is always in a deployable state ## Footnote CD allows for a small manual step to release changes.
646
What is the difference between **Continuous Delivery** and **Continuous Deployment**?
Continuous Delivery requires a manual step to release; Continuous Deployment is fully automated ## Footnote Both involve similar pipeline mechanics but differ in deployment policies.
647
What does a **deployment stage** in Continuous Delivery include?
* CI pipeline passes * Artifacts are created * System automatically deploys to staging * Human decides when to ship to production ## Footnote This ensures that the code is ready for production at any time.
648
In **Continuous Deployment**, what happens after the CI passes?
* Builds artifacts * Deploys them straight to production * Runs smoke tests / health checks ## Footnote This process ensures that every healthy change is deployed automatically.
649
What is the key point of **Continuous Deployment**?
Every healthy change *does* get deployed to users ## Footnote This minimizes waiting time for deployments.
650
How do **CI** and **CD** fit together?
* CI keeps the codebase healthy and integrable * CD gets healthy code into real environments safely ## Footnote CI and CD work in tandem to streamline the development process.
651
Fill in the blank: **Continuous Integration** means every change is automatically built and tested as soon as it’s integrated, keeping `main` ______.
healthy ## Footnote This ensures that the main branch remains stable.
652
What is the one-sentence summary of CI and CD?
**Continuous Integration** means every change is automatically built and tested; **CD** takes those validated changes and continuously pushes them toward or into production ## Footnote This summary encapsulates the essence of both practices.
653
What are **CI/CD tools** primarily used for?
* Watching your repo * Building and testing your code * Shipping to the right environment (dev/staging/prod) ## Footnote They aim to automate the software development lifecycle with minimal human intervention.
654
Name the **common CI/CD tools** that handle the full pipeline.
* GitHub Actions * GitLab CI/CD * CircleCI * Jenkins * Azure DevOps Pipelines * Bitbucket Pipelines * TeamCity * Travis CI ## Footnote These tools manage the build, test, and optional deployment processes.
655
What is the role of **Argo CD** in CI/CD?
GitOps-style continuous delivery for Kubernetes ## Footnote It keeps clusters in sync with the desired state stored in Git.
656
What are the **typical use cases** for CI/CD tools?
* Run tests on every push / PR * Build artifacts automatically * Auto-deploy to staging after CI passes * Continuous delivery to production with approval * Continuous deployment (auto to prod) * Kubernetes + GitOps-style deployments * Multi-service / microservices release orchestration * Compliance, auditability, and enterprise control ## Footnote These use cases illustrate how CI/CD tools can be applied in various scenarios.
657
What is the goal of the use case: **Run tests on every push / PR**?
Don’t let broken code hit `main` ## Footnote This involves checking out the repo, installing dependencies, and running tests.
658
What artifacts might be produced in the use case: **Build artifacts automatically**?
* Docker images * JARs / WARs * npm packages * Mobile app bundles (APK / AAB / IPA) * Static web assets ## Footnote These artifacts are generated from successful builds and are deployable.
659
What is the goal of the use case: **Continuous delivery to production with approval**?
Keep code always deployable; promote to prod via one click / approval ## Footnote This involves a CI build storing artifacts and a CD pipeline promoting them to production after approval.
660
What tools are used for **Kubernetes + GitOps-style deployments**?
* Argo CD * Flux CD ## Footnote These tools ensure that the cluster state matches what is declared in Git.
661
What is the goal of the use case: **Compliance, auditability, and enterprise control**?
Prove who deployed what, where, and when; enforce approval policies ## Footnote This can involve tools like GitLab CI/CD with built-in security scans and compliance reports.
662
True or false: **GitHub Actions** is a CI/CD tool that can be used for simple CD.
TRUE ## Footnote It is suitable for solo or small teams using GitHub.
663
Fill in the blank: **Continuous deployment** means that every green build goes straight to __________.
production automatically ## Footnote This is achieved using CI/CD tools with a policy of deploying on successful builds.
664
What is **refactoring**?
Cleaning up and restructuring existing code to make it easier to understand, maintain, and extend, without changing its observable behavior. ## Footnote Refactoring might involve renaming variables, splitting functions, removing duplicate code, and simplifying complex logic.
665
List some **signs** that indicate code may need refactoring.
* Hard to understand * Painful to add new features * Lots of duplicated code * Complexity causing bugs * Regular maintenance needed ## Footnote These signs suggest that the design and structure of the code may be hindering development.
666
When is it advisable to refactor code?
* When code is hard to understand * When adding a new feature feels painful * When there is lots of duplicated code * When complexity keeps causing bugs * After tests are in place * As part of regular maintenance ## Footnote Refactoring should not change the external behavior of the code.
667
True or false: Refactoring is the same as rewriting everything from scratch.
FALSE ## Footnote Refactoring improves the internal structure without changing external behavior, while rewriting is a complete overhaul.
668
What does the term **'Red → Green → Refactor'** refer to?
A loop from Test-Driven Development (TDD): write a failing test (Red), write code to make it pass (Green), then refactor while the tests stay green. ## Footnote This process ensures that refactoring does not change the existing behavior of the code.
669
Fill in the blank: Refactoring is not just __________ or superficial edits.
formatting ## Footnote Refactoring involves meaningful changes to structure and design.
670
What are some examples of actions taken during refactoring?
* Renaming variables/functions * Splitting a huge function into smaller ones * Removing duplicate code * Improving design * Simplifying complex logic ## Footnote These actions help enhance code clarity and maintainability.
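A small before/after sketch shows what these actions look like in practice (hypothetical functions; the "before" version duplicates logic across branches, the "after" version extracts it without changing behavior):

```python
# Before: two near-identical branches duplicate the name formatting.
def describe_before(user):
    if user["admin"]:
        return user["name"].strip().title() + " (admin)"
    else:
        return user["name"].strip().title() + " (member)"

# After: shared logic extracted; observable behavior is identical.
def describe_after(user):
    name = user["name"].strip().title()
    role = "admin" if user["admin"] else "member"
    return f"{name} ({role})"

# Tests act as the safety net: both versions must agree.
for u in ({"name": " ada lovelace ", "admin": True},
          {"name": "alan", "admin": False}):
    assert describe_before(u) == describe_after(u)
```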
671
What is the purpose of **tests** in the refactoring process?
To provide a safety net ensuring that refactoring does not change behavior. ## Footnote Tests confirm that the code still works the same after refactoring.
672
What is meant by **'code rot'**?
The gradual decline in code quality over time due to lack of maintenance. ## Footnote Regular refactoring helps prevent code rot by keeping the codebase healthy.
673
What is a common practice during regular maintenance of code?
Leave the code cleaner than you found it. ## Footnote This practice involves tidying up code while fixing bugs or adding features.
674
What is the **goal** of refactoring?
To improve the internal structure of code without changing its external behavior. ## Footnote This goal enhances readability, maintainability, and extensibility of the code.