Unleashing Databases for the Agile Age

In today's fast-paced world, agility is critical for software teams to deliver value rapidly and respond to changing requirements. While agile methodologies like Scrum, Kanban, and XP have been widely adopted for application development, applying these practices to database work presents unique challenges.

At Agile SQL, our mission is to equip database professionals with the knowledge, techniques, and tools to successfully embrace an agile mindset for database development and operations.

Why Agile for Databases?

Databases are often treated as monolithic, static components that constrain agility. However, an increasing number of organizations are recognizing the benefits of bringing databases into their agile processes:

  • Accelerate time-to-market for database changes
  • Reduce risk through automated testing and controlled deployments
  • Improve collaboration between DBAs, developers, and business stakeholders
  • Align database work with frequent application release cadences

Your Guide to Agile Database Practices

Whether you're just getting started or looking to level up your agile database capabilities, Agile SQL provides comprehensive resources:

Adopting Agile

Agile methodologies have transformed how software teams build and deliver applications. However, several unique database challenges require a tailored approach to going agile.

The Need for Agile Database Practices

Traditional database development follows a waterfall model - requirements are gathered up front, a monolithic schema is designed, and changes are infrequent. This model breaks down in agile environments:

  • Applications evolve rapidly through iterative sprints
  • Database changes struggle to keep pace
  • Rigid schemas become bottlenecks for delivering new features

To align with agile application processes, database work must also become iterative, incremental, and responsive to change.

Common Challenges

Some key challenges development teams face when transitioning databases to agile include:

  • Lack of version control and automated testing for database code
  • Coordinating database changes across multiple application teams
  • Defining requirements and sizing database work for agile sprints
  • Resistance to refactoring database schemas due to perceived risks
  • Compliance requirements for auditing and approving changes

The Agile Database Techniques Stack

The Agile Data (AD) method provides a comprehensive framework for applying agile practices across different aspects of database work:

  • Evolutionary Database Design
  • Database Refactoring
  • Database Change Management
  • Database Version Control
  • Database Continuous Integration

By adopting techniques across this stack, teams can iteratively design, develop, test, and deploy database changes.

Getting Started

To begin your agile database journey, consider the following:

  • Conducting an assessment of current database development processes
  • Identifying gaps compared to agile best practices and prioritizing improvements
  • Implementing version control and automating schema migration scripts
  • Adopting an evolutionary, test-driven approach to database design
  • Establishing processes for continuous integration of database changes

While transforming database work to agile requires upfront effort, the long-term benefits of accelerated delivery and reduced risks make it a worthwhile investment.

Data Modeling

In traditional waterfall projects, data modeling happens upfront - requirements are gathered, a normalized data model is designed, and the database schema is implemented as a monolithic structure. However, this "big upfront design" approach does not align well with agile principles of embracing change and incremental delivery.

Agile data modeling techniques break away from this rigid model, allowing data structures to evolve iteratively alongside application code. This enables teams to deliver working software increments rapidly while deferring design decisions until the last responsible moment.

Evolutionary Data Modeling

Rather than attempting to design a perfect, future-proof schema from the start, agile data modeling follows an evolutionary, test-driven approach:

  • Start with a simple, minimal data structure
  • Model around the current set of core requirements
  • As new requirements emerge, refactor and extend the model incrementally

This "growing" of the data model reduces upfront design effort and embraces the reality that requirements will change over a project's lifetime.

Data Modeling Anti-Patterns

Some traditional data modeling practices that are anti-patterns in agile environments:

  • Big upfront analysis and design
  • Aiming for a fully normalized data model from the start
  • Modeling for hypothetical future requirements

Vertical Data Slicing

A critical agile data modeling technique is vertical data slicing or vertical partitioning. Rather than modeling the entire domain upfront, data is sliced into vertical datasets focused on specific business capabilities or user stories.

This allows teams to iteratively build out slices of functionality without being constrained by a rigid, monolithic schema. Slices can evolve semi-independently and be integrated over time.
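
One lightweight way to express these slices in a relational database is to give each capability its own schema, so each slice owns its tables and can evolve on its own cadence. The names below are purely illustrative:

    CREATE SCHEMA ordering;
    CREATE SCHEMA billing;

    CREATE TABLE ordering.CustomerOrder (
        OrderId     INT PRIMARY KEY,
        CustomerId  INT NOT NULL,
        PlacedOn    DATE NOT NULL
    );

    CREATE TABLE billing.Invoice (
        InvoiceId   INT PRIMARY KEY,
        OrderId     INT NOT NULL,  -- referenced by identifier rather than a cross-slice foreign key
        AmountDue   DECIMAL(10,2) NOT NULL
    );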

Collaborative Modeling

Agile data modeling is a collaborative effort between developers, DBAs, data analysts, and business representatives. Models are continuously groomed and refactored through active communication and feedback loops.

Techniques like model storming, example mapping, and event storming foster shared understanding and drive the evolutionary design process.

Refactoring Databases

A core practice is treating the database schema as code that can be refactored safely, backed by automated tests and versioned change scripts. This enables an evolutionary, incremental approach to data modeling.
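
For example, a column rename can be handled as a small, test-covered change script using an expand/contract pattern rather than a single breaking change (table and column names here are hypothetical):

    -- Expand: add the new column and backfill it; old and new coexist
    ALTER TABLE Customer ADD EmailAddress VARCHAR(320) NULL;
    UPDATE Customer SET EmailAddress = Email WHERE EmailAddress IS NULL;

    -- Applications switch to EmailAddress over one or more releases

    -- Contract: once nothing reads the old column, a later script removes it
    ALTER TABLE Customer DROP COLUMN Email;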

By embracing agile data modeling practices, teams can deliver working software faster while keeping data structures in sync with changing application requirements.

DevOps for Databases

The DevOps movement has transformed how applications are built, tested, and deployed through practices like continuous integration, automated deployments, and infrastructure as code. However, databases are often left out of this automated delivery pipeline.

Applying DevOps principles to database development and operations can unlock significant benefits:

  • Accelerate the delivery of database changes in sync with application releases
  • Improve quality through automated testing and validation
  • Enable self-service provisioning of database environments
  • Increase reliability and reduce downtime through controlled deployments

To achieve true DevOps for databases, teams must adopt a comprehensive set of processes and tooling.

Database Change Management

At the core is treating database schemas and objects as versionable artifacts that can be developed, built, tested, and deployed through an automated pipeline:

  • Version control for all database code (schemas, objects, migration scripts)
  • Automated build processes to generate deployment artifacts
  • Automated testing of database changes through unit/integration tests
  • Approval workflows for database deployments with audit trails
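
As one illustration of versioning migration scripts (the first point above), tools such as Liquibase allow a change to live in version control as a plain SQL file, with the changeset and rollback metadata carried in comments. The author, table, and column names below are made up:

    --liquibase formatted sql

    --changeset jsmith:42
    ALTER TABLE Customer ADD PreferredLanguage VARCHAR(10) NULL;
    --rollback ALTER TABLE Customer DROP COLUMN PreferredLanguage;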

Continuous Database Integration

Integrating database changes continuously allows teams to catch issues early and maintain a releasable database at all times:

  • Automated build triggers on every database code commit
  • Run full test suite against each build artifact
  • Provision temporary database environments for integration testing

Database Deployment Automation

Deploying database changes should be a fully automated, repeatable process with zero downtime:

  • Blue/green or rolling deployments for updates with no downtime
  • Automated rollback capabilities in case of failures
  • Environment configuration management as code (infrastructure as code)
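
One building block for repeatable deployments is making every change script safe to re-run, so that a retried release cannot fail on work it has already done. A minimal sketch in T-SQL, with hypothetical object names:

    -- Guarded change: skipped if a previous run already added the column
    IF NOT EXISTS (
        SELECT 1
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_NAME = 'CustomerOrder'
        AND    COLUMN_NAME = 'ShippedOn'
    )
    BEGIN
        ALTER TABLE CustomerOrder ADD ShippedOn DATE NULL;
    END;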

Self-Service Database Provisioning

Enable developers to self-provision database environments on-demand through automated processes:

  • Templated database environments with sample data
  • Tear down and rebuild environments with the latest changes
  • Integrate with CI/CD pipelines for testing and staging

By adopting a DevOps mindset for databases, teams can deliver changes faster, with higher quality, and in lockstep with frequent application releases.

Legacy Modernization

Many organizations are burdened with aging, monolithic database systems designed for a different era of software development. These legacy databases act as bottlenecks and constraints when adopting agile practices and accelerating delivery cycles.

Modernizing legacy databases is critical for achieving true agility. This involves re-architecting and migrating to new database technologies and designs that enable frequent iterative changes.

Challenges of Legacy Databases

Some critical challenges posed by legacy monolithic database systems:

  • Rigid, inflexible schemas resistant to incremental changes
  • Lack of database version control and change automation
  • Tightly-coupled dependencies across applications
  • Performance/scalability limitations of older technologies
  • Technical debt from years of accumulating quick fixes

These issues make it extremely difficult to evolve legacy databases alongside rapidly changing application requirements using agile methods.

Modernization Strategies

There are several potential strategies for modernizing and migrating away from legacy database architectures:

Re-Architecting

Break up the monolithic database into smaller, decoupled persistence stores aligned with business capabilities or bounded contexts. This separates the concerns and lifecycles of different data domains.

Schema Extraction

For some legacy systems, it may be possible to extract just the active schema and data into a new database, leaving accumulated cruft and technical debt behind.
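
In its simplest form this can look like copying only the live structures and active rows into the new database and leaving obsolete tables behind. A hedged sketch with hypothetical database and table names:

    -- Copy only active customers into the new, cleaner database
    INSERT INTO NewCrm.dbo.Customer (CustomerId, Name, Email)
    SELECT CustomerId, Name, Email
    FROM   LegacyCrm.dbo.Customer
    WHERE  IsActive = 1;

    -- Tables that are no longer read (old audit snapshots, abandoned features) are simply not migrated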

Database Refactoring

Apply a series of small, composable refactorings to incrementally improve and modularize the existing database design over time.

Polyglot Persistence

Adopt a polyglot persistence architecture that leverages different database technologies (relational, NoSQL, etc.) based on the data access patterns required.

Cloud Migration

Migrate the entire database environment to a fully managed cloud database service, which offers greater agility, scalability, and automation capabilities.

Getting Started

A successful legacy database modernization initiative requires the following:

  • Conducting an assessment of the existing system's strengths/weaknesses
  • Defining a target modern database architecture
  • Creating a phased roadmap for incremental migration
  • Implementing database version control and automation
  • Adopting agile development practices like test-driven design

With a pragmatic, incremental approach, legacy databases can be evolved into an enabler rather than a bottleneck for agile delivery.

Tools and Techniques

To effectively apply agile methodologies to database work, teams must adopt robust tools and techniques across the entire database lifecycle. From evolutionary design to automated testing and deployment, having the right processes in place is critical.

Database Version Control

Treating database code as a versionable artifact is foundational. Version control systems like Git, when combined with database source control tools, enable:

  • Tracking and auditing of all database changes
  • Branching and merging of database changes
  • Reverting changes through rollbacks if needed
  • Diff/merge of schema changes across branches

Popular database source control tools include Liquibase, Flyway, and Redgate's SQL Source Control.
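
With a migration-based tool such as Flyway, for example, each change is a plain SQL file whose version-numbered name determines the order in which it is applied; the file below is a hypothetical example of that convention:

    -- V7__add_order_status_lookup.sql
    -- Lives in the repository alongside application code and is applied in sequence
    CREATE TABLE OrderStatus (
        StatusCode   VARCHAR(20) PRIMARY KEY,
        Description  VARCHAR(200) NOT NULL
    );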

Evolutionary Database Design

Rather than a big upfront design, databases should evolve incrementally. Techniques like example mapping, event storming, and data slicing foster an iterative, test-driven approach to data modeling.

Database Refactoring

Like application code, database schemas should be iteratively refactored to improve design, remove technical debt, and support new requirements. Database refactoring tools and frameworks like SQLTools, Prisma Migrate, and Flyway can automate schema migrations.
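
One such refactoring, sketched below with hypothetical names, is renaming a poorly named table while leaving a compatibility view behind so existing consumers keep working during the transition:

    -- Rename the table as part of the migration
    -- (syntax varies by engine; SQL Server uses sp_rename instead)
    ALTER TABLE tbl_cust RENAME TO Customer;

    -- A view under the old name keeps existing readers working
    CREATE VIEW tbl_cust AS
    SELECT CustomerId, Name, Email
    FROM   Customer;

    -- A later migration drops the view once all callers have moved over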

Database Change Automation

Every database change should be built, tested, and deployed through an automated pipeline. Tools for database change automation include:

  • Build servers like Jenkins/Azure DevOps for executing migration scripts
  • Test frameworks like tSQLt for database unit testing
  • Release management tools like Octopus Deploy for controlled deployments
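
As a sketch of what a database unit test can look like with tSQLt, the test below isolates a table and checks a hypothetical dbo.GetOrderTotal function (all names and values are illustrative):

    -- Create a test class and a test that fakes the table under test
    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test order total sums its line amounts]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.OrderLine';
        INSERT INTO dbo.OrderLine (OrderId, Amount) VALUES (1, 10.00), (1, 15.50);

        DECLARE @actual DECIMAL(10,2) = dbo.GetOrderTotal(1);

        EXEC tSQLt.AssertEquals @Expected = 25.50, @Actual = @actual;
    END;
    GO
    -- Typically executed by the CI build
    EXEC tSQLt.Run 'OrderTests';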

Database Continuous Integration

Integrating database changes continuously through a CI process helps catch issues early. Database builds and tests can be plugged into general-purpose CI tools like Jenkins, CircleCI, etc., or run through database-specific CI tools like Redgate's SQL Change Automation.

Self-Service Database Provisioning

Enable developers to self-provision temporary database environments through automated processes and infrastructure-as-code tools like:

  • Docker containers for database environments
  • Terraform/ARM templates for infrastructure provisioning
  • Cloning/refreshing databases from production backups

By leveraging agile tools and techniques, database professionals can accelerate delivery, improve quality, and bring databases into their DevOps practices.

Case Studies

While adopting agile for database work has challenges, an increasing number of organizations across industries are realizing the benefits. Here are some real-world examples of teams that have transformed legacy database processes using agile techniques:

Salesforce - Database Continuous Delivery

Salesforce's core database powers their market-leading CRM, with millions of subscribers and billions of transactions daily. To support rapid application release cadences, their database team implemented a continuous delivery pipeline:

  • Database changes are version-controlled in Git repositories
  • Every commit kicks off an automated build and test process
  • A suite of over 60,000 tests validates database changes
  • Passing builds can be deployed to production databases through automated releases

This allowed Salesforce to release database changes multiple times per day with high quality. Their agile database practices reduced deployment risks and accelerated delivery timelines.

Skyscanner - Microservices Database Migrations

The travel search engine Skyscanner successfully migrated from a monolithic database to a microservices architecture with 50+ databases. They used an evolutionary database refactoring approach:

  • Identified bounded contexts and vertically extracted data slices
  • Implemented database version control and automated migrations
  • Conducted a phased, iterative migration between old and new databases
  • Adopted event sourcing and CQRS patterns to decouple reads/writes

This re-architecting enabled autonomous database lifecycles aligned to microservices, increasing agility and scalability.

Gamesys - Agile Data Modeling

Online gaming company Gamesys embedded agile data modeling practices within their Scrum processes for new product development:

  • Used example mapping to collaboratively model around user stories
  • Implemented database version control and refactoring techniques
  • Adopted a model-storming approach to evolve the data model iteratively
  • Integrated database deployments into their DevOps pipelines

This allowed Gamesys to rapidly design, develop, and deploy new games and features supported by an evolving database.

These case studies demonstrate how applying agile principles like version control, test automation, and incremental design can help database teams accelerate delivery and reduce risks. With the proper techniques, databases can become enablers rather than bottlenecks.