In today's fast-paced world, agility is critical for software teams to deliver value rapidly and respond to changing requirements. While agile methodologies like Scrum, Kanban, and XP have been widely adopted for application development, applying these practices to database work presents unique challenges.
At Agile SQL, our mission is to equip database professionals with the knowledge, techniques, and tools to successfully embrace an agile mindset for database development and operations.
Databases are often treated as monolithic, static components that constrain agility. However, an increasing number of organizations are recognizing the benefits of bringing databases into their agile processes:
Whether you're just getting started or looking to level up your agile database capabilities, Agile SQL provides comprehensive resources:
Agile methodologies have transformed how software teams build and deliver applications. However, several unique database challenges require a tailored approach to going agile.
Traditional database development follows a waterfall model: requirements are gathered up front, a monolithic schema is designed, and changes are infrequent. This model breaks down in agile environments:
To align with agile application processes, database work must also become iterative, incremental, and responsive to change.
Some key challenges development teams face when transitioning databases to agile include:
The Agile Data (AD) approach presents a comprehensive framework for applying agile practices across the different aspects of database work:
By adopting techniques across this stack, teams can iteratively design, develop, test, and deploy database changes.
To begin your agile database journey, consider the following:
While moving database work to an agile model requires upfront effort, the long-term benefits of accelerated delivery and reduced risk make it a worthwhile investment.
In traditional waterfall projects, data modeling happens upfront: requirements are gathered, a normalized data model is designed, and the database schema is implemented as a monolithic structure. However, this "big upfront design" approach does not align well with agile principles of embracing change and incremental delivery.
Agile data modeling techniques break away from this rigid model, allowing data structures to evolve iteratively alongside application code. This enables teams to deliver working software increments rapidly while deferring design decisions until the last responsible moment.
Rather than attempting to design a perfect, future-proof schema from the start, agile data modeling follows an evolutionary, test-driven approach:
This "growing" of the data model reduces upfront design effort and embraces the reality that requirements will change over a project's lifetime.
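This test-driven "growing" can be sketched in a few lines. The following is an illustrative example only, using Python with SQLite as a lightweight stand-in for a real database; the table and column names are hypothetical:

```python
import sqlite3

# The schema starts minimal and grows only when a new requirement,
# expressed as a test, demands it.
conn = sqlite3.connect(":memory:")

# Iteration 1: the first user story only needs customers with a name.
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def test_customer_has_email():
    # Iteration 2's story requires an email address; this test fails
    # until the corresponding migration is applied.
    cols = {row[1] for row in conn.execute("PRAGMA table_info(customer)")}
    assert "email" in cols

# Applying the iteration-2 migration makes the test pass.
conn.execute("ALTER TABLE customer ADD COLUMN email TEXT")
test_customer_has_email()
```

The point is the workflow, not the tooling: each new requirement arrives as a failing test, and the smallest migration that satisfies it is applied, keeping the model no larger than the delivered stories need.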
Some traditional data modeling practices that are anti-patterns in agile environments:
A critical agile data modeling technique is vertical data slicing or vertical partitioning. Rather than modeling the entire domain upfront, data is sliced into vertical datasets focused on specific business capabilities or user stories.
This allows teams to iteratively build out slices of functionality without being constrained by a rigid, monolithic schema. Slices can evolve semi-independently and be integrated over time.
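A minimal sketch of vertical slicing, again using SQLite purely for illustration (the slice names and tables are hypothetical): each slice owns only the tables for one business capability and can be provisioned and evolved on its own.

```python
import sqlite3

# Each vertical slice owns only the schema needed for one business
# capability, so slices can evolve semi-independently.
SLICES = {
    "ordering": [
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)",
    ],
    "catalog": [
        "CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)",
    ],
}

def provision_slice(name):
    """Create an isolated database containing only one slice's tables."""
    conn = sqlite3.connect(":memory:")
    for ddl in SLICES[name]:
        conn.execute(ddl)
    return conn

ordering = provision_slice("ordering")  # knows nothing about the catalog
catalog = provision_slice("catalog")
```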
Agile data modeling is a collaborative effort between developers, DBAs, data analysts, and business representatives. Models are continuously groomed and refactored through active communication and feedback loops.
Techniques like model storming, example mapping, and event storming foster shared understanding and drive the evolutionary design process.
A core practice is treating database schemas as iterative code that can be refactored safely through automated tests and database change scripts. This enables an evolutionary, incremental approach to data modeling.
By embracing agile data modeling practices, teams can deliver working software faster while keeping data structures in sync with changing application requirements.
The DevOps movement has transformed how applications are built, tested, and deployed through practices like continuous integration, automated deployments, and infrastructure as code. However, databases are often left out of this automated delivery pipeline.
Applying DevOps principles to database development and operations can unlock significant benefits:
To achieve true DevOps for databases, teams must adopt a comprehensive set of processes and tooling.
At the core is treating database schemas and objects as versionable artifacts that can be developed, built, tested, and deployed through an automated pipeline:
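The core idea behind migration tools such as Flyway or Liquibase can be sketched in a few lines: versioned scripts are applied in order, exactly once, with the applied versions recorded in the database itself. This is a simplified illustration, not any particular tool's implementation:

```python
import sqlite3

# In real projects these scripts live as files under version control;
# the versions and SQL here are hypothetical.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply every not-yet-applied migration, in version order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)")
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_version")}
    for version, sql in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: a second run applies nothing new
```

Because the database records its own version, the same pipeline can safely run against development, test, and production environments and bring each to the same state.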
Integrating database changes continuously allows teams to catch issues early and maintain a releasable database at all times:
Deploying database changes should be a fully automated, repeatable process with zero downtime:
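One widely used pattern for zero-downtime database deployment is expand/contract: add the new structure alongside the old, backfill while both releases keep working, then remove the old structure once nothing depends on it. A hedged sketch with hypothetical column names, using SQLite for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, phone TEXT)")
conn.execute("INSERT INTO customer (phone) VALUES ('+1 555 0100')")

# Release N (EXPAND): old code still reads `phone`; new code writes both.
conn.execute("ALTER TABLE customer ADD COLUMN phone_e164 TEXT")

# Backfill (here a trivial copy; real backfills run in small batches).
conn.execute("UPDATE customer SET phone_e164 = phone WHERE phone_e164 IS NULL")

# Release N+1 (CONTRACT): only after every reader uses `phone_e164`.
# Done portably via table rebuild (newer databases can DROP COLUMN directly).
conn.executescript("""
    CREATE TABLE customer_new (id INTEGER PRIMARY KEY, phone_e164 TEXT);
    INSERT INTO customer_new SELECT id, phone_e164 FROM customer;
    DROP TABLE customer;
    ALTER TABLE customer_new RENAME TO customer;
""")
```

Each step is individually deployable and reversible, which is what makes the overall change safe to automate.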
Enable developers to self-provision database environments on-demand through automated processes:
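The self-service idea can be sketched as a function any developer or CI job can call: build a disposable database from the same versioned scripts used everywhere else, use it, throw it away. A hypothetical illustration with SQLite standing in for a real provisioning backend:

```python
import os
import sqlite3
import tempfile

# The same migration scripts used in production (hypothetical here).
MIGRATIONS = [
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customer ADD COLUMN email TEXT",
]

def provision_sandbox():
    """Create a throwaway database file and apply every migration to it."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    for sql in MIGRATIONS:
        conn.execute(sql)
    conn.commit()
    return path, conn

path, conn = provision_sandbox()
# ... run experiments or tests against `conn` ...
conn.close()
os.remove(path)  # tear down: sandboxes are disposable by design
```

Because environments are cheap to create and destroy, developers stop sharing (and breaking) a single long-lived dev database.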
By adopting a DevOps mindset for databases, teams can deliver changes faster, with higher quality, and in lockstep with frequent application releases.
Many organizations are burdened with aging, monolithic database systems designed for a different era of software development. These legacy databases act as bottlenecks and constraints when adopting agile practices and accelerating delivery cycles.
Modernizing legacy databases is critical for achieving true agility. This involves re-architecting and migrating to new database technologies and designs that enable frequent iterative changes.
Some critical challenges posed by legacy monolithic database systems:
These issues make it extremely difficult to evolve legacy databases alongside rapidly changing application requirements using agile methods.
There are several potential strategies for modernizing and migrating away from legacy database architectures:
Break up the monolithic database into smaller, decoupled persistence stores aligned with business capabilities or bounded contexts. This separates the concerns and lifecycles of different data domains.
For some legacy systems, it may be possible to extract just the actively used schema and data into a new database, leaving the accumulated cruft and technical debt behind.
Apply a series of small, composable refactorings to incrementally improve and modularize the existing database design.
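One such composable refactoring is "extract table": moving a cluster of related columns out of a wide legacy table into their own table. The sketch below is purely illustrative (all names are hypothetical, and a real migration would also move indexes, constraints, and dependent code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, street TEXT, city TEXT)")
conn.execute("INSERT INTO customer (name, street, city) VALUES ('Ada', '1 Main St', 'London')")

conn.executescript("""
    -- Step 1: create the extracted table.
    CREATE TABLE address (
        customer_id INTEGER PRIMARY KEY REFERENCES customer(id),
        street TEXT,
        city TEXT
    );
    -- Step 2: copy the data across.
    INSERT INTO address SELECT id, street, city FROM customer;
    -- Step 3 (a later release): drop the old columns once nothing reads them.
""")
```

Because each refactoring is small and leaves the database working, they can be applied sprint by sprint rather than as one risky rewrite.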
Adopt a polyglot persistence architecture that leverages different database technologies (relational, NoSQL, etc.) based on the data access patterns required.
Migrate the entire database environment to a fully managed cloud database service, which offers greater agility, scalability, and automation capabilities.
A successful legacy database modernization initiative requires the following:
With a pragmatic, incremental approach, legacy databases can be evolved into an enabler rather than a bottleneck for agile delivery.
To effectively apply agile methodologies to database work, teams must adopt robust tools and techniques across the entire database lifecycle. From evolutionary design to automated testing and deployment, having the right processes in place is critical.
Treating database code as a versionable artifact is foundational. Version control systems like Git, when combined with database source control tools, enable:
Popular database source control tools include Liquibase, Flyway, and Redgate's SQL Source Control.
Rather than a big upfront design, databases should evolve incrementally. Techniques like example mapping, event storming, and data slicing foster an iterative, test-driven approach to data modeling.
Like application code, database schemas should be iteratively refactored to improve design, remove technical debt, and support new requirements. Database refactoring tools and frameworks like SQLTools, Prisma Migrate, and Flyway can automate schema migrations.
Every database change should be built, tested, and deployed through an automated pipeline. Tools for database change automation include:
Integrating database changes continuously through a CI process helps catch issues early. Database builds and tests can run in existing CI tools like Jenkins and CircleCI, or in database-specific CI tools like Redgate's SQL Change Automation.
Enable developers to self-provision temporary database environments through automated processes and infrastructure-as-code tools like:
By leveraging agile tools and techniques, database professionals can accelerate delivery, improve quality, and bring databases into their DevOps practices.
While adopting agile for database work has challenges, an increasing number of organizations across industries are realizing the benefits. Here are some real-world examples of teams that have transformed legacy database processes using agile techniques:
Salesforce's core database powers their market-leading CRM, with millions of subscribers and billions of transactions daily. To support rapid application release cadences, their database team implemented a continuous delivery pipeline:
This allowed Salesforce to release database changes multiple times per day with high quality. Their agile database practices reduced deployment risks and accelerated delivery timelines.
The travel search engine Skyscanner successfully migrated from a monolithic database to a microservices architecture with 50+ databases. They used an evolutionary database refactoring approach:
This re-architecting enabled autonomous database lifecycles aligned to microservices, increasing agility and scalability.
Online gaming company Gamesys embedded agile data modeling practices within their Scrum processes for new product development:
This allowed Gamesys to rapidly design, develop, and deploy new games and features supported by an evolving database.
These case studies demonstrate how applying agile principles like version control, test automation, and incremental design can help database teams accelerate delivery and reduce risks. With the proper techniques, databases can become enablers rather than bottlenecks.