
Best Practice for Implementing Continuous Delivery for Oracle Middleware

Reduce Risk, Decrease Costs and Speed Up Time to Market

Software is a Competitive Advantage

The real value that an organization delivers to the end customer through its products and services is increasingly defined by the software “systems” that underpin them.

The end service delivered to the customer is not performed by a single system, but rather by a patchwork of applications, each performing a particular business function. Oracle Middleware components, such as the Oracle BPM Suite and Oracle SOA Suite, provide the application platform to combine these business applications, like puzzle pieces, into an integrated solution that delivers a seamless and unified experience to the customer.

Organizations are in a digital race, where the speed at which IT can reliably deliver new features and innovations is what sets them apart from their competition. Yet in most organizations, IT projects are failing to deliver, either on-time or on-budget.

Research shows that a typical software project will often waste 40 percent or more of its resources. It also highlights that organizations embracing delivery strategies such as Continuous Delivery or DevOps are able to significantly reduce this waste and are, on average, 5-7 times more productive than their peers.

This paper examines the advantages of adopting Continuous Delivery, and how to implement this in the delivery of Oracle Middleware projects, enabling organizations to rapidly, reliably and repeatedly deliver projects faster, with less risk and less cost.

The Need to Eliminate Waste

Several studies have shown that a typical software project will often waste 40 percent or more of its resources; here, waste is defined as time spent on activities that do NOT add value to the end customer.

Much of the waste in software delivery comes from the progression of software from development to testing and, finally, to operations. Symptoms of waste include re-work as a result of human error or invalid requirements / assumptions, waiting time where an incomplete activity delays the start of other activities, and frequent task switching, where team members move from one task to another without completing the first task properly.

Removing waste can improve operational efficiency, but more importantly, it can reduce the length of the development cycle and increase customer value. Shorter cycles can improve innovation, competitiveness, and responsiveness in the marketplace.

Manual Build and Deployment of Code is Error Prone

Manually building and deploying code is a resource-intensive and highly error-prone process; ask anyone to perform a task tens, hundreds, or even thousands of times and you will find inconsistencies and errors. This is further compounded by the fact that in most organizations different individuals and teams perform these tasks in each environment.

An incorrect deployment is one of the most common causes of issues when promoting code into a staging environment. Small errors, such as the misconfiguration of a middleware component, can cause issues that are difficult to diagnose and rectify, often requiring days or weeks of effort to resolve. As a result, we’re often left with a situation where deployed code fails to work, accompanied by the all too familiar expression: “Well, it worked in my environment!”

These are not one-off issues, but rather a steady drip, drip, drip of issues through all stages of the project lifecycle, resulting in many months of wasted effort and lost productivity, and leading to missed milestones, project delays, and the inevitable cost blowout.

Late Integration

Since manual builds are so time consuming, stressful, and error prone, the natural tendency in a project is to minimize the number of releases into each staging environment and delay them until late in the project, when the code will, in theory, be more stable.

Software components implemented in isolation are full of assumptions about the other components with which they will be integrated. Leaving integration towards the end is a high-risk strategy, since issues with the core architecture or design patterns, for example, may not be exposed until a project is almost complete.

This is especially the case for Oracle SOA and BPM projects, which involve integrating multiple systems together; it is a common mistake for all parties to agree on the interfaces between the systems and then go off and code independently (often for months), with a false sense of security that this is sufficient to avoid the worst issues when the time comes to integrate these pieces together.

System integration and testing is then carried out towards the end of the project, just prior to going into User Acceptance Testing (UAT). Correcting invalid assumptions discovered at this stage in the lifecycle can result in significant time delays, be very costly and may even require significant areas of the code base to be re-written.

Test Teams Idle

One of the biggest wastes in software development is time spent waiting for things to happen. An area where this happens all too regularly is testing.

As previously mentioned, System Integration Testing (SIT) is often pushed back until late in the project, with developers cutting code up until the day before SIT is due to begin. At the eleventh hour, the build is run and the code deployed into the SIT environment, ready for testing to begin.

Unfortunately, for reasons already highlighted, the first delivery into SIT rarely goes smoothly, often requiring weeks or even months of elapsed effort by the development team to get the application to a state where testing can be performed. During this time, the test team is forced to stand by idle.

Even once the first release into SIT has been successfully completed, the issues do not end there. Since manual builds and deployments are error prone, the process of deploying each subsequent release so that it is ready and fit for testing can be arduous. The deployed code will often fail basic “smoke” tests and require extensive troubleshooting and fixing before it’s ready to be tested, again with the test team left helpless on the sidelines.

Apart from wasting significant amounts of the test team’s time, the time spent troubleshooting each release also wastes developer time that should be spent writing the code that delivers business value.

Defects Discovered Late in Delivery

Test teams are caught between a rock and a hard place; with each test cycle starting late for reasons outside of their control, yet the milestones for completing each round of testing remain fixed due to project pressures.

Squeezing the testing into a reduced timeframe means the overall coverage and quality of testing are compromised, resulting in more defects going undetected in each round of testing.

The net effect is that defects are discovered later in the development cycle, or worse, make it into production. It is well known that the longer these defects remain undiscovered, the more effort it takes to troubleshoot and fix, resulting in significant project delays.

The business is frustrated when “development complete” code can’t be released, or unreliable code not fit for purpose is pushed into production – leading to the inevitable fallout and fire-fighting.

What is Continuous Delivery?

The goal of continuous delivery is to help software development teams drive waste out of their process by simultaneously automating the process of software delivery and reducing the batch size of their work. This allows organizations to rapidly, reliably, and repeatedly deliver software enhancements faster, with less risk and less cost.

Continuous Delivery - value, automation, and collaboration

Continuous Integration (CI) is the practice of automatically building and testing a piece of software, either each time code is committed by a developer or, in environments with a large number of small commits or a long-running build, on a regularly scheduled basis.

Continuous Delivery (CD) goes a step further to automate the build, packaging, deployment, and regression testing, so that it can be released at any time into production.

Continuous Deployment takes this another step further: code that passes the pipeline is automatically deployed into production, rather than waiting for the business to decide when to release it.

Work in Small Batches

The batch size is the amount of code under development that is promoted between stages in the development process, such as SIT, UAT, and Pre-Prod.

Under a traditional development process, the code from multiple developers working for weeks or months is batched up and integrated together. During the integration process, numerous defects will surface. Some will be the result of a lack of unit testing, but many will be down to invalid assumptions about how the various pieces of code, developed in isolation, will work together as part of the overall solution.

This is especially the case for Oracle SOA and BPM projects, which involve integrating multiple systems together. It is a common mistake for all parties to agree on the interfaces between the systems and then go off and code independently, with each party making invalid assumptions about how the other systems will behave.

The bigger the batch, the longer these assumptions remain undiscovered, and the greater the number of defects in the batch. A significant amount of the time taken to fix a defect is actually spent trying to isolate the problem and determine the root cause, rather than fixing the problem.

The issue with a big batch is that many of the defects are interwoven, and the volume of code that needs to be analyzed to troubleshoot a defect is greater. In addition, code based on invalid assumptions can require significant re-work once those assumptions are discovered; the longer they remain undiscovered, the greater the amount of invalid code written and the greater the amount of re-work required. As a result, the time taken to identify and fix defects increases exponentially with the batch size.

Continuous delivery promotes the use of small batches, where new features are developed incrementally and promoted into the various test environments on a regular and frequent basis.

Small batches mean problems are caught immediately and instantly localized, making it far simpler to identify the root cause and fix it. In the case of invalid assumptions, these are discovered far earlier in the process, when they are cheap to fix, resulting in higher-quality software.

Software components that are implemented in isolation are full of assumptions about the other components with which they will be integrated. The sooner we can identify these assumptions, the smaller the impact and the associated waste will be. Small batches enable us to integrate these components earlier in their respective development lifecycles, and thus reduce the risk and overall impact on the project.

Process for Releasing / Deploying Software MUST be Repeatable and Reliable

To enable the development team to work in small batches, we need to remove the waste in the current build and deployment process. This requires that the process for releasing/deploying software MUST be efficient, repeatable, and reliable.

This is achieved by automating each step in the software delivery process, as manual steps will quickly get in the way, become a bottleneck, or risk introducing unintended variation. This means automating the build and deployment of code, the provisioning of middleware environments, plus the testing of code.

Minimise Differences Between Environments

A common anti-pattern is deploying to a production-like environment only after development is complete.

It is unfortunately all too common for solutions to fail on first deployment to any environment. Small inconsistencies between environments, such as disparities in the configuration of deployed SOA/BPM composites and OSB services, adapter configurations, WebLogic resources, or applied patches, can cause issues with deployed code that are difficult to diagnose and rectify.

This means that there can be almost no confidence that a particular software release will work successfully if it has never been tested in a production-like environment.

To avoid this, deployment should always be to production-like environments. Each time we make a deployment to any environment, we are making changes to that environment, which means that it is no longer in alignment with production. If the release passes and the code gets promoted through to the next stage and into production, that is not an issue. But if the release fails, we need to restore the environment to its pre-deployment state before deploying the next release.

Build Quality In

W. Edwards Deming, in his famous fourteen points for management, stated:

"Cease dependence on mass inspection to achieve quality and improve the process and build quality into the product in the first place”.

This means ensuring improvement and quality assurance at every step of the value stream, instead of testing only the final product for compliance with requirements.

For software, this translates to writing automated tests at multiple levels (unit, component, and acceptance) and automating their execution as part of the build-test-deploy pipeline.

This way, whenever a commit happens (that is, a change is made to the application, its configuration, or the environment and software stack it runs on), an instance of the pipeline runs, and so do the automated tests that verify and validate business expectations in the form of test cases.
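To make this concrete, the commit stage of such a pipeline often reduces to a small script that the CI server runs on every check-in. The following sketch assumes a Maven-based project and is purely illustrative; the project path is a placeholder rather than anything prescribed by MyST or Oracle tooling.

```bash
#!/usr/bin/env bash
# Minimal sketch of the commit-stage job a CI server might run on every check-in.
# Assumes a Maven-based project; the soa-project path is an illustrative placeholder.
set -euo pipefail

# Build the code and run the unit and component tests bound to the Maven lifecycle;
# the first failing test fails the build, and therefore the pipeline.
mvn clean verify -f soa-project/pom.xml

echo "Commit stage passed: build and automated tests are green."
```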

Automating the Software Deployment Pipeline

During the lifetime of a project, code will be built and promoted to various staging environments such as Development, System Integration and Test (SIT), User Acceptance and Test (UAT), Pre-Prod, and Production.

The overall goal of the deployment pipeline is to detect any changes that will lead to problems in production. At each stage in the deployment pipeline, different levels of testing will be applied. Early-stage tests are targeted at being quick and simple to run, but should capture the majority of the problems, thus providing fast feedback and allowing for quick cycle times.

Later stages in the delivery pipeline provide more comprehensive testing, some of which may be manual or take longer to set up and run, with each successive stage providing increasing confidence in the overall build quality.

Change promotion from CI to Production

Automating the deployment pipeline is the key requirement to enable continuous delivery. This process involves automating the building of the software, the deployment of each build into each staging environment, and the automation of tests conducted in each environment.

We also need to automate platform provisioning, to ensure that each build is deployed to, and tests carried out against, correctly configured and reproducible environments.

As well as automating each step in the process, we also need to automate the end-to-end orchestration of these steps. This will typically consist of a separate deployment pipeline for each staging environment, as well as a top-level view of the entire pipeline, allowing you to define and control the stages and gain insight into the overall software delivery process.

Pipeline Orchestration

One of the key principles of Continuous Delivery is that ALL source artefacts must be committed to a source code management system, such as Subversion or Git. This does not just mean source code; it means anything that is needed to recreate the solution from scratch (including testing).

With this approach, a developer working on a task will take a local working copy of the current integrated code from the repository, write or modify whatever code they need to complete their task, build the code on their development machine, and unit test it. Once everything passes, the new and updated source artefacts are checked in, in other words, committed to the source code management system. This is the point at which we need to initiate our delivery pipeline.

The key enabling technology for this is a Continuous Integration Server, such as Jenkins, Hudson, Bamboo, or TeamCity. The CI Server is set up to monitor the source code management system; as soon as it detects a new commit to the repository, it triggers the initial sequence of stages in the delivery pipeline.
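As an illustration of how this trigger can work, many teams complement polling with a repository hook that notifies the CI server directly. The sketch below assumes a central Git repository and the notifyCommit endpoint exposed by Jenkins' Git plugin; the server and repository URLs are placeholders.

```bash
#!/usr/bin/env bash
# Hypothetical post-receive hook on the central Git repository, nudging the CI
# server instead of waiting for its next poll. The URLs are placeholders.
JENKINS_URL="https://jenkins.example.com"
REPO_URL="ssh://git@scm.example.com/middleware/orders.git"

# The Jenkins Git plugin exposes a notifyCommit endpoint; jobs watching this
# repository will poll it immediately and trigger their pipelines on a new commit.
curl -fsS "${JENKINS_URL}/git/notifyCommit?url=${REPO_URL}" > /dev/null
```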

Automated Build

The pipeline starts by building the binaries to create the deliverables that will be passed to the subsequent stages.


During this process, the CI Server will check out the latest committed version of the code from the source code repository into a temporary directory.

It will then invoke the necessary tasks to build and compile that code. For Oracle SOA, OSB, and BPM, the simplest way to achieve this is to do a direct build with MyST; this will automatically detect the type of source code artefacts and build them accordingly.

Alternatively, these tasks can be scripted using a tool such as Maven or Ant.

The output of this build is then packaged up as a release and placed into a software repository, such as Artifactory or Nexus, ready to be deployed into any of the staging environments.

With composite applications that leverage a number of disparate sources, it is a common anti-pattern to create a separate deployment unit, i.e. a separate build, for each environment. With this approach, endpoint references are manually updated in the required configuration files before packaging up the deployment unit. This introduces the risk that endpoints are not updated correctly or that the output of a build is deployed into the wrong environment.

Best practice is to have a common build and to apply these environment-specific configurations automatically as part of the deployment process.
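For illustration, the sketch below shows the shape of such a build-and-publish step if scripted by hand with Maven and a simple upload to Artifactory; in practice MyST, or a Maven deploy goal, would perform the equivalent work, and all names, coordinates, and URLs here are placeholders.

```bash
#!/usr/bin/env bash
# Illustrative build-and-publish step: one environment-agnostic deployment unit
# per build, published to the software repository for all later stages.
# URLs, artifact names, and coordinates are placeholders; REPO_USER / REPO_PASS
# are assumed to be injected by the CI server's credential store.
set -euo pipefail

VERSION="1.0.${BUILD_NUMBER}"   # BUILD_NUMBER is typically provided by the CI server
REPO_URL="https://artifactory.example.com/artifactory/releases"

# Build the composite once, with no environment-specific endpoints baked in.
mvn clean package -f soa-project/pom.xml

# Publish the single deployment unit, versioned by build number.
curl -fsS -u "${REPO_USER}:${REPO_PASS}" \
  -T soa-project/target/sca_orders.jar \
  "${REPO_URL}/com/example/orders/${VERSION}/sca_orders-${VERSION}.jar"
```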

Stage Deployment Pipeline

During the lifetime of a project, code will be built and promoted to various staging environments, such as Development, System Integration and Test (SIT), User Acceptance and Test (UAT), Pre-Prod, and Production.

For each stage, we tend to maintain a separate deployment pipeline, with successful completion of one pipeline (e.g. CI) being the prerequisite for initiating the next stage of the deployment pipeline. On successful completion of one stage, the next stage can be either triggered automatically or manually initiated.

Whilst each stage is different, it essentially consists of the same basic elements: deploy and configure the latest build into the staging environment, run some smoke tests to validate the deployment, then execute the stage-specific tests.

As part of this we need to automate platform provisioning to ensure that each build is deployed to, and tests carried out against, correctly configured and reproducible environments.
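Pulling these elements together, each stage's pipeline tends to follow the same skeleton. The outline below is a generic, illustrative sketch; the helper scripts are placeholders standing in for whatever provisioning, deployment, and test tooling (MyST, Maven, SoapUI, and so on) is actually in use.

```bash
#!/usr/bin/env bash
# Generic skeleton for one stage (SIT is used as an example) of the pipeline.
# The helper scripts invoked here are illustrative placeholders.
set -euo pipefail

STAGE="SIT"
VERSION="$1"    # the build number being promoted into this stage

./provision-platform.sh "${STAGE}"                 # reset to a production-like platform
./deploy-release.sh     "${STAGE}" "${VERSION}"    # deploy and configure the build
./run-smoke-tests.sh    "${STAGE}"                 # fail fast if the deployment is broken
./run-stage-tests.sh    "${STAGE}"                 # stage-specific functional / integration tests

echo "Stage ${STAGE} passed for release ${VERSION}; ready to promote."
```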

Teardown / Provision Oracle Middleware Platform… Automated

As we highlighted earlier, a common anti-pattern is deploying to a production-like environment only after development is complete. To avoid this, deployment at every stage of the software delivery pipeline should be to production-like environments.

Each time we make a release into a staging environment, we are making changes to that environment. As a result, what is deployed to any environment is the culmination of previous failed releases and configuration changes (i.e. releases that have not made it into production) plus the latest build. This leads to the environment becoming less like production with each new build we deploy; this is known as configuration drift, and it commonly leads to releases failing when promoted to the next environment, or worse still, when promoted into Production.

To fix this, we need to tear down our middleware platform and re-provision it to a state consistent with production prior to each release.

For our CI environment, where we are often making several releases a day, it’s not practical to re-provision the environment prior to every release. Rather, it is better to schedule a re-build of the CI environment nightly.

It’s worth observing that deployments will often fail due to configuration issues. In these cases, developers will often make a manual “quick fix”; this will fix the release in that environment, but those changes are often forgotten, and the same issues are encountered at the next stage. Tearing down our middleware platform and re-provisioning will quickly discourage this unproductive behavior.

MyST is the enabling technology that allows you to quickly teardown and re-provision your Oracle Middleware Environments.

Central to MyST is the Platform Blueprint, an environment-agnostic specification that we use to define a standardized Oracle Middleware topology and configuration, for example, a highly available deployment of the Oracle SOA Suite incorporating OSB, SOA, WSM, and BAM.

Provisioning infographic. Platform Blueprints, Platform Models, Provision on the Infrastructure of your choice.

The Platform Blueprint provides an abstraction layer over the underlying infrastructure, meaning that you can use the same Platform Blueprint to provision production-like platforms across all of your staging environments.

A Platform Model is then used to capture all the environment-specific configuration information required to provision an instance of an Oracle Middleware platform into the selected environment. With this approach, we create a Platform Model for each staging environment, all based on the same Platform Blueprint.

The Platform Blueprint and Model are placed under version control, allowing us to treat configuration as code. This gives us the flexibility to provision a consistent middleware platform across all environments, as well as the ability to roll forward or backward to a different version of the platform, eliminating the possibility of configuration drift.

Deployment and Configuration of Oracle Middleware Code… Automated

The next step in our delivery pipeline is to deploy the latest release into the staging environment.

MyST is the enabling technology that allows you to quickly establish a standardized, repeatable, and automated process for the deployment of Oracle Middleware solutions at each stage in the Software Delivery Pipeline. MyST shifts the experience from a resource intensive and highly error prone process to an automated, predictable, and low risk process that can be performed in a fraction of the time.

Release Blueprints are used to define the artefacts that constitute a release, the configuration requirements for these artefacts, and the configuration changes that will need to be applied to the Oracle Middleware Platform. These blueprints are version controlled, allowing us to treat configuration as code, ensuring strong governance and consistency across the release process.

Build and Deploy - Deployment focus

The deployment fetches the output of the earlier build from the software repository. Before deploying the packaged code, any references it has to other artefacts, such as service endpoint locations, database connection details, and file locations, need to be configured for the target environment. These configurations are defined as part of the Release Blueprint, and automatically applied by MyST during the deployment process.

In addition to deploying the package to the target environment, configuration changes may need to be made to the Oracle Middleware Platform, such as the creation of data sources, JMS Queues, and other resources, as well as the application of patches. These changes are also defined as part of the Release Blueprint, and again automatically applied by MyST as part of the deployment process.
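For a sense of what this involves under the covers, the sketch below shows a rough hand-scripted equivalent using standard Oracle SOA tooling. It is illustrative only: MyST performs the equivalent steps automatically from the Release Blueprint, all URLs, paths, and credentials are placeholders, and the ant-sca-deploy properties shown may differ between SOA Suite versions.

```bash
#!/usr/bin/env bash
# Rough hand-scripted equivalent of a deployment step, for illustration only.
# MW_HOME, DEPLOY_USER, and DEPLOY_PASS are assumed to be set in the job's environment.
set -euo pipefail

VERSION="$1"
REPO_URL="https://artifactory.example.com/artifactory/releases"

# 1. Fetch the exact artefact produced by the earlier build, rather than rebuilding.
curl -fsS -o sca_orders.jar \
  "${REPO_URL}/com/example/orders/${VERSION}/sca_orders-${VERSION}.jar"

# 2. Deploy to the target SOA server, applying an environment-specific config plan
#    that rewrites endpoint references, connection details, and file locations.
ant -f "${MW_HOME}/soa/bin/ant-sca-deploy.xml" deploy \
  -DserverURL="https://sit-soa.example.com/soa-infra/deployer" \
  -DsarLocation="$(pwd)/sca_orders.jar" \
  -Dconfigplan="$(pwd)/config/sit-configplan.xml" \
  -Duser="${DEPLOY_USER}" -Dpassword="${DEPLOY_PASS}" \
  -Doverwrite=true
```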

Automated Testing

Throughout this stage, the new version of an application is rigorously tested to ensure that it meets all desired system requirements. It is important that all relevant aspects — whether functionality, security, performance, or compliance — are verified by the pipeline. The stage may involve different types of automated or (initially, at least) manual activities.

Smoke Test Each Deployment

A deployment script will generally indicate whether it was successful or unsuccessful. But what does this mean? Often a success message is an indication that the software artefact was deployed to the environment, not necessarily that it is working. For this reason, it is important to run smoke tests to ensure that the deployment was truly successful.

Smoke tests should be designed to execute fast and fail fast; if something is fundamentally wrong with the release, we want to know about it quickly in order to resolve the issue speedily. However, we also want the smoke tests to provide reasonable coverage; the last thing we want is to get deep into our environment-specific tests, only to discover a significant issue with the deployment that invalidates all our testing.

We will also want to run smoke tests in each environment. It is for these reasons that smoke tests (as well as unit tests) are among the first candidates for automation.
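A minimal smoke test can be as simple as checking that each key service endpoint is reachable after deployment. The sketch below is illustrative; the endpoint URLs are placeholders for whatever services exist in the target environment.

```bash
#!/usr/bin/env bash
# Minimal post-deployment smoke test: verify each key service answers at all.
# The endpoint URLs are illustrative placeholders for the target environment.
set -euo pipefail

ENDPOINTS=(
  "https://sit-soa.example.com/soa-infra/services/default/OrderProcessing/orderprocessing_client_ep?WSDL"
  "https://sit-osb.example.com/OrderService/proxy/OrderService?WSDL"
)

for url in "${ENDPOINTS[@]}"; do
  # Fail fast: a non-2xx response (or a timeout) aborts the pipeline immediately.
  curl -fsS --max-time 10 -o /dev/null "$url"
  echo "OK: ${url}"
done
```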

Testing Options

Different tests serve different purposes and have different characteristics that impact their suitability for automation: how much effort is required to set up and perform the tests, as well as the frequency at which they are performed.

Test Categories

The accompanying diagram is from the book “Agile Testing” by Lisa Crispin & Janet Gregory, which identifies four categories into which tests can be classified. Tests are grouped along a vertical axis of being business-facing or technology-facing in nature, and a horizontal axis of supporting the team or critiquing the product.

Technology-facing tests that support development activities are usually developed and maintained by developers [Q1]. Unit, component, and deployment tests belong to this category.

Business-facing tests that verify functional or acceptance criteria fall into [Q2]. These are usually defined by business analysts and business-facing test teams.

Business-facing tests that critique the product are the manually executed exploratory tests of [Q3]. These are tests that require real user feedback or validate complicated business assumptions, which are often difficult to codify. Quadrant Q4 covers specialized, tool-driven tests that validate non-functional requirements, such as performance, scalability, and security.

When automating tests, we initially want to focus on automating the tests in Q1, followed by those in Q2. Apart from being simpler to automate, these tests tend to be performed earlier and more frequently in the software deployment pipeline. A key consideration is that the tests in Q3 are expensive and time consuming to perform, due to their manual nature; it is therefore important to catch as many defects as possible earlier in the test cycle through automated tests, so as to minimize the amount of manual testing required.

A similar consideration applies to Q4; while these tests can typically be automated, they are often time-consuming to run (performance tests, for example, may need to run over a period of days), so again we want to minimize the number of Q1 and Q2 defects that make it through to this stage.

When it comes to automating tests, there are a variety of specialist web service / API test solutions available, such as SoapUI and Parasoft SOAtest, which can easily be invoked from your CI Server of choice. For Q4 testing, there are also specialist tools for automating non-functional tests, such as LoadUI or Parasoft SOAtest with LoadTest.
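As an example, SoapUI test suites can be executed headlessly from the pipeline via its command-line test runner. The invocation below is illustrative: the project file, suite name, and endpoint are placeholders, and the exact flags should be checked against the SoapUI version in use.

```bash
#!/usr/bin/env bash
# Illustrative invocation of SoapUI's command-line test runner from a CI job.
# SOAPUI_HOME is assumed to be set; project, suite, and endpoint are placeholders.
set -euo pipefail

# -s selects the test suite, -e points the tests at this environment's endpoint,
# -j and -f write JUnit-style reports where the CI server can pick them up.
"${SOAPUI_HOME}/bin/testrunner.sh" \
  -s "Order Acceptance Tests" \
  -e "https://sit-soa.example.com" \
  -j -f reports/soapui \
  projects/orders-soapui-project.xml
```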

Continuous Delivery Tool Chain

Implementing Continuous Delivery for SOA and BPM projects requires a variety of tools to automate the end-to-end Software Deployment Pipeline. The following diagram summarizes the key tool categories required and highlights the leading tools in each of these categories.

Continuous Delivery tool chain

MyST has been designed from day one to integrate seamlessly with existing tools, enabling organizations to leverage the full power of MyST from within the context of their existing tool chain, as detailed below.

Continuous Delivery

MyST has plugins for popular Continuous Integration tools, including Jenkins, Hudson, and Atlassian Bamboo, allowing you to fully integrate MyST with your CI/CD tool(s) of choice, enabling organizations to leverage the full power of MyST as part of their overall Continuous Delivery solution.

Software Configuration Management

MyST integrates with the majority of popular software configuration tools, including Subversion, Git, Mercurial and Perforce.

Build Automation

MyST integrates with popular build automation frameworks including Apache Ant, Maven, and Shell scripts.

Software Repository

MyST provides integration with software repositories, such as Artifactory, Nexus, and Archiva. This means that at deployment time, MyST retrieves the binaries from the binary repository, rather than building from source for each environment. This ensures consistency of deployed artefacts by using the same package across target environments.

Reduce Risk, Decrease Costs, and Speed Up Time to Market

Organizations are in a digital race, where the speed at which IT can reliably deliver new features and innovations is what sets them apart from their competition.

Leveraging Continuous Delivery for the development of Oracle Middleware projects can deliver significant reductions in development time and costs. Automating the deployment pipeline is the key requirement to enable continuous delivery.

MyST is the key enabling technology allowing organizations to quickly automate the build and deployment of Oracle Middleware code and the provisioning and re-provisioning of Oracle Middleware environments required to support these processes.

In short, adopting Continuous Delivery provides the business with a strategic advantage: the ability to be more responsive, delivering new solutions faster, cheaper, and more often.
