Continuous test data management for microservices, Part 2: Key steps (SD Times, June 14, 2022) https://sdtimes.com/test/continuous-test-data-management-for-microservices-part-2-key-steps/

This is part 2 in a series on applying test data management (TDM) to microservices. Part 1 can be found here.


The continuous TDM process for microservices applications is similar to that for general continuous TDM, but tailored to the nuances of the architecture. The key differences are as follows: 

Step 1(b): Agile Design

Rigorous change impact analysis during this step is key to reducing the testing (and the TDM) burden for microservices applications—especially in the upper layers of the test pyramid and the CD stages of the lifecycle. There are various ways to do this; a few highlights follow: 

(a)   Code-change-based impact analysis (also known as a white-box, inside-out approach). Here we identify which services and transactions are affected by specific code changes made to implement backlog requirements, and then focus testing and TDM efforts on those affected services and transactions. This approach is supported by tools such as Broadcom TestAdvisor and Microsoft Test Impact Analysis, and is most useful for white- and gray-box testing, specifically unit and component testing.  

(b)  Model flow-based impact analysis (also known as a black-box, outside-in approach). Here we do change impact analysis using flows in model-based testing. This analysis helps to highlight key end-to-end or system integration scenarios that need to be tested, and can also be traced down to individual components and source code. This approach is supported by such tools as Broadcom Agile Requirements Designer, and is more beneficial for testing in the upper layers of the test pyramid. 

I recommend a combination of both approaches to ensure sufficient test coverage, while minimizing the number of tests in a microservices context. Based on the change impact set, we prepare test data for the tests discussed in the previous section. 
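The change-impact selection described in (a) can be sketched as a mapping from changed source paths to owning services, and from services to the tests that cover them. This is a minimal illustration only: the path-to-service mapping, service names, and test names are all hypothetical, and real tools derive this mapping from code coverage data rather than path prefixes.

```python
# Hypothetical mapping of source paths to the services that own them.
CODE_OWNERS = {
    "services/orders/": "orders",
    "services/billing/": "billing",
}

# Hypothetical mapping of services to the tests that exercise them. Note the
# cross-service flow test appears under every service it touches.
TESTS_BY_SERVICE = {
    "orders": ["test_orders_api", "test_order_to_invoice_flow"],
    "billing": ["test_billing_api", "test_order_to_invoice_flow"],
}

def impacted_tests(changed_files):
    """Return the sorted set of tests impacted by a list of changed files."""
    impacted_services = {
        service
        for path in changed_files
        for prefix, service in CODE_OWNERS.items()
        if path.startswith(prefix)
    }
    return sorted({t for svc in impacted_services for t in TESTS_BY_SERVICE[svc]})

# A change inside the orders service selects only orders-related tests,
# so TDM effort can be focused on just the data those tests need.
print(impacted_tests(["services/orders/models.py"]))
```

Test data then only needs to be prepared for the selected tests, rather than for the full suite.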

Step 2(a): Agile Parallel Development 

As discussed in the previous section, as part of development, a component developer must also define and implement these APIs:

  •  APIs that allow us to set test data values in the component data store. These are sometimes referred to as mutator APIs. 
  • APIs that allow us to extract test data values, for example, from instances of components in production. These are also known as accessor APIs.
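A minimal sketch of what such mutator and accessor APIs might look like on a single component, using an in-memory dictionary as a stand-in for the component's encapsulated data store. The class and field names are hypothetical; in a real service these would be HTTP endpoints in front of the actual database.

```python
class CustomerComponent:
    """Hypothetical microservice component exposing test-data APIs."""

    def __init__(self):
        # Stand-in for the component's encapsulated data store.
        self._store = {}

    # Mutator API: lets tests seed data without direct store access.
    def put_test_data(self, customer_id, record):
        self._store[customer_id] = dict(record)

    # Accessor API: lets tests (or extraction jobs) read data back out,
    # for example when harvesting values from a production instance.
    def get_test_data(self, customer_id):
        return self._store.get(customer_id)

svc = CustomerComponent()
svc.put_test_data("c-1", {"name": "Ada", "tier": "gold"})
print(svc.get_test_data("c-1"))
```

Because the store is only reachable through these APIs, test data setup respects the same encapsulation boundary as production traffic.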

Developers should use the white-box change impact testing technique discussed above to focus their unit and component testing efforts. 

Step 2(b): Agile Parallel Testing

This is an important stage in which testers and test data engineers design, or potentially generate or refresh, the test data for test scenarios that have been impacted by changes and that will be run in subsequent stages of the CI/CD lifecycle. This assessment is based on the backlog items under development. Testers use the TDM approaches described above for cross-service system testing and end-to-end testing.  

In addition, the test data will need to be packaged, for example, in containers or using virtual data copies. This approach can ease and speed provisioning into the appropriate test environment, along with test scripts and other artifacts.  

Step 3: Build

In this step, we typically run automated build verification tests and component regression tests using the test data generated in the previous step. 

Step 4: Testing in the CD Lifecycle Stages 

The focus in these stages is to run tests in the upper layers of the test pyramid using test data created during step 2(b).  The key in these stages is to minimize the elapsed time TDM activities require. This is an important consideration: The time required to create, provision, or deploy test data must not exceed the time it takes to deploy the application in each stage.  

How do you get started with continuous TDM for microservices?

Continuous TDM is meant to be practiced in conjunction with continuous testing. Various resources offer insights into evolving to continuous testing. If you are already practicing continuous testing with microservices, and want to move to continuous TDM, proceed as follows:   

  • For new functionality, follow the TDM approach I have described. 
  • For existing software, you may choose to focus continuous TDM efforts on the most problematic or change-prone application components, since those are the ones you need to test most often. It would help to model the tests related to those components, since you can derive the benefits of combining TDM with model-based testing. While focusing on TDM for these components, aggressively virtualize dependencies on other legacy components, which can lighten your overall TDM burden. In addition, developers must provide APIs to update and access the test data for their components. 
  • For components that do not change as often, you need to test less often. As described above, virtualize these components while testing the ones that do need testing. In this way, teams can address TDM needs as part of technical debt remediation for these components. 

Continuous test data management for microservices, Part 1: Key approaches (SD Times, June 6, 2022) https://sdtimes.com/microservices/continuous-test-data-management-for-microservices/

Applying TDM to microservices is quite challenging, because an application may have many services, each with its own diverse underlying data store. There can also be intricate dependencies between these services, resulting in a type of ‘spaghetti architecture.’

For these systems, TDM for end-to-end system tests can be quite complex. However, it lends itself very well to the continuous TDM approach. As part of this approach, it is key to align TDM with the test pyramid concept.

Let’s look at the TDM approaches for tests in the various layers of the pyramid. 

TDM Approach for Supporting Microservices Unit Tests

Unit tests exercise the code within the microservice at the lowest level of granularity, typically a function or method within a class or object. This is no different from unit testing for other types of applications. Most test data for such tests should be synthetic. Such data is typically created by the developer or software development engineer in test (SDET), who uses “as-code” algorithmic techniques, such as combinatorial generation. Through this approach, teams can establish a high level of test data coverage. While running unit tests, we recommend that all dependencies outside the component (or even outside the function being tested) be stubbed out using mocks or virtual services.
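Combinatorial generation can be as simple as taking the cross product of each input parameter's interesting values. A small sketch using the standard library, where the parameter names and value sets are purely illustrative:

```python
from itertools import product

# Illustrative parameter domains for a hypothetical payment function.
currencies = ["USD", "EUR"]
amounts = [0, 1, 999_999]          # boundary-style values
statuses = ["new", "retried"]

# Full combinatorial ("all combinations") synthetic test data set.
test_cases = [
    {"currency": c, "amount": a, "status": s}
    for c, a, s in product(currencies, amounts, statuses)
]

print(len(test_cases))  # 2 * 3 * 2 = 12 combinations
```

For larger domains, pairwise (rather than all-combinations) selection keeps the case count manageable while still covering every two-way interaction.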

TDM Approach for Supporting Microservices Component or API Tests

This step is key for TDM of microservices, since the other tests in the stack depend on it.  In these tests, we prepare the test data for testing the microservice or component as a whole via its API.

There are various ways of doing this depending on the context: 

  1. Generate simple synthetic test data based on the API specs. This is typically used for property-based testing or unit testing of the API.
  2. Generate more robust synthetic test data from API models, for example, by using a test modeling tool like Broadcom Agile Requirements Designer. This enables us to do more rigorous API testing, for example for regression tests.
  3. Generate test data by traffic sniffing a production instance of the service, for example, by using a tool like Wireshark. This helps us create more production-like data. This approach is very useful if for some reason it isn’t possible to take a subset of data from production instances. 
  4. Generate test data by sub-setting and masking test data from a production instance of the service, or by using data virtualization. Note that many microservice architectures do not allow direct access to the data store, so we may need special data access APIs to create such test data.  
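For approach 1, generating simple synthetic data from API specs can be sketched as a small generator driven by per-field type rules. The spec fragment below is a hypothetical, heavily simplified stand-in for a real OpenAPI schema:

```python
import random

# Hypothetical, simplified API spec fragment (OpenAPI-like field rules).
SPEC = {
    "order_id": {"type": "integer", "minimum": 1, "maximum": 9999},
    "status": {"type": "string", "enum": ["new", "paid", "shipped"]},
}

def synth_record(spec, rng):
    """Generate one synthetic record conforming to the spec's field rules."""
    record = {}
    for field, rules in spec.items():
        if rules["type"] == "integer":
            record[field] = rng.randint(rules["minimum"], rules["maximum"])
        elif "enum" in rules:
            record[field] = rng.choice(rules["enum"])
    return record

rng = random.Random(42)  # seeded so the generated test data is reproducible
print(synth_record(SPEC, rng))
```

Seeding the generator matters for continuous TDM: the same spec and seed reproduce the same data on every pipeline run.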

Regardless of the approach, in most cases the test data for a microservice must be prepared by the developer or producer of the microservice, and made available as part of the service definition. Specifically, additional APIs should be provided to set up the test data for that component. This is necessary to preserve data encapsulation within a microservice. It is also required because different microservices may have various types of data stores, often with no direct access to the data. 

This also allows the TDM of microservices applications to re-use test data, which enables teams to scale tests at higher layers of the pyramid. For example, a system or end-to-end test may span hundreds of microservices, with each having its own unique encapsulated data storage. It would be very difficult to build test data for tests that span different microservices using traditional approaches.   

Again, for a single component API test, it is recommended that all dependencies from the component be virtualized to reduce the TDM burden placed on dependent systems. 

TDM Approach for Supporting Microservices Integration and Contract Tests

These tests validate the interaction between microservices based on behaviors defined in their API specifications.

The TDM principles used for such testing are generally the same as for the process for API testing described previously. The process goes as follows: 

For contract definition, we recommend using synthetic test data, for example, based on the API specs, to define the tests for the provider component. 

The validated contract should be a recorded virtual service based on the provider service. This virtual service can then be used for consumer tests. Note that in this case, a virtual service recording forms the basis of the test data for the consumer test. 
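A consumer test against such a recorded virtual service can be sketched as below. The recorded response, paths, and field names are hypothetical; in practice the recording would come from a service-virtualization tool replaying captured provider traffic.

```python
# Canned response recorded from the (hypothetical) provider service;
# this recording is the test data for the consumer-side contract test.
RECORDED_PROVIDER_RESPONSE = {
    "status": 200,
    "body": {"customer_id": "c-1", "credit_limit": 5000},
}

def virtual_provider(path):
    """Stand-in for a virtual service replaying recorded interactions."""
    if path == "/customers/c-1":
        return RECORDED_PROVIDER_RESPONSE
    return {"status": 404, "body": None}

def consumer_can_read_credit_limit():
    # The consumer's expectations of the contract: the call succeeds and
    # the field it depends on is present and well-formed.
    resp = virtual_provider("/customers/c-1")
    assert resp["status"] == 200
    assert resp["body"]["credit_limit"] >= 0
    return True

print(consumer_can_read_credit_limit())
```

Because the consumer runs against the recording rather than the live provider, no provider-side test data needs to be provisioned for this test.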

TDM Approach for Supporting an X-service System Test or Transaction Test at the API Level 

In this type of test, we have to support a chain of API calls across multiple services. For example, this type of test may involve invoking services A, B, and C in succession.

The TDM approach for supporting this type of test is essentially the same as that for a single API test described above—except that we need to set up the test data for each of the services involved in the transaction. 

However, an additional complexity is that you also need to ensure that the test data setup for each of these services (and the underlying services they depend on) is aligned, so the test can execute successfully. Data synchronization across microservices is largely a data management issue, not specific to TDM per se, so you need to ensure that your microservices architecture sufficiently addresses this requirement. 

Assuming data synchronization between microservices is in place, the following approaches are recommended to make test management easier: 

  1. As mentioned before, use model-based testing to describe the cross-service system tests. This allows you to specify test data constraints for the test uniformly across affected services, so that the initial setup of test data is correct. This is done using the test data setup APIs we discussed above.
  2. Since setting up test data definitions across services is more time-consuming, I recommend minimizing the number of cross-service tests, based on change impact testing. Run transaction tests only if the transaction, or any of the underlying components of the transaction, have changed. Again, this is a key principle of continuous testing that’s aligned with the test pyramid. 
  3. If there have been no changes to a participating component or underlying sub-component, we recommend using a virtual service representation of that component. This will further help to reduce the TDM burden for that component. 
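The recommendations above can be sketched together: each participating service is seeded through its test-data (mutator) API with data keyed consistently, and an unchanged participant is replaced by a virtual stub. All service names, keys, and record shapes here are hypothetical:

```python
# Aligned test data setup for a transaction spanning services A -> B -> C.
ORDER_ID = "o-42"  # shared key that must be consistent across all services

class Service:
    """Hypothetical service exposing a test-data (mutator) API."""
    def __init__(self, name):
        self.name, self.data = name, {}
    def put_test_data(self, key, record):   # mutator API
        self.data[key] = record
    def handle(self, key):                  # the call made during the test
        return self.data.get(key)

service_a = Service("orders")
service_b = Service("billing")
service_c_stub = Service("shipping-virtual")  # unchanged, so virtualized

# Aligned setup: the same order id is seeded into every participant,
# so the chained calls A -> B -> C all resolve during the test.
service_a.put_test_data(ORDER_ID, {"status": "placed"})
service_b.put_test_data(ORDER_ID, {"invoice": 100})
service_c_stub.put_test_data(ORDER_ID, {"eta_days": 3})

results = [s.handle(ORDER_ID) for s in (service_a, service_b, service_c_stub)]
print(all(r is not None for r in results))
```

If the seeded keys diverged between services, the transaction would fail partway through for reasons unrelated to the behavior under test, which is exactly the alignment problem described above.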

TDM Approach for Supporting End-to-End Business Process or User Acceptance Tests 

The TDM approach for these tests is similar to that for system tests described above, since user actions map to underlying API calls. Such tests are likely to span more components. 

Many customers prefer to use real components, rather than virtual services, for user acceptance testing, which means that the TDM burden can be significant. As before, the key to reducing TDM complexity for such tests is to reduce the number of tests to the bare minimum, using techniques like change-impact testing, which was discussed above. I also recommend you use the change-impact approach to decide whether to use real components or their virtual services counterparts. If a set of components has changed as part of the release or deployment, it makes sense to use the actual components. However, if any dependent components are unchanged, and their test data has not been refreshed or is not readily available, then virtual services can be considered.

The Essential Phone, IBM cybersecurity initiatives, and Nile.js — SD Times news digest (May 30, 2017) https://sdtimes.com/android/essential-phone-ibm-cybersecurity-initiatives-sd-times-news-digest-may-30/

After weeks of teasers, Android creator Andy Rubin has unveiled a new smartphone: The Essential Phone. The phone is being introduced as part of Rubin’s latest company, Essential. According to Rubin, the belief behind Essential is that devices should be personal property, play well with others, shouldn’t become outdated, and should assist the user.

“So why did I create Essential? Well, my hardware engineers wanted me to talk about how we are bringing real passion and craftsmanship back into this category. My software engineers wanted me to talk about our vision for making all devices, even those we don’t make ourselves, play well together. My partners wanted me to talk about how we are using methods that could change how successful technology companies are built forever,” Rubin wrote.

More information about the phone can be found here.

Delphix’s State of Test Data Management
Delphix announced a State of Test Data Management (TDM) survey, which revealed that improved data quality is a major factor in faster application development. The report also found that respondents with the ability to bring high-quality software to market faster had a better chance of survival in today’s software economy.

“Application development teams need fast and reliable test data for their projects. Yet many are constrained by the speed, quality, security, and costs of moving data across environments,” said Iain Chidgey, vice president of sales international at Delphix. “Since it takes significant time and effort to move and manage data, developer environments can take days or weeks to provision. In turn, this places a strain on operations teams and creates time sinks, ultimately slowing down the pace of application delivery.”

Other key findings include: 45% of respondents are taking steps to improve their TDM, and 43% are confident it will improve in the next year.

IBM’s new cybersecurity initiative
IBM Security announced a new initiative to address the cybersecurity worker shortage problem. The company will work with new programs and partnerships that promote a “new collar” cybersecurity workforce strategy. Current initiatives include: a collaboration with the Hacker Highschool project; continued investment in education, training and recruitment; and best practices on how organizations can rethink their own cybersecurity talent models.

“The cybercrime landscape is evolving rapidly, yet many organizations are still approaching their cybersecurity education and hiring in the same way they were 20 years ago,” said Marc van Zadelhoff, general manager of IBM Security. “The truth is that many of the critical cybersecurity roles we need to fill don’t require a traditional four-year technical degree. Industry leaders need to take an active part in resolving the talent issues we’re facing, by investing in new models and extending the pipeline to focus on hands-on skills and experience over degrees alone.”

Developers introduce Nile.js
Nile.js is a new peer-to-peer live video streaming library designed to handle scaling, developed by software engineers Derek Miranda, Kevin Qiu, and Justin Pierson. It uses the power of WebTorrent, which is a distributed file delivery protocol inspired by BitTorrent. Using WebTorrent as the means of broadcasting the stream makes for a better fit than implementing WebRTC peer connections, according to a Nile.js blog.

According to its GitHub page, “Nile.js utilizes Express middleware and socket.io to receive torrent information, broadcast it to as many clients it can comfortably handle who will then send it out to the rest of the clients.” The Nile.js team would like other developers to try out the library and build upon the project.
