In-Depth Archives - SD Times

Are your metrics right for a remote workforce?

So much of what we do at work has to be measured. There is a sense that, if something cannot be measured, does it even really exist? Certainly, if a project or function cannot demonstrate how it is being measured in a clear, understandable manner, its ability to secure approval or signoff is dramatically reduced.

Metrics, key performance indicators, objectives and key results (OKRs), being able to measure progress – it all links back to a need within organizations to ultimately quantify return on investment. When we all worked in one place, most metrics were tied to outputs – achieve sales targets, ship code, maintain a positive net promoter score.

Changing environments demand new metrics

But how have those ways of measurement changed in the last year? Do they take into account the challenges and opportunities that come with remote working? As Dan Montgomery, the founder and managing director of Agile Strategies, said, the current situation “is a great opportunity to get better at managing people around outcomes rather than tasks or, worse yet, punching a virtual clock to prove they’re working. Many employees working from home genuinely have big challenges, including bored kids, sick relatives and an unending stream of bad news. They need the flexibility right now and will appreciate your trust in them.”

Having that flexibility is particularly critical in uncertain times. “Now more than ever, the goals that we’re setting are so critical for us to be able to navigate what happens next,” Ryan Panchadsaram, co-founder and head coach of What Matters, said.

Defining a clear vision

But how do we set those goals? One mistake many businesses make is not aligning targets and objectives throughout the business. It doesn’t matter whether you’re a start-up, a scale-up or an established sector leader; without a goal at the company level, you’re lost. Chris Newton, VP of Engineering at Immersive Labs, calls this “Vision — it all needs to have a really clear, inspiring, well understood company vision that is really guiding every department in the business. Not just product and tech, but you’re talking about the whole wider business. There has to be a direction, a clear direction for the company.”

Chris was talking as part of a recent Indorse Engineering Leaders panel discussion. Once you have that big vision, he says “Underpinning that is going to be the product and tech side of things. You will have your product vision: ‘what are we trying to achieve for our customers through the product?’ Then you have the engineering vision that underpins the product vision. It is complementary to the product vision, and it supports it. The engineering vision & strategy lines up to delivering the best outcomes for customers through the product vision.”

It is only once that big picture is in place that a business can start to work out how it is going to get there.

The right framework for transparency and function

Chris was particularly keen on Objectives and Key Results, or OKRs. “[An] objectives framework, such as OKRs, can be a really powerful tool in terms of getting that prioritization and alignment right. It’s great to make a clear and visible link between what software engineers and managers are doing on the ground and how that then ties back up to top-level objectives.”

What this brings to an organization is transparency in goal setting. Everyone, from senior executives down to team members, is clear on how objectives are created and how what they do helps drive results.
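
To make that link concrete, here is a minimal sketch of how a team-level objective can be traced back to a company-level one. It is purely illustrative; the objective names, fields and numbers are invented, and real OKR tooling tracks far more than this:

    from dataclasses import dataclass, field

    @dataclass
    class KeyResult:
        description: str
        target: float
        current: float = 0.0

        def progress(self) -> float:
            # Fraction of the target achieved, capped at 100%.
            return min(self.current / self.target, 1.0)

    @dataclass
    class Objective:
        name: str
        owner: str                              # company, department, or team
        key_results: list[KeyResult] = field(default_factory=list)
        supports: "Objective | None" = None     # link up to a higher-level objective

    # Top-level company objective.
    company = Objective("Make onboarding effortless for customers", "company")

    # A team objective explicitly linked to the company objective, so anyone
    # can trace day-to-day engineering work back to the top-level goal.
    team = Objective("Ship a self-serve setup flow", "platform team", supports=company)
    team.key_results.append(KeyResult("Automate manual setup steps", target=20, current=12))

    obj = team
    while obj is not None:                      # walk the chain from team to company
        print(f"{obj.owner}: {obj.name}")
        obj = obj.supports
    print(f"progress on first key result: {team.key_results[0].progress():.0%}")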

Having that process is critical to determining what action is going to be taken. As another panellist, Nik Gupta, Software Development Manager at Amazon, highlighted, getting the basics right is critical. Nik and his team “spend about two months just getting our metrics right. Literally, just figuring out what are the right metrics we should track worldwide – are they instrumented, are they reliable, and how would we validate them, etc. It is absolutely essential to get that framework built before you start delving into ‘what projects are we going to do and why.’ ”
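
Part of that validation work can be automated. The following is a minimal sketch of the idea, not anything from Amazon’s tooling; the thresholds are invented, and loading the series from your metrics store is left to the reader:

    from datetime import datetime, timedelta, timezone

    def validate_metric(points: list[tuple[datetime, float]],
                        max_staleness: timedelta = timedelta(hours=24),
                        min_weekly_points: int = 7) -> list[str]:
        """Sanity-check a metric series before teams build goals on top of it.

        Timestamps are assumed to be timezone-aware (UTC)."""
        if not points:
            return ["no data at all -- is the metric instrumented?"]
        problems = []
        now = datetime.now(timezone.utc)
        last_seen = max(ts for ts, _ in points)
        if now - last_seen > max_staleness:
            problems.append(f"stale: last point seen {last_seen:%Y-%m-%d %H:%M}")
        recent = [v for ts, v in points if ts >= now - timedelta(days=7)]
        if len(recent) < min_weekly_points:
            problems.append(f"sparse: only {len(recent)} points in the last week")
        if any(v < 0 for _, v in points):
            problems.append("negative values -- double-check the instrumentation")
        return problems

    # e.g. problems = validate_metric(load_series("checkout_p95_ms"))
    # where load_series is whatever pulls the series from your metrics store.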

What that looks like is going to vary, and it can be easier for some functions than it is for others, as Smruti Patel, another panellist, highlighted. As Head of LEAP and Data Platform at Stripe, she has found that the former is easier to measure than the latter. For LEAP, “the metrics here are obviously more tangible. It’s easier to measure how much you’re spending on your infrastructure or how much time the customer sees when they make a request.” 

However, on the data infrastructure side “some of the inherent qualities or principles from the platform that the internal users require are security, reliability, availability, and leverage, in terms of product enablement, which then enables Stripe’s users. Here, identifying the right set of metrics for infrastructure kind of work has been a challenge.” 

To solve this, Smruti and her team were looking at leveraging learnings from LEAP and seeing how they could be applied to Data Platform. 

Prepare for change

However, while it is important to be clear on what you should measure, being too rigid once those metrics are defined is counterproductive. Panchadsaram pointed out that “OKRs were never meant to be these rigid rails, they were meant to be a tool for your teams to collectively commit to something.”

In a blog post for O’Reilly.com, former Rent the Runway CTO Camille Fournier echoed this sentiment when she said “measurement needs to be focused on the right goals for right now, and you should expect that what you measure will change frequently as the state of systems and the business changes.”

That can only be achieved when metrics are aligned throughout the organization.

Put simply, for metrics to be relevant in the current climate, they need to be aligned with a company vision which is then cascaded down the organization. It is a process that needs to be rigorous in order to inform the work teams need to do, but it also needs to be flexible. At a time when the situation changes almost daily, it is the only way organizations operating with remote teams are going to develop metrics that are beneficial to the business.

Understanding the new “open” licenses

The Commons Clause was one of the first licenses that came out to try to combat cloud providers. It made headlines and caused an uproar in the open-source community when Redis Labs announced it was switching to the license. Under the clause, users do not have the right to sell the software, meaning third parties cannot sell the software for a fee or as a product or service. 

It was drafted by Heather Meeker, a specialist in open-source software licensing and strategy, and meant to complement other licenses. Applying the Commons Clause to an open-source project means the source code is available and enables users to modify and distribute it, but it does not comply with the Open Source Initiative’s (OSI) 10 guidelines for open source. 

RELATED CONTENT:
The battle of open-source licenses
Open source is a community, not a brand

Since its announcement, Redis Labs has decided to move on from the Commons Clause and created its own Redis Source Available License (RSAL) for Redis Modules, which are modules running on top of open-source Redis. Under RSAL, software can be modified, integrated into an application, used and distributed. It restricts the software from being used as a database, caching engine, stream processing engine, search engine, indexing engine or ML/DL/AI servicing engine. 

Confluent switched some components of its platform to the Confluent Community License in 2018, which allows developers to access the software code, modify it and redistribute it, but does not allow developers to use it in a competing SaaS offering. “‘Excluded Purpose’ is making available any software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online service that competes with Confluent products or services that provide the Software,” the license states. 

Elastic announced earlier this year that Elasticsearch and Kibana would be switching to dual licenses under MongoDB’s Server Side Public License (SSPL) and the Elastic License v2. The Elastic License is a non-copyleft license that has three limitations: developers cannot provide the software as a managed service; circumvent the license key functionality or remove/obscure features protected by license keys; or remove or obscure any licensing, copyright or other notices, the company explained. 

MongoDB’s SSPL is based on the GNU General Public License, and while the company believes it contains all the tenets of what it means to be open source, it has not been approved by the Open Source Initiative because the license contains conditions for providing the software as a service. “If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License. Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Software or modified version,” section 13 of the license states. 

According to Dev Ittycheria, CEO and president of MongoDB, since the company created and switched over to the SSPL more than two years ago, it has not had a negative impact on user adoption or impacted the success of the company.

MariaDB switched to the Business Source License as an alternative to closed source and the open core licensing models. It does not meet the criteria of the OSI because it allows the licensor to make an additional use grant that limits production use.

“If you write a new license, you also should be clear about whether you intend it to be open source or not,” Meeker told SD Times. “Fundamentally, open source licenses have no scope limitations. They cannot be limited by field of use, or time, or number of users — all of the typical limitations you see in proprietary licenses. Most of the new licenses that have been written recently — like the Elastic License 2.0, the Confluent Community License, or the Business Source License — are not open source licenses. Most of the new licenses are source code licenses, and I would put them in the category of source available, though this category is still in its early stages. SSPL was more controversial, as there was some disagreement over whether it was an open source license.” 

“The future of open source is strong and still growing. Some of the new licenses are used as alternatives to open source, but more often, complements to it,” she added.

The battle of open-source licenses

Earlier this year, Elastic reignited the open-source licensing debate when it announced it would be changing its license model to better protect its open-source code. Over the last couple of years, a number of companies — including Redis Labs, MongoDB, Cockroach Labs, and Confluent — have been switching their open-source licenses to avoid what they call “the big code robbery,” where cloud providers like Amazon take a successful open-source project and profit off it as a cloud service without giving back to the community. 

“Cloud vendors do not care about monetizing FOSS projects, they are about getting more workloads running on their infrastructure — hence, to be the preferred destination for such workloads,” said CloudBees’ co-founder and chief strategy officer Sacha Labourey.

Confluent created a new community license, and MongoDB announced its Server Side Public License (SSPL) to combat cloud providers. In January, Elastic announced it would move its Kibana and Elasticsearch open-source projects to a dual license under the Elastic License v2 and SSPL. 

RELATED CONTENT: Open source is a community, not a brand

However, these new licenses that companies are switching to are not considered open source by the Open Source Initiative’s standard, leaving many in the industry to wonder where these companies now stand with open source.  

“These new ‘source available’ licenses contain restrictions to prevent cloud infrastructure providers from building a service out of their code. Early efforts like the commons clause limited ‘commercial use’ broadly and users found that the license language ‘created some confusion and uncertainty.’ Recent efforts by Elastic and others are more surgical. They simply attempt to restrict users from standing up the software alone as a service. The goal of these new licenses is to continue to capitalize on the widespread availability of the software and its source code to gain future customers while shutting out competing SaaS services based on the same code,” Justin Colannino, director of developer policy and counsel at GitHub, wrote in a post.

According to Stephen O’Grady, principal analyst and co-founder of the developer analyst firm RedMonk, while it can be upsetting, the cloud providers are not actually abusing open-source projects if they are still abiding by the rules of the open-source license. “If project owners don’t want certain parties to be able to use their software, they shouldn’t be using open-source licenses,” he said. 

MongoDB argues that under SSPL, developers are still able to access, use, modify and redistribute its code. “We adopted the SSPL license to protect our right to build an innovative business in the Cloud era. We wanted to counter the threat of hyperscale cloud vendors taking our free product and offering it as a service without giving anything back,” said Dev Ittycheria, CEO and president of MongoDB.

Tomer Levy, CEO of Logz.io, a cloud observability platform provider, argues that changing licenses shakes the entire foundation of the open-source philosophy and shows that those in control of popular projects have the ability to take these projects away from the community at any time. “We were disappointed to hear about Elastic’s decision to change to a license which is not truly open source. This is a slap in the face to the engineers that helped build the community and make the open source software the staple that it is today,” he said. 

O’Grady added that changes like these have the potential to blur the definition of what is and isn’t open source, creating more uncertainty in the space. “If these companies genuinely want to protect open source, they would actively and aggressively maintain a bright line of distinction between their source available, proprietary licenses and genuine open source alternatives,” he said.

Elastic made the decision to no longer refer to Elasticsearch or Kibana as open source and instead refer to the projects as free and open. “While we have chosen to avoid confusion by not using the term open source to refer to these products, we will continue to use the word ‘Open’ and ‘Free and Open.’ These are simple ways to describe the fact that the product is free to use, the source code is available, and also applies to our open and collaborative engagement model in GitHub. We remain committed to the principles of open source — transparency, collaboration, and community,” the company explained in a post.

Gordon Haff, a technology evangelist at Red Hat, actually thinks it can be a good thing if a project is successful and popular enough that a big public cloud provider is going to try to compete with it. “There’s a saying in the open-source space that your biggest challenge isn’t to be competed with, it’s to have no one know or care what you do,” he said. 

One way to combat the cloud providers, other than changing your software licensing model, is to form an innovation partnership with the cloud vendor so there is a window during which they can’t just steal your functionality; hopefully, during that window, the project innovates and moves past the threat. 

Angie Byron, core co-maintainer of the Drupal project, thinks creating a form of Creative Commons for open source could help categorize open-source projects into projects that are free to use, projects that require attribution, and so on. “That sort of thing around open-source licenses could be really interesting to explore, because it would allow the expression of what these different projects are trying to do, but through the singular lens of this organization that has proven its importance and its credibility within the community,” she said. 

She also suggested creating social pressures on these companies to do better. Eric Newcomer, CTO at WSO2, thinks we are already seeing Amazon react and change. In response to Elastic, the company created OpenSearch, an open-source fork of Elasticsearch and Kibana, and it is working with the industry to support and maintain the project long-term. Additionally, New Relic recently contributed Pixie, the open-source project for Kubernetes-native observability, to the Cloud Native Computing Foundation, and expanded its relationship with Amazon to run Pixie on AWS. 

Amazon “is the lead right now in this market. They have the capability to just take a leadership position in solving new problems through collaboration and open source,” said Newcomer. “What we need is more standard ways of interacting with them, standard platforms that all cloud providers should implement to solve the problems in the way of people so they’re not in this situation of having to pick and choose, which is difficult for everyone.”

Apple, Google, Microsoft, Mozilla form WebExtensions Community Group

The World Wide Web Consortium (W3C), which sets international standards for the web, has announced the formation of the WebExtensions Community Group (WECG). WebExtensions is an API for developing extensions for different web browsers. 

Apple, Google, Microsoft, and Mozilla are among the first to initiate this group, but the WebExtensions Community Group also welcomes other browser vendors and extension developers to join. 

“With multiple browsers adopting a broadly compatible model for extensions in the last few years, the WECG is excited to explore how browser vendors and other interested parties can work together to advance a common browser extension platform,” the W3C team wrote in a post.

The goal of WECG is to come up with a common vision for extensions and work on standardization. 

More specifically, it hopes to make it easier to develop extensions by providing a consistent model and common set of functionality, APIs, and permissions. It also plans to outline an architecture that improves performance, is more secure, and is more resistant to abuse. 

According to the W3C team, the WECG’s work will be driven by a set of HTML and W3C TAG principles: user-centered, compatibility, performance, security, privacy, portability, maintainability, and well-defined behavior. 

It will use the existing extensions model and APIs currently supported by Chrome, Edge, Firefox, and Safari as a starting point. 

The W3C team also clarified that it does not want to come up with a specification for every aspect of the web extensions platform. “We want browsers to keep innovating and shipping APIs that may serve as the basis for further improvement of the web extensions platform.

“In addition, we don’t plan to specify, standardize or coordinate around extension signing or delivery. Each browser vendor will continue to operate their extension store fully independently, with their own technical, review, and editorial policies,” the W3C team wrote. 

Open source is a community, not a brand

It’s no longer a question of why you should use open source. The tables have turned, and businesses are asking themselves why they aren’t using it. But an even bigger question has been left unanswered, and that is how are they using open source? Are they staying true to the meaning of open source? 

As open source has become increasingly popular, companies have begun to adopt open source for the brand, but then try to go against the purpose of open source, according to Gordon Haff, a technology evangelist at open-source company Red Hat. “I’ve definitely been on a lot of calls where one of the first things I’ll ask business leaders is why do you want to be open source, and often the answer is: because our customers seem to like that, but we don’t want Amazon to compete with us. We don’t want someone else to compete with us. We want to be able to maintain some proprietary parts of our software,” he said.   

RELATED CONTENT: The battle of open-source licenses

Open source itself has never gotten away from its meaning, according to Vicky Brasseur, author of the book “Forge Your Future with Open Source.” The problem, she said, is that people haven’t bothered to learn or understand the true meaning of open source. “They make up their own definitions of open source, or they do it via the telephone game…and so the definition they’re working under in no way relates to what it actually is,” she said. According to Brasseur, the Open Source Initiative (OSI) defined open source over 20 years ago, and that is the one true meaning there is.

The Open Source Initiative’s definition of open source

OSI’s open source definition states that open source goes beyond just accessing the source code. To be open source, the software must comply with the following 10 criteria: 

  1. Free redistribution, 
  2. Source code, 
  3. Derived works, 
  4. Integrity of the author’s source code, 
  5. No discrimination against persons or groups, 
  6. No discrimination against fields of endeavor, 
  7. Distribution of license, 
  8. License must not be specific to a product, 
  9. License must not restrict other software, 
  10. And the license must be technology-neutral.

“That is the one, the only, the worldwide recognized standard,” said Brasseur. “Standards are very important because otherwise we can be using the same words and mean completely different things, and from a business perspective, that can be devastating for people to be using different words or the same word open source and meaning different things. There is no other definition of open source.”

Creating a business model around open source

According to Robin Schumacher, vice president of product at open-source monitoring solution provider Netdata, the reason open source has been so successful is the social aspect of it. Unlike proprietary software, it’s collaborative. It’s community-oriented and community-driven.

There are ways for a business to successfully use open source to their competitive advantage while staying true to the nature of open source, but open source shouldn’t be adopted just because it makes a company look good. “Your primary responsibility as a business owner, as a founder, as a manager of an organization, of a business, of a company, is not necessarily to open source. It is to your business,” said Brasseur. “If you are starting from open source and then trying to reverse engineer a business out of that, you’re coming at it from the wrong direction.” 

RELATED CONTENT: Making open source work for you and your business

A business should be looking at what the user needs, what the environment is they are targeting, what the trends are, whether or not they can meet those user needs or do it better than someone else, and then decide if it makes sense to use open source or release software to open source, Brasseur explained. If open source makes sense for the business goal, then companies need to put the effort into building the community around open source and understanding what the goal of releasing to open source is going to be. “If you don’t know your business goals, you won’t be able to maintain and guide that open-source project in a way that you can actually meet your business goals,” said Brasseur. 

According to Sacha Labourey, co-founder of enterprise software delivery company CloudBees, there are a number of models and tools today to make sure organizations are able to properly manage and govern the use of free and open-source software (FOSS). “We talk a lot about FOSS, but the reality is that it has been incredibly stable in how it operates and the value it provides. What has really been evolving fast are the various business models around FOSS,” he said. 

One of the best and most proven models out there is the open core model, according to Schumacher. In the open core development model, vendors open-source a portion of their software, but surround it with proprietary offerings. While it is valid from a business model perspective, Red Hat’s Haff noted that it’s important to recognize the open core model makes things a lot harder for the community to do collaborative open development.  

It takes a lot of time for people to figure out how to use the code, set it up properly and then maintain it, explained Angie Byron, core co-maintainer of the Drupal project, an open-source web content management framework. What companies like Acquia, a digital experience platform built around Drupal, and Red Hat do is provide a cloud platform that takes all the guesswork out for users and provides users with professional services and a support system. 

When projects and vendors commercialize open source, they have to understand there are various levels of commitments and contributions they are going to get from the community. It’s not always about code contributions, Schumacher said. There are other ways the community can help out; for instance, through testing, quality assurance, performance testing, bug reports, feature requests, forum contributions, meetups, and sharing best practices and pitfalls.

Giving back to the open-source community

Technology giants like Google, Red Hat and others have been the most successful in the open-source world because they embrace the developer. “The love of the developer, the understanding that the developer is the set of ground troops that takes the technology into a particular enterprise, ingrains it into the lines of business, then it begins to bubble up to the higher-ups who see the benefits of what’s going on or just the proliferation of this software, and have no choice but then to make a commitment to it,” said Netdata’s Schumacher. 

A successful open-source vendor will provide a very smart and qualified developer relations staff, he explained. “You are going to need people who understand the spirit, mindset and everything of the developer community, of open source in general…” he said. 

Schumacher has three pillars for a successful developer relations staff:

  1. Community managers who are active in the industry and evangelizing the software, participating and scheduling meetups and events, are present on social media, and are broadcasting the benefits of projects to the open-source community
  2. Skilled technical members who are responsible for helping the community implement the open-source software and providing best practices, jump-starts, sample apps, and code contributions
  3. Lastly, you need an educational aspect that goes beyond how to use the software and talks about the next steps in terms of how to utilize the software to the user’s advantage. This area should include videos, written content and other resources to provide users with a path to success. 

“The developer relations staff is absolutely critical for any vendor that wishes to work with open-source software, commercialize and be successful,” said Schumacher.

However, author Brasseur warns that while developer relations and open-source program offices can be beneficial, you have to make sure you are hiring the right, qualified people. “There are great people out there for this, but there aren’t nearly as many experienced people for this.” You can’t just hire internally because a developer contributed to an open-source project once, she explained. 

Other ways organizations can give back or get involved in the community include getting involved in industry initiatives or open-source foundations. Organizations “have to change their mindset from, we’re just going to develop what we think we need to be competitive to let’s help develop what the industry needs,” said Eric Newcomer, CTO at WSO2, an API management company. “One of the reasons open source is so successful is because people can collaborate on a shared vision of a common problem that everybody has.”

It’s not as easy as telling organizations to give back though, Drupal’s Byron explained. She said you have to incentivize companies to give back.

At Drupal, the project created a contribution record where contributors and committers can show how they are helping to sustain the project and the Drupal Association. “Hammering on that is probably the best way to do it because companies are probably not going to contribute out of the kindness of their heart. They need to have an incentive that matches with their return on investment,” Byron said.

She also explained that contributing to open source not only helps solidify an organization as an expert in their field, but it helps gain and retain talent because many developers want to work for companies that make time for open source. Contribution credits can help weed out the true open-source experts from the pretenders. “If you are selling yourself as an AWS vendor, but you have no record of ever contributing to anything around the AWS ecosystem, it’s sort of like, well did you just take a test and now you’re calling yourself an expert versus if you can see the trail of this person making contributions, writing blog posts and such, it’s easy to choose between the two. One is literally establishing themselves as an expert,” Byron added.

The challenges facing open source today

Vicky Brasseur, author of the book “Forge Your Future with Open Source,” sees three main issues plaguing the open source landscape today. 

  1. The influx of open-source projects: According to Brasseur, there has been a flood of new projects being released. While that can be a good thing, it can also be problematic if organizations are just releasing things into open source to be trendy. She explained it makes the signal-to-noise ratio off-balance and makes it difficult to find useful projects. “It’s contributing to this age-old problem of reinventing the wheel, rather than perhaps contributing back to the existing wheel that’s already there,” she said. It’s tempting to want to release something rather than contribute to something, but you don’t necessarily have to start everything from scratch. Support what’s already out there, fork it, or take it into a different direction, according to Brasseur. 
  2. Lack of knowledge: Knowledge should go beyond just the definition of open source and free software. Businesses and developers need to understand the copyright and licensing details that go behind open source. Developers that “play fast and loose” with the laws, Brasseur said, make it difficult for companies to use their software because they have to take the time to figure out what the license is and how they can use the software. Too many hours are wasted just talking about and chasing down licensing information; a first pass over dependency licenses can at least be automated, as in the sketch after this list.
  3. Monocultures: Brasseur sees a number of monocultures plaguing the open-source ecosystem through fiscal sponsors, tooling and foundations. “These monocultures are a problem. All you need to do is watch Twitter on any day when GitHub is down. All of open source screeches to a halt. That is a huge problem. People equating open source with GitHub, that is a problem… I like GitHub, they do good things, but from an ecosystem point of view, that’s a problem. Projects that assume the only place I can go to have somebody support me from a foundational level is the Linux Foundation, that is a problem. There are lots of different options. The Linux Foundation does a very good job in many ways, but it’s not the be-all and end-all. Companies that think in order to participate in open source, I have to pay to become a member of a foundation, that is a problem,” she explained.  — Christina Cardoza
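
As a rough illustration of that automated first pass mentioned in the second item above, the sketch below uses only Python’s standard packaging metadata. It reads whatever license string each installed package declares, so it is a triage time-saver, not legal advice:

    from importlib.metadata import distributions

    def license_report() -> dict[str, str]:
        """Map every installed distribution to its declared license, if any."""
        report = {}
        for dist in distributions():
            name = dist.metadata["Name"] or "<unknown>"
            # Packages declare licenses inconsistently: sometimes in the License
            # field, sometimes only in a Trove classifier, sometimes not at all.
            declared = dist.metadata["License"]
            if not declared or declared == "UNKNOWN":
                classifiers = dist.metadata.get_all("Classifier") or []
                declared = next((c for c in classifiers
                                 if c.startswith("License ::")), "UNDECLARED")
            report[name] = declared
        return report

    if __name__ == "__main__":
        for package, declared in sorted(license_report().items()):
            print(f"{package:30} {declared}")
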
Open-source software in the enterprise

Red Hat’s 2021 State of Enterprise Open Source report found 90% of IT leaders are using open source in the enterprise, and 79% expect their use of enterprise open-source software for emerging technologies (edge, IoT, AI and ML) to increase over the next couple of years. The main drivers for adopting open source are infrastructure modernization, digital transformation, higher quality software, access to latest innovations, and better security. 

This year, the company decided to ask respondents whether they look at a vendor’s contributions back to open source when implementing a new solution. Surprisingly, the report found that IT leaders not only care, but they are much more likely to choose a vendor who contributes. “That means the IT leaders are starting to appreciate the virtuous cycles that you have in open-source development,” said Gordon Haff, a technology evangelist at open-source company Red Hat.

But barriers still remain with respondents citing level of support, compatibility, and lack of internal skills as top challenges to adopting open source. 

Software solutions provider Perforce, which recently released a report on open-source opportunities with Forrester Research, believes that while open source has cemented its role as a critical agenda driver in the enterprise, not enough organizations are taking the necessary steps to optimize their OSS strategies. 

“Without comprehensive and optimized strategies that govern the critical pillars of running OSS, organizations risk missing out on the benefits it can deliver, including greater flexibility and better efficiency, time to market for products, customer and employee experiences, and more,” the report stated. 

While free and open, open source can be complex and require expertise to maintain, support and operate. According to the Perforce report, it’s important to partner with industry leaders to maximize open-source success through migration help, ongoing management and support. Additionally, an open-source strategy that can clarify the open source initiatives, governance, role of internal resources and external support can help pave the way for open source in the enterprise. 

“Finding success with open-source software as an enterprise organization requires a fully formed strategy – especially as it applies to critical areas like support,” said Rod Cope, CTO at Perforce Software.

Low code meets the urgency of today’s rapidly changing world

It should come as no surprise that low code was instrumental in facilitating the large-scale changes many companies had to undergo last year, and continues to be an important part of many organizations’ strategies moving forward.

In fact, an upcoming survey by IT company ServiceNow and Radar Media shows that 45% of respondents have adopted low-code platforms and that 79% say now is an optimal time to invest in low code. 

According to John Bratincevic, senior analyst at research firm Forrester, there were two major use cases for low code and no code in the past year: building new apps or adding onto existing ones. Examples of new apps developed using low code include medical clinics needing to build an app to route patients to different parts of a building based on COVID rules, or vendors making apps for distributing PPP loans, he explained. 

“One vendor wrote a solution in 48 hours and sold it to like 25 regional banks. So they themselves got into a whole new business line overnight, well, in two days. And then the banks of course could adopt the solution. Lots of people self-served and made their own using the platforms,” he said. 

RELATED CONTENT:
2021: The year of low code
Businesses in 2021 think high for low-code

Adding onto or changing existing apps was also very easy using low-code platforms. For example, Bratincevic recalled a retailer that had to get into the delivery business quickly and because they’d already used low code to build their important applications, it was as simple as adding a new module on top of that application to handle delivery and transportation management. 

“You can build stuff faster, you can change stuff faster and easier, and more people can do it,” said Bratincevic. “In the context of the current need — COVID was a very desperate need. Just in general the sheer amount of software that needed to be made and changed, if you look at the numbers, it’s just ridiculous. It’s just the right thing at the right time at the right level of maturity, and the economic and social factors all kind of colliding.”

Changing needs require development speed 

One of the key benefits of low-code development is speed. Traditional software takes a long time to deliver, and sometimes by the time it actually has been delivered, requirements have changed.

ServiceNow and Radar Media’s survey found that low code cut development time at least in half. Forty-two percent of respondents had a 2x reduction in development time and 43% saw reductions of 3x. 

In addition to being able to build solutions faster, low code also provides the ability to make changes quickly, without compromising on quality, Bratincevic explained. “There’s a lot of quality checks in these platforms, so like if you’re going to delete a field in a database it’ll stop and go ‘hold it, if you do that it’ll break these hundred other things. Here’s how it fits into the architecture.’ There’s a lot of quality checks that are built into the products that make it so that, in addition to being able to develop quickly, you can change quickly with a certain level of quality maintained,” he said.
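
A toy sketch of that kind of reference check shows how little machinery the core idea needs; the artifact map and function names below are invented for illustration, not taken from any particular low-code platform:

    def references_to(field_name: str, artifacts: dict[str, list[str]]) -> list[str]:
        """Every app artifact (form, report, workflow...) that uses the field."""
        return [name for name, fields in artifacts.items() if field_name in fields]

    def delete_field(field_name: str, artifacts: dict[str, list[str]]) -> None:
        dependents = references_to(field_name, artifacts)
        if dependents:
            # Refuse the destructive change and explain how the field fits
            # into the app, rather than silently breaking its dependents.
            raise ValueError(f"'{field_name}' is still used by: {', '.join(dependents)}")
        print(f"field '{field_name}' deleted")

    artifacts = {
        "order_form": ["customer_id", "total"],
        "monthly_report": ["total"],
        "refund_workflow": ["customer_id"],
    }
    try:
        delete_field("customer_id", artifacts)
    except ValueError as blocked:
        print("blocked:", blocked)   # two artifacts still depend on the field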

Bratincevic added that it would be nearly impossible to build and change software at the scale and speed that’s needed using only traditional methods, in the traditional working pattern of developers only doing software and business people only doing business work. “To me that’s the kind of big thing, it’s sort of the technology key for many firms to really transform,” he said. 

The pandemic has made companies more wary of something like this happening again and how they could respond. “A lot of people had systems that couldn’t change to respond to whatever the different needs of COVID were and that was a huge problem,” said Bratincevic. “So people I think are changing their approach to say ‘what do we do when this happens again? How do we build that concrete ability to change into the systems?’”

For example, when people started working from home and not driving their cars, insurance companies needed to have a way to change their billing systems quickly to be able to issue billions of dollars in refunds. “That’s a fast big change to really core systems that theoretically aren’t supposed to change very much. That paradigm of there being these systems that don’t change very much so we can kind of leave those be, but maybe there’s some kind of narrow set of systems that need to change or are very unique, that kind of broke. You realize for everything you need to be able to plan for change and be able to do it quickly, or make new stuff,” said Bratincevic. 

Younger workers adopting low code

Hari Subramanian, founder of no-code tool provider Appify, believes the generational change in the workforce and their customer bases—both shifting younger—is also contributing to low code’s success. 

Younger workers tend to be very tech-savvy, having lived most of their lives surrounded by technology. Younger workers with no development experience might be able to leverage that knowledge to go into a low-code or no-code platform and create an application from scratch. 

At the same time, younger customers are expecting modern digital experiences, Subramanian explained. “They want to be able to at the click of a button get a $14 pizza and track it until it reaches their door …  If I’m going to meet a salesperson, I need that same modern digital experience. Things need to be available at my fingertips. I need rapid access to rich information. I need to be able to engage in a very rich way and that demand is being placed on businesses as well. And it kind of comes back to no-code/low-code platforms as a way for businesses to accelerate and deliver to that need,” he said.

In addition to the age of workers, the age of the company also plays a role in adoption. According to Jinen Dedhia, co-founder of low-code platform DronaHQ, new companies are able to adopt low code without a ton of baggage. He compared the low-code movement to the introduction of the Ford motor car. “You always want to go from A to B the fastest and you have horse carts, which can take you there (horse carts in our world are developers and tooling), but tools like low code/no code are like the Ford motor car. You get to do things extremely fast. And the proof of the pudding, the ones who experience it would definitely not look at anything else.”

Larger, more established companies might have some experiences with low code, however. For example, a company with a Microsoft ecosystem could get started using Microsoft PowerApps. “But you won’t see a lot of adoption because a lot of people won’t do it unless you are a power user or somebody who will do very well with SharePoint and so and so. Large enterprises are going for the citizen developers and in smaller companies they are basically making full-fledged systems, mission-critical applications,” said Dedhia. 

IT still key to low-code success

Low code may have been a popular choice this year, but a few years ago reception among developers and architects was mixed. A 2018 Progress survey of 5,565 developers revealed that 28% of developers and 20% of managers had a positive opinion on low code. The rest fell into categories such as “skeptical” (37% of developers and managers combined), “negative” (21%), “customization and flexibility seen as shortcoming” (17%), and “good for simple apps and prototyping but not suitable for complex ones” (16%). 

The increasing push to adopt low-code/no-code tools might have developers and IT teams worried, but the need for those technical roles isn’t going away any time soon, Dedhia explained. These solutions enable you to build faster, but technical expertise is still needed. 

“You definitely need engineering skill sets. It’s just that without these tools a typical engineer would take 10 days and with these tools an engineer could go about building it in a day’s time,” said Dedhia.

Even after an application is built, those skill sets are still needed. Once applications are live, they need to be maintained long-term. “Even if they start off with building their applications they have to move at some point in time to IT for maintenance,” said Dedhia. “You need IT to maintain and you need IT to do governance.” 

In addition, there are some limitations to these tools that require users to turn to their development or IT teams. Dedhia gave the example of a low-code platform not allowing you to create an API endpoint for accessing the data. 

“Even if you do not have an out-of-the-box way of doing it on the platform, there will always be workarounds,” said Dedhia. “And there will always be ways and means in which you can accomplish and get things done. I think IT companies who are taking up low code/no code should have clarity on their expectations and the willingness that if they’re adopting low code/no code, they might encounter scenarios where they might have to look beyond augmenting the low-code capabilities.”
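
One common shape for such a workaround is a thin service that wraps whatever export the platform does offer. The sketch below assumes a hypothetical platform exposing a CSV export URL; the URL, route, field names and choice of Flask are all illustrative, not specific to any vendor:

    # Thin read-only API over a hypothetical low-code platform's CSV export.
    # Requires: pip install flask requests
    import csv
    import io

    import requests
    from flask import Flask, jsonify

    EXPORT_URL = "https://lowcode.example.test/apps/orders/export.csv"  # hypothetical

    app = Flask(__name__)

    @app.get("/api/orders")
    def orders():
        resp = requests.get(EXPORT_URL, timeout=10)
        resp.raise_for_status()
        rows = list(csv.DictReader(io.StringIO(resp.text)))  # CSV rows -> dicts
        return jsonify(rows)

    if __name__ == "__main__":
        app.run(port=5000)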

Scaling up Agile requires a change of Pace

Software teams and organizations today are looking to scale faster than ever. The pressure to release features at an increasing rate, while keeping bugs to a minimum, is only exacerbated by the growing size of the dev teams needed to deliver said features. We add more and more devs to a team, but only get incremental returns, all the while the experienced, senior devs seem to be delivering less and less of their high-value code, which can make or break the product. 

The approaches that have gotten us this far have stalled. Instead of adding people to a team, in order to grow we need to look differently at how the people already in a team work together.

Before we dig into that, let’s look at the story so far.

The Waterfall model was essentially the first project management method to be introduced to the software engineering industry. At the time, software was just a small part of larger infrastructure projects. 

RELATED CONTENT: Agile at 20: Where it’s been and where it’s going

Due to the rigidity of specifications for those projects, there was little room for variation in the process. Any changes made to the specifications during development would have resulted in high costs, so the very rigid and structured approach of Waterfall worked well.

As software became more prominent in business use, and ultimately personal use, there was a rise in the number of much smaller software applications. This, in turn, resulted in a rise in the number of software companies creating such applications, and a rise in the issues with the rigid and deterministic approach of Waterfall.

Agile was born out of the need to address these challenges. Software teams identified that it was far more useful to have a process that could respond to changes in customer requests, get something basic working quickly, and then adjust and iterate from there. 

Sprints, the most applied aspect of Agile development, enabled software companies to create value for customers much quicker. It also enabled teams to be more responsive and reduce the amount of rework that resulted from changing specifications.

And here we are in the present times. Despite the evolution of software development approaches through the years, and the benefits that have come with it, issues that arise from team and organization growth remain unresolved. 

So, what is going on? 

Let’s take a small development team and follow them as they scale. 

Our dev team, part of a start-up, has five developers. Of the five, one is an extremely experienced senior developer, another couple are senior devs, and the last two are juniors with far less experience. Before the juniors came on board, the three senior developers would coordinate themselves and just get on with it. But as the team has grown, they have needed to add a bit more structure into their sprint planning and execution, so that the whole team had plenty of work to do for the fortnight. 

As well as this, the most senior dev started to spend his time assisting the two new juniors. Naturally, this limits the other work that he can do. 

Coincidentally (or perhaps not a coincidence at all) two new pressures have arisen: to produce more features and, at the same time, to fix up the quality based on the bugs resulting from the new developments. Our most senior dev, who has become the de facto team leader, complains to the founder about needing more assistance. They are, of course, old mates who have been in the business from the start, so he convinces the founder to authorize more hires. 

At this point the team has a real structure and is sure to plan out everyone’s work to ensure they’re getting the most out of the team. This growing team requires a fair amount of the senior dev’s time, but that’s to be expected to keep the machine running. On top of this, the founder gets calls with ‘urgent’ customer requests and ignores the sprint load to expedite them into the senior dev’s workload.

Back to the question: What’s going on? Why would teams all over the world do this?

These issues don’t arise from malice and they certainly don’t arise from stupidity (given the calibre of minds involved). Instead, they come from two assumptions we make about how teams should operate.

Firstly, we assume all team members’ contributions are equal. At the end of the day there is no “I” in “team” and we all do our bit around here. 

This assumption is evident in the way we plan work. We hold planning meetings where everyone has a say in what work they should be doing. The focus in these planning meetings is on the inputs and an even spread of load, rather than on the output of the team. 

Secondly, we assume time not working is wasted time. The more everybody does, the more we get done as a team. Right?

This becomes obvious in situations where we have a team member who has an hour to spare in a day. Instead of being comfortable with letting that team member twiddle their thumbs, we will find something ‘more useful’ for them to do. Maybe start investigating a quick bug fix? 

These assumptions are based on reasonable efficiency drivers we have as human beings, but these assumptions don’t apply effectively to software teams.

Let’s examine them more deeply!

1. All team members contribute equally to the output of the team 

Every team has one person who is the most skilled person in that team. This gap in the skill level is magnified by their experience with the code base and the product, which creates a very large discrepancy in value of the code that is written by them, as opposed to that written by the most junior person. This does not mean that junior devs are not valuable, but instead it simply clarifies the type of work, and the value-add that is able to be done at each tier of seniority.

This is crucial because, by default, the most skilled senior dev acts as the bottleneck for the work the team can deliver as a whole, and even more so for the high value work the team can deliver, which leads us to conclude that NOT all members’ contributions are equal.

2. Idle time is a waste of time

A team of people working together is not like swimmers in a pool, each swimming in their own lanes. There are many interdependencies in the work the team members do, which means we will never have an even load across a single sprint, and some people will be idle from time to time. Forcing an even load is planning to fail.

If the first assumption is wrong, and not all team members’ contributions are equal, we should then be maximizing the contribution of the most skilled resource. That can take many forms; one is letting a team member sit idle for 30 minutes between tasks, because picking up another task would make them late for a handover to the bottleneck resource.  

Sometimes, not working is the best contribution a team member can make!

How do we fix this?

The answer is conceptually simple, but much harder to implement. 

What the team needs is more capacity for the bottleneck end of the bottle (the most senior dev), not for the body of the bottle (the team as a whole). By increasing the capacity of the body, we are just putting more strain on the bottleneck, as opposed to focusing on widening the bottleneck, which increases the output of the whole team. 

So the answer is to coordinate more effectively around the bottleneck, then to protect the team’s work from the impact of variation, and finally to accelerate the flow of work through the team. These three initiatives make up ‘Pace,’ an Agile-friendly framework that replaces Scrum in many teams. To take something tangible from this article, here are four immediately applicable Pace rules to minimize bottlenecks and maximize team performance:

1. Ensure there is a steady supply of work for the bottleneck

  • As the bottleneck controls our output, and time lost from the bottleneck is lost for the team, we ensure there is always valuable work for the bottleneck.

2. Offload the bottleneck from unnecessary tasks

  • All work that can be done by others is assigned to them, freeing the bottleneck to do only the work they must do. Watch out for the efficiency trap of assigning tasks to the bottleneck just because they are faster at them.

3. Plan others’ work around the bottleneck

  • The bottleneck’s work (including others’ work that requires the bottleneck) is planned first. Then, work that does not interact with the bottleneck’s can be planned.

4. Ensure quality inputs into the bottleneck

  • To minimize the risk of bottleneck rework, extra quality steps such as tests or checklists are introduced immediately before the bottleneck.
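
As a rough sketch of rules 1 and 3 in code (the task model, capacities, and greedy fill below are invented for illustration; they are not part of Pace itself), the bottleneck’s queue is planned first and other work is fitted around whatever capacity remains:

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        hours: float
        needs_bottleneck: bool = False

    def plan_sprint(tasks, bottleneck_capacity, team_capacity):
        # Rules 1 and 3: load the bottleneck first, then fit other work around it.
        planned, bottleneck_hours = [], 0.0
        for task in (t for t in tasks if t.needs_bottleneck):
            if bottleneck_hours + task.hours <= bottleneck_capacity:
                planned.append(task)
                bottleneck_hours += task.hours
        # Leftover capacity (idle time) is acceptable; we do not force an even load.
        remaining = team_capacity
        for task in (t for t in tasks if not t.needs_bottleneck):
            if task.hours <= remaining:
                planned.append(task)
                remaining -= task.hours
        return planned

    sprint = plan_sprint(
        [Task("review core change", 6, True), Task("API design", 4, True),
         Task("write docs", 5), Task("fix flaky test", 3)],
        bottleneck_capacity=8, team_capacity=30,
    )
    print([t.name for t in sprint])  # ['review core change', 'write docs', 'fix flaky test']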

Pace applies these proven rules and ensures they produce significant benefits – almost instantly, and certainly over the longer term.

Confusing the What with the How https://sdtimes.com/softwaredev/confusing-the-what-with-the-how/ Thu, 06 May 2021 16:50:14 +0000

Imagine you are building a house. You get all your tools, lay out the lumber, and start constructing the first room. As you are building the room, you decide if it’s a living room, or a kitchen, or a bathroom. When you finish the first room you start on the second, again deciding, as you build, what kind of room it will be.

Let’s face it, no one would do that. A rational person would first figure out what the house should look like, the number of rooms it needs to contain, how the rooms are connected, etc. When the plans for the house are complete, then the correct amount of supplies can be delivered, the tools taken out, and construction begun. Architects work with paper and pen to plan the house, then, and only then, carpenters work with tools and lumber to build it. Everyone associated with home building knows that you figure out what is wanted before determining how to build it.   

Arguably, the fundamental principle of systems development (FPSD) is also to figure out what the system is supposed to do before determining how to do it. What does the user want? What does the system have to do? What should system output look like? When the what is understood, then the how can start. How will the system do it? How should the system generate needed output? The how is the way users get what they want.

The IT industry has long recognized that confusing the what with the how is a major cause of project failure resulting in user dissatisfaction from poor or absent functionality, cost overruns, and/or missed schedules. Ensuring that the what is completely understood before attempting the how is so important that it was engraved into the dominant system development life cycle methodology (SDLC) of the time—the waterfall approach. 

For those of you who just recently moved out of your cave, the waterfall SDLC consists of a series of sequential phases. Each phase is only executed once, at the completion of the previous phase. A simple waterfall approach might consist of five phases: analysis, design, coding, testing, and installation. 

In this approach, the analysis phase is completed before the design phase starts. The same is true for the other phases as well.

The good news is that with the waterfall approach, systems developers did not have to remember to put the what before the how because their SDLC took care of it for them.

Then iterative and/or incremental (I-I) development came along and the rest, as they say, got a little dicey.

Although there are dozens of I-I approaches, they are all variations of the same theme: make systems development a series of small iterative steps, each of which focuses on a small portion of the overall what and an equally small portion of the how. In each step, create just a small incremental part of the system to see how well it works. Vendors like to depict I-I development as a spiral rather than a waterfall, showing the iterative and incremental nature of these approaches.

Using an I-I approach such as prototyping, a session might consist of a developer sitting down with a user at a computer. The user tells the developer what is needed, and the developer codes a simple solution on the spot. The user can then react to the prototype, expanding and correcting it where necessary, until it is acceptable. 

This is obviously a very different way to develop systems than the waterfall approach. What might not be so obvious is that the various I-I methodologies and techniques, such as rapid application development, prototyping, continuous improvement, joint application development, Agile development, and so on, still involve figuring out what is wanted before determining how to do it. 

Rather than looking at I-I as a picturesque spiral, an I-I approach can be viewed as a string (or vector, for you programming buffs) of waterfall phases, where each cycle consists of a sequence of mini-analysis, mini-design, etc. phases. However, rather than each phase taking six months, each could take no longer than six weeks, or six days, or six hours.
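
As a toy illustration of that “vector of waterfalls” view in Python (the phase names follow the five-phase example above, and the cycle count is arbitrary):

    PHASES = ["analysis", "design", "coding", "testing", "installation"]

    # Waterfall: a single pass, each phase executed exactly once.
    waterfall = list(PHASES)

    # Iterative-incremental: a vector of mini-waterfalls, one per cycle,
    # preserving the same what-before-how ordering inside every cycle.
    iterations = [[f"mini-{phase}" for phase in PHASES] for _ in range(6)]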

It might take a half-dozen cycles of sitting down with a user to figure out the requirements (the what) and then coding the results (the how) before showing them to the user for additional information or changes, but the principle is always the same—understand the what before determining the how.

However, too many developers throw out the baby with the bathwater. In rejecting the waterfall approach, they mistakenly ignore the basic what before the how—the FPSD. The result is the reappearance of that pre-waterfall problem of project failure resulting in user dissatisfaction from poor or absent functionality, cost overruns, and/or missed schedules.

Why? How could something so well understood as ‘put the what before how’ be so ignored? Here are three common reasons for this troublesome behavior.

Reason 1. Impatience (Excited to Get Started)—Many in IT (the author included) started their careers as programmers. Programming is a common entry-level position in many system development organizations. It is understandable that new (and not so new) project team members are anxious to start coding right away. In their haste, FPSD is not so much ignored as short-changed—corners cut, important (sometimes annoying) users not interviewed, schedules compressed, etc. The result is an incomplete understanding of exactly what the users want.

Reason 2. Not Understanding the Value of Analysis—Analysis, or whatever you call it (requirements, logical design, system definition, etc.), is the process of learning requirements from users (the what) and then documenting that information as input to system design (the how). However, analysis has endured some heavy criticism over the past few decades. Some feel that it is overly laborious, time consuming, error prone, or just not needed at all. The result can be an incomplete understanding of what the new system needs to do.

Reason 3. Confusion about the FPSD and the Waterfall Approach—The waterfall SDLC is viewed by many as a relic of IT’s past, offering little for today’s developers (not entirely true, the waterfall approach still has its uses, but that is a subject for another time). Unfortunately, the what-how distinction is closely tied to this approach so any rejection of the waterfall approach contributes to skepticism regarding any what-how talk.

What’s a project manager to do? How do you ensure the what is understood before the how is undertaken? There are three things the project manager can do.

Training—The problem with most systems development rules and guidelines is that the reason they should be followed is not always obvious. Or has been forgotten. Systems developers tend to be an independent and skeptical bunch. If it were easy to get them to do something, then documentation would be robust and project managers would earn half what they do now because their hardest job would have disappeared. No, managing a team of developers is like teaching an ethics class in Congress—difficult, underappreciated, and often exhausting. The one saving grace is that systems developers like to create great systems. The vast majority of developers take great pride in doing a good job. The project manager needs to tap into that enthusiasm.

The easiest way to get systems developers to do something (other than forbidding them from doing it) is to convince them that doing it is in their and the project’s interest and that separating the what from the how is in that category.

But wait. You can hear the challenge right now. “We are using Agile development, so we don’t need to separate the what from the how.”

The answer is that the purpose of the what-how distinction is not to create separate development phases but is to make developers think before they act—to ensure that before the design hits the page or a line of code is entered on a screen, the problem is mulled over in those high-priced heads.

Is this approach counter to Agile development or any iterative-incremental approach? No. Read the books and vendor manuals more closely. There is not one author or vendor who believes you can skip the thinking if you use their tool or technique. The problem is that many of them are not sufficiently vocal about the value of thinking before acting.

Discipline—The problem is usually not knowing (most know to complete the what before starting the how); the problem is doing. A good way to get team members to do the right thing is to codify the desired behavior before the project kicks off. Rules, standards, and strong suggestions presented before a project starts are more likely to be accepted and followed by team members than mid-project changes, which can be seen as criticisms of team member behavior.

The project manager needs to lay out the project rules of engagement, including such things as the SDLC method or approach to follow, the techniques and tools to use, documentation to produce, etc., all focused on ensuring the what is completely understood before starting the how. 

Then comes the hardest part of the entire project—enforcement. The project manager needs to ensure that the rules of engagement are followed. Failure to enforce project rules can undercut the project manager’s credibility and authority. A few public executions early in the project do wonders for maintaining that project manager mystique.

Collaboration—Want to influence a systems developer? Need to convince developers to follow systems development best practices? Then have him or her collaboratively meet with other systems developers. The team walk-through is a great vehicle for this.

In a team walk-through the developer presents, demonstrates, and defends his or her work, not to users, but to other team members. The developer walks the other team members through the user requests, his or her analysis of those requests, the solution to the request, and finally any demonstrable work products. This friendly IT environment is a useful way to test whether the developer’s work is thorough, efficient, and complete.

This should be a slam-dunk. Team walk-throughs can be very motivational, inspiring (shaming) underperforming developers into producing better results while providing overachievers an opportunity to show off. In both cases, the user, the project, and the project manager win.

George Tillmann is a retired programmer, analyst, systems and programming manager, and CIO. This article is adapted from his book Project Management Scholia: Recognizing and Avoiding Project Management’s Biggest Mistakes (Stockbridge Press, 2019). He can be reached at georgetillmann@gmx.com.

Progressive delivery: Testing software through limited releases https://sdtimes.com/devops/progressive-delivery-testing-software-through-limited-releases/ Tue, 04 May 2021 13:30:30 +0000

Sometimes continuous delivery just isn’t enough for organizations that are constantly testing and adding features, especially those that want to roll out features to progressively larger audiences. The answer to this is progressive delivery. 

The term progressive delivery was created in mid-2018 by Adam Zimman, the VP of Platform at LaunchDarkly, and James Governor, analyst and cofounder at RedMonk, to expand on continuous delivery’s notion of separating deployments and releases for organizations. 

Organizations that adopted continuous delivery early on were primarily software-first organizations whose main delivery of value was through some sort of software package. Companies that didn’t have software as their only source of value faced challenges that weren’t really addressed by continuous delivery. 

“When you start talking to the business, continuous deployment and continuous delivery tend to sound a little bit scary. If you talk to the business and say, look, we aren’t going to decouple these things. You decide when the business activation happens and you can do that because something is very well tested and you can test in production, you could be confident about when the services are rolled out and this will de-risk what you’re doing, then it sounds like they’re back in control,” Governor said. 

All of the core testing concepts of progressive delivery existed in continuous delivery. Now, it’s a matter of what’s actually getting the focus since there are a lot more things organizations can do while utilizing the cloud. 

Progressive delivery is a term that can be applied to a set of disciplines many teams already practice, whether that’s delivery and production excellence, or effective testing backed by a high level of operational confidence and a culture of troubleshooting and observability. 

“If you look at Google, Amazon, and Microsoft from a public cloud perspective, they are all doing stuff like this even though they don’t always call it progressive delivery,” Governor said. “Once you start getting into banks and telcos, then it’s becoming a more generally applicable set of approaches and technologies.” 

Progressive delivery really boils down to two core tenets: release progression and delegation, according to Zimman.

Release progression is all about adjusting the number of users that are able to see or interact with new features and new code at a pace that is appropriate for one’s business. It’s also about exposing features only to the appropriate parties at any given time as part of the testing. That could mean offering a feature to early-access beta users first, then to a trusted user group, before rolling it out to everyone. Or maybe the end state is to give access only to people on the premium plan. 
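
One common mechanical form of release progression is a deterministic percentage rollout. The sketch below is a generic, hand-rolled illustration rather than any vendor’s actual implementation; hashing the user and feature together keeps each user’s answer stable as the percentage grows.

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: int) -> bool:
        # Bucket users 0-99 deterministically so a user stays in (or out of)
        # a rollout as the percentage expands, e.g. 1% -> 10% -> 50% -> 100%.
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

    print(in_rollout("user-42", "new-search", 10))  # same answer on every call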

“The thing that [continuous delivery] stopped short of was it was more of a binary mentality,” Zimman said. “So it was either on or off for everyone, as opposed to this notion that we’re really focused on this ability for increasing your blast radius.” 

Practicing release progression helps with the testing aspect of software delivery because the individual or team that built a new feature or a new widget can choose to deploy it and be the only ones that can interact with it. 

“Everybody is testing in production. Some people do it on purpose, but if you’re not testing in production on purpose, chances are that you are going to be burned by a bad release or a lack of consistency between your test environment and your production environment.”

The other core aspect, release delegation, focuses on shifting release control from the engineering and operations organization out to the business owner.

“As soon as you move out of the realm of pure software organizations, in which their only value is through their software, you start recognizing that the business owners are actually looking for greater control and greater ability to impart change on digital experiences,” Zimman said. 

Business owners can then customize what features they want to release to certain customers and even give the end users the ability to toggle certain features on and off, all while having guardrails put in place to make sure that the releases meet an industry’s compliance requirements. 

A lot of companies are looking to do that autonomously and not have to go back to the engineering or operations team for the ability to control features, especially when it comes to things like beta testing, A/B testing or experimentation, according to Zimman. 

Ravi Lachhman, an evangelist at Harness, said that progressive delivery comes from getting feedback, which is especially important in today’s software development model, where much of the time you’re doing the unknown and don’t know what the impact is going to be. One of the quintessential firms that has relied on feedback for progressive delivery is Facebook. 

“If you take it back 10 years ago, and you and I were downloading Facebook from the App Store, you and I would have two different download sizes and there’d be a reason for that. They’d be shipping different features for you and I,” Lachhman said. “For example, I really like fried chicken and I’m on several fried chicken groups on Facebook. They might say, you know what, target him with cook-specific things and so how they started doing it was with the concept of progressive delivery. We’re not going to give all the users the same thing, and we want to be able to make sure that we can retract those features if they’re not performing well, or we can roll those features out if they are doing well and determine how we provide feedback and how we choose to deploy across our entire user base or our entire infrastructure.”

One common way that organizations are going about progressive delivery is by using feature flags. Feature flags give users fine-grained control over their deployments and remove the need to change config files, do blue-green deployments and perform rollbacks.

New functionality is wrapped in a feature flag, and a new version of the application is deployed to a single production environment, with only users from the designated canary group able to access the new functionality.
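
In code, the flag check is little more than a conditional around the new path. The following is a minimal hand-rolled sketch; the flag name, canary list, and page values are invented, and real feature-flag services such as LaunchDarkly expose the same idea through their own SDKs.

    CANARY_USERS = {"alice@example.com", "bob@example.com"}  # designated canary group

    def is_enabled(flag: str, user_email: str) -> bool:
        # Only the canary group sees the new checkout; unknown flags default to off.
        if flag == "new-checkout":
            return user_email in CANARY_USERS
        return False

    def render_checkout(user_email: str) -> str:
        if is_enabled("new-checkout", user_email):
            return "checkout-v2"   # new code path, canaries only
        return "checkout-v1"       # everyone else keeps the old path

    print(render_checkout("alice@example.com"))  # checkout-v2
    print(render_checkout("carol@example.com"))  # checkout-v1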

However, having too many feature flags at once can lead to sprawl and a difficulty in keeping track of what feature flags are out there. This prompted a demand for feature flag management solutions, which serve as a central spot for the management of the flags with a common API that tracks the whole feature flag life cycle — for example, what was the logic? How do you turn it on? How do you turn it off? Where did it go? 

Progressive delivery is maturing 

Progressive delivery is starting to become a more mature practice as vendors coalesce around it. 

Governor said that this is the stage when it gets interesting because if you have a set of practices and then package them as a platform, it becomes something that a broader set of constituents can use. 

In addition to new tooling, it’s also about shifting the delivery side of the equation mostly from the context of engineering readiness to business readiness. 

“We don’t want to make any changes whatsoever to the deployment side of that equation because we want engineers to continue to develop at the pace of innovation, however fast they are comfortable with creating new technologies, features and code. They should continue to have that flexibility to do that creation and deployment into a production environment so that it is something where they’re able to test,” Zimman said. 

Now, the release side of the equation is really the delivery of value, Zimman noted. In the context of engineering readiness, something is released when it’s ready. On the other hand, business readiness puts the business in charge of when and how to release new feature functionality or release when customers are actually ready to adopt this new feature functionality.

This might be great for a company running a deal-a-day site because their value is changing on a daily cadence, Zimman said. 

Getting started with progressive delivery really requires getting all aspects of the business on board. 

One has to talk to product management about experimentation with progressive delivery, talk to the business about delegating service activation to the business and having delegated users, and then talk to software developers and explain that this technology won’t slow them down and will enable them to move more rapidly and with higher quality, Governor explained. 

“The question that I like to ask enterprises is: are you comfortable shipping code on a Friday afternoon?” Governor said. “There are some people that will be like, no, the last thing I want to do is roll something out at 5PM on a Friday, because if something goes wrong, then there goes the weekend. Some organizations are like ‘well, yeah, that’s where we’re getting to, we do enough testing’ and really begin to say, yeah, we can ship a new service whenever. We have that confidence because we’ve done the engineering work and the cultural work in order to be able to do this. That’s progressive delivery.” 

The end of “your database” https://sdtimes.com/data/the-end-of-your-database/ Thu, 22 Apr 2021 17:15:23 +0000

When I started in web development, the architecture of an application always radiated out from the database. Any application was firmly rooted by its data schema and the first step was sketching out the tables and relationships that would define how data was organized and retrieved.

But that’s where the web was, not where it’s headed. Today, I’m struck by how little developers need to think about the database at all.

Databases are still very much at the heart of the modern web, just as servers still dutifully power the expanding array of serverless offerings. But today it’s possible—and common—to author and deploy rich, interactive web applications without managing database infrastructure or even knowing how the data is ultimately stored.

It’s a shift that’s been in the making: developing directly against the database became less common with the rise of web frameworks like Rails and the ORM. Even with these abstractions, it was still advisable to understand the schema of tables beneath so you could drop down to SQL to optimize critical queries as you tweaked performance. To know your app, you had to know your database.

Breaking the monolith: API services impact the data layer

The mighty, monolithic database and its role as an oracle—as the singular source of truth—is being challenged on two fronts.

The first challenge comes from inside each company as more development teams adopt microservice architectures, structuring each service to focus on a single domain complete with its own datastore. While companies sometimes try building a microservice stack on top of a monolithic database, this tends to be the worst of both worlds. Individual teams still need to own the data layer.

The second challenge to the central database actually comes from outside the company: the explosion of API services, an economy of developer-centric offerings powering everything (payments, search, content authoring, artificial intelligence). The productivity lift from instant access to APIs and services makes the benefits of decoupling an application too large to ignore.

This requires a shift in thinking from “my database” to “my data,” trading direct access to each table for indirect access via APIs. It’s the right tradeoff, even if data spreads across third-party services. Monolithic databases were straining under the weight of additional requirements as applications grew.

A service like Stripe bundles compute and data behind a payment API, managing, scaling, securing and optimizing databases for payment processing in ways that would be out of reach to all but the largest web teams. New “content APIs”—hosted services like Contentful and Sanity—allow entire teams to author all types of content without ever managing a CMS or the database behind it. 

Prioritizing Performance: Making API-centric apps faster with the Jamstack

Querying out to a remote API can be slower than requesting the same data from a database local to the application. However, developers have tackled this problem head-on with a new performance-oriented architecture for the web called the Jamstack.

For years, the web worked like this: as each incoming web request arrives at the server, a response is built by calling application software that fetches the required data with calls out to the database and/or API services. Once the right data is fetched, the html page is assembled and returned. This process is repeated for each and every request. 

Jamstack works differently. Instead of pulling the data at request time, Jamstack reverses the flow. The majority of compute happens ahead of the request cycle. API services use webhooks to notify an automated build process to re-render html pages whenever the data they are storing changes. Prebuilt pages are published out to the edge of the network as close to users as possible. At request time, there’s little left to do but serve the final html.

This “precompute” rendering happens prior to the actual web request made by a visitor. A complex e-commerce page with multiple data sources from multiple APIs can be pre-assembled and still served in milliseconds, with a faster time to first byte than a traditional server. The response can be personalized for each user via Javascript, which may run inside of an edge node on a modern CDN or inside the browser on the user’s device. Developers optimize applications by reaching out for the right data from the right APIs during build time for what can be precomputed or during request time for what is custom to the user.
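
As a minimal sketch of the build-time half of that flow (the API URL and page template here are invented, and a real Jamstack build would typically be a static site generator triggered by a webhook rather than a hand-rolled script):

    import json
    import pathlib
    import urllib.request

    API_URL = "https://api.example.com/products"  # hypothetical content API

    def build_site(out_dir: str = "dist") -> None:
        # Fetch the data once at build time, then prerender one HTML file per item.
        with urllib.request.urlopen(API_URL) as resp:
            products = json.load(resp)
        out = pathlib.Path(out_dir)
        out.mkdir(exist_ok=True)
        for p in products:
            html = f"<html><body><h1>{p['name']}</h1><p>{p['price']}</p></body></html>"
            (out / f"{p['id']}.html").write_text(html)
        # The generated files are then published out to a CDN edge; at request
        # time there is nothing left to do but serve the prebuilt HTML.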

Querying across APIs 

New API services continue to accelerate the development of new applications. New architectures like the Jamstack are pushing web applications toward the same level of performance once enjoyed only by native applications.

As we make use of an increasing number of services, we’ll need to guard against fragmentation. It’s now common to have user and authentication data housed in one service, content in another service, and subscriptions in another service, all using different providers. While the variety of services is empowering, underlying vendors will need to be better managed and easier to develop against.

How do we create a unified way of thinking about it? Thankfully, a host of new answers are emerging. In the Jamstack ecosystem, TakeShape, OneGraph, Apollo GraphQL, and Prisma are working on unifying the new generation of data layers. 

Takeaways for web application architects

With all the new services available for instant API consumption, my advice to modern web developer teams: 

  1. Embrace the move away from the monolith. Modern apps will be easier to deploy, maintain, and scale than the old monoliths.
  2. Start by decoupling the frontend of your application from the backend and use APIs to talk to your own internal services as well as external ones.
  3. Performance is everything on the web. Test your application under throttled bandwidth conditions to better approximate real-world usage.
  4. Have your architecture do the hard work in advance. Remove as much as possible from the request path by pulling data during the build process so that you can prerender pages into high-performance assets. Services like Netlify help make the Jamstack architecture easy to adopt.
  5. Orient your thinking away from the centralized database towards the distributed data layer.

The new mindset about “my data” can help you change the way you build systems. Our answer to that is the Jamstack.
