Analyst View Archives - SD Times
https://sdtimes.com/category/analyst-view/

Patch the cloud native development talent gap with platform engineering
https://sdtimes.com/cloud/patch-the-cloud-native-development-talent-gap-with-platform-engineering/ | Fri, 12 May 2023

Cloud native technologies—with their malleable, modular microservice architectures—quickly generate transformative digital innovations that deliver high-demand customer capabilities and operational value breakthroughs. 

But wait, how many Kubernetes experts do we have? We’ve got an industry-wide shortage of skilled software development and operations talent—and the complexity of cloud native development is exacerbating the problem. We’re not going to hire our way out of this mess!

Skill shortages stymie cloud native innovation

Even non-technical executives now understand the basic benefits of cloud native software. They know it has something to do with Kubernetes orchestrating containers, so the resulting applications are more modular and take advantage of elastic cloud infrastructures.

There’s way more to it than that. The cloud native landscape is a beehive of open-source projects for configuration, networking, security, data handling and service mesh, all at various maturity stages. There are also hundreds of vendors offering their development, management and support tools atop this ever-moving cloud native raft.

A developer’s learning curve is steep and continuous, since this year’s stack might no longer be relevant next year. Outsourcing is tough too, when the few consultants with a real knack for cloud native development are busy and expensive.

The only way to grow sustainably is to build cloud native development talent and capabilities from within as the space matures, while keeping tabs on the broader vendor and end-user community.

While market demand for digital offerings keeps doubling every couple of years, IT departments often get belt-tightening as their reward for managing and scaling so much technology.

Smaller and leaner teams are expected to deliver twice the output with half the people. The need for specialized skill positions only increases as concepts like event-driven architecture, data lakehouses, real-time analytics, and zero-trust security policies turn into production-grade requirements. 

Why platform engineering matters

No matter what target environment they are contributing to, developers still spend most of their time coding within an IDE. Over the years, vendors have tried everything from low-code tools to process toolkits to lower the skills bar, but the pipelines don’t translate into easy wizards. 

Complex open-source tooling, third party service APIs, and code that is being mixed and matched from GitOps-style repos are driving cloud native development teams toward a new platform engineering approach.

Platform engineering practices seek to create shared resources for development environments, encouraging code, component and configuration reuse.

Common platform engineering environments can be represented within a self-service internal development portal or an external partner marketplace, often accompanied by concierge-style support or advisory services from an expert team that curates and reviews all elements in the platform.
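To make that concrete, here is a minimal sketch of the kind of curated, self-service "golden path" an internal developer portal might expose. Everything in it (ServiceTemplate, ProvisionRequest, provision) is a hypothetical illustration rather than any real portal product’s API.

```python
# Hypothetical self-service "golden path": the platform team curates a
# template once, and requesting teams inherit the vetted runtime, defaults
# and review policy without touching the underlying cluster plumbing.
from dataclasses import dataclass, field


@dataclass
class ServiceTemplate:
    name: str                      # e.g. "python-rest-api"
    runtime: str                   # base image curated by the platform team
    required_reviews: int = 1      # governance baked into the template
    default_env: dict = field(default_factory=lambda: {"LOG_LEVEL": "info"})


@dataclass
class ProvisionRequest:
    team: str
    service_name: str
    template: ServiceTemplate


def provision(req: ProvisionRequest) -> dict:
    """Return the ready-made configuration a developer gets on request."""
    return {
        "service": f"{req.team}-{req.service_name}",
        "image": req.template.runtime,
        "env": dict(req.template.default_env),
        "reviews_required": req.template.required_reviews,
    }


if __name__ == "__main__":
    template = ServiceTemplate(name="python-rest-api", runtime="python:3.12-slim")
    print(provision(ProvisionRequest("payments", "refund-api", template)))
```

The point is not the specific fields but the shape of the interaction: the expert team curates once, and every requesting team pulls a pre-approved configuration on demand.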

It’s critical to govern the platform’s self-service policies for access permissions, code, logic, data, and automation at just the right level of control for the business it supports. 

Too much flexibility ends up creating overhead, as diverse stakeholders struggle to distinguish the relative value or usefulness of so many unvalidated and poorly categorized platform components.

Too much rigidity in policy design can create the opposite problem, where centralized governance and approval cycles for every element slow down solution delivery and take away the innovative freedom developers crave.

A modern approach to cloud native platform engineering can finally bring the principles of governance, consistency and reuse to the table.

Speedy innovation through infrastructure abstraction

Refreshingly—or maddeningly—there’s no single right way to ‘do’ cloud native. Even Kubernetes is by itself just an abstraction of container orchestration and only one option for going cloud native.

As an open-source movement, the CNCF purposely leaves the future approach open to interpretation by the community. It doesn’t dictate a particular language, or even a specific piece of infrastructure. 

That’s excellent, but it also leaves short-handed dev teams managing complex plumbing and experimenting with integration options, rather than building better functionality. That’s where platform engineering practices can save the day.

The decision to create a platform is a commitment to help developers of varying skill levels abstract away the complexity of underlying cloud native architectures with interfaces and tools atop readily configured environments.

A platform engineering approach must offer ease of use, elimination of toil, and reduced cognitive load for development teams—helping orgs attract and retain the best talent.

The Intellyx Take

Truly innovative ideas that impact customers represent a core competitive differentiator, and should grow from within the enterprise. That’s difficult when the supply of skilled cloud native development resources is constrained.

Fortunately, platform engineering organizations and technology stacks can automate some of the most difficult work of delivering on the needs of API-driven microservices environments.

Getting ready for the generative AI wave
https://sdtimes.com/ai/getting-ready-for-the-generative-ai-wave/ | Thu, 27 Apr 2023

Even as late as December of last year, few were aware of generative AI. Then ChatGPT popped up, and Microsoft started putting it in everything, including its developer tools. Now it’s the hottest thing in the market. It is also still immature, but it works well enough that people are finding it surprisingly useful. This is very different from what happened with premature products like the Apple Newton and Microsoft Bob, both of which were released well before the underlying technology had cooked enough for the general market.

Generative AI is a new way for people to interface with their technology, but it has some shortcomings. 

Let’s talk about this from a developer’s standpoint, and about why, once generative AI becomes commonplace, we’ll likely see a very different group of leading companies, much as we did with the introduction of the Web.

Generative AI’s promise

The promise of generative AI is that you can use natural, spoken language to ask the computer to do something, and the computer will do it automatically. In Microsoft Office, the initial implementation is very much confined to each sub-product. For instance, you can ask Word to create a document to your specifications, but you’ll have to go to PowerPoint or Excel if you want the tool to create a blended document. I expect the next generation of this Microsoft offering will bridge those apps and other products, allowing you to create more complex documents just by supplying the information the AI asks for to strengthen the piece.

This is going to make for a difficult evolution for firms whose apps don’t currently integrate well, because users will want one interface, not multiple AIs that each require a different command language or use different language models.

The generative AI problem

While developing your own generative AI may help, long-term integration with the platform’s generative AI will quickly become a differentiator for user satisfaction and retention. I point this out because users who get frustrated working with multiple generative AI platforms will likely begin preferring products that interoperate and integrate with a major generative AI solution, so that they don’t have to train and learn multiple generative AI offerings.

In short, one of the bigger problems is integrating the app with the generative AI most likely to be found on the platform it runs on. Neither Apple nor Google has a fully cooked generative AI model, and neither company is as good as Microsoft at bringing partners on board to address its lack of a generative AI solution.

Assuring quality

The other big trend in generative AI is putting the technology into development tools so the AI becomes a coding accelerator. But with code, errors tend to proliferate. While this initial instance of generative AI is very fast, it’s anything but infallible, and its error-checking capability is still very young. If you don’t want a lot of mistakes, the focus of any generative AI user needs to be on quality over quantity, and coders who use generative AI need to focus more on quality than they currently do. You’ll be training the tool while you use it, and if you train it to make a mistake, that mistake has the potential to proliferate and create additional problems. So, when using development tools that make use of generative AI, the massive increase in speed needs to be tempered with an increased focus on quality; otherwise, your quality is likely to degrade badly over time.
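One concrete way to keep that quality bar in place is to treat every AI-generated change like any other untrusted contribution and gate it behind the tests you already have. The sketch below is illustrative only: the apply/revert callables and the pytest invocation stand in for whatever workflow and test runner a team actually uses.

```python
# Illustrative quality gate for AI-assisted coding: an AI-suggested change is
# kept only if the existing test suite still passes; otherwise it is rolled
# back before the mistake can proliferate.
import subprocess


def run_test_suite() -> bool:
    """Run the project's tests and report whether they all pass."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0


def accept_ai_suggestion(apply_change, revert_change) -> bool:
    """Apply an AI-suggested change, keeping it only if the tests stay green."""
    apply_change()
    if run_test_suite():
        return True       # accepted, but still subject to human code review
    revert_change()       # reject it before it can train in a bad habit
    return False
```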

Wrapping up

Generative AI is a game changer. It allows people to increasingly interact with their smartphones, PCs, apps and cloud services as if they were people. To make this work optimally, applications will need to integrate under a generative AI umbrella so that the user only needs to make a request and the relevant app or apps are launched to complete it. With its announcements of generative AI for its developer tools and Office, Microsoft is arguably the farthest along this path, but these are still early days, and that leadership position is likely to remain dynamic.

The path to success will be to adopt an existing generative AI tool tactically while building the hooks to better integrate your app with the platform’s most likely generative AI solution, so that users can dictate a request once and the AI moves between tools to complete it. We’re far from that point now, but that gives you time to figure out how to address it.
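As a sketch of what "creating the hooks" might look like, the snippet below puts a thin interface between the application and whichever generative AI backend is adopted tactically today, so the backend can later be swapped for the platform’s own AI without rewriting callers. The provider classes are hypothetical placeholders, not real SDKs.

```python
# A thin abstraction layer so the app depends on one interface rather than on
# a particular generative AI vendor. The providers below are placeholders;
# real SDK calls would be wired in behind the same interface.
from abc import ABC, abstractmethod


class GenerativeAIProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's response to a natural-language request."""


class PlatformDefaultProvider(GenerativeAIProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: call the platform's own generative AI service here.
        return f"[platform response to: {prompt}]"


class LocalFallbackProvider(GenerativeAIProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: call a self-hosted or alternative model here.
        return f"[fallback response to: {prompt}]"


def handle_user_request(prompt: str, provider: GenerativeAIProvider) -> str:
    # Application code only ever talks to the interface, so the backend can
    # change as the platform's preferred generative AI solidifies.
    return provider.complete(prompt)


if __name__ == "__main__":
    print(handle_user_request("Summarize this quarter's release notes",
                              PlatformDefaultProvider()))
```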

In short, we are at the front end of a massive generative AI change. Make your related decisions very carefully, because you want to be left standing when this AI trend reaches critical mass and users move away from products that haven’t embraced it, much as they did with GUIs and the Web.

SEC probe and newly discovered $4.7B liability puts ARM at greater risk
https://sdtimes.com/software-development/sec-probe-and-newly-discovered-4-7b-liability-puts-arm-at-greater-risk/ | Tue, 21 Mar 2023

A lot of us have been looking at ARM more closely since its litigation with Qualcomm started. To refresh you on that situation, the litigation appears to be an effort to get Qualcomm to pay significantly more for PC licenses than it does for smartphone licenses, even though Qualcomm’s PC effort has yet to be successful. That effort likely won’t pay off until 2024, and only if Qualcomm invests a massive amount of cash, which, if ARM’s litigation succeeds, Qualcomm wouldn’t have. The litigation not only runs counter to the contract between Qualcomm and ARM; it places a cloud over ARM and appears to be accelerating a migration of developers from ARM to RISC-V. To me, it reads like extortion, but at best it is premature, because the product ARM is attempting to get more money for doesn’t exist in the market yet, and a higher percentage of nothing is still nothing.

So why is ARM so hard up for cash that it’s willing to put its future at risk in what looks like an effort to make Qualcomm pay more, while putting Qualcomm’s PC effort at greater risk of failure and clearly increasing the motivation to move to RISC-V (with Apple apparently hedging back in 2021)?

Let’s explore this. 

ARM Needs Cash. Why?  

The legal dust-up caused the industry to look at why ARM needs this cash, and we initially determined it was likely because the NVIDIA acquisition fell through. That acquisition would have provided SoftBank, which owns ARM, with cash and given ARM access to NVIDIA’s huge R&D war chest. But the effort failed, partially due to Qualcomm, but also partially because governments, particularly the U.S. government, don’t want big tech to get bigger (it is really hard to do mergers right now; ask Microsoft, which was just sued to block an acquisition of a kind gaming platforms have been making without problems for decades).

Several things have subsequently been discovered. SoftBank’s boss owes the company a whopping $4.7B. Most recently, the company became the focus of a probe by the U.S. Securities and Exchange Commission for misleading investors, an investigation that puts the firm at additional financial risk and will make it nearly impossible to do an IPO until the problem is resolved. SoftBank also had to write down a $100M investment in FTX (which is under bankruptcy protection and itself under investigation). Finally, the head of SoftBank has been aggressively using SoftBank funds to buy out investors in order to take full and absolute control of the company, potentially to take it private (at an estimated cost of $50B, or around twice the massive cost of the Dell buyout that nearly failed), significantly reducing the firm’s cash reserves in the process.

This all means there is little free cash to invest in things like ARM market development or R&D. The risk this poses to SoftBank and ARM is extreme, but there’s a chance that a Qualcomm-led consortium that took ARM off SoftBank’s hands, and potentially funded it better, might fix the problem. Still, it would be wise to hedge with RISC-V development for the day when Apple, which is likely to fight this consortium approach, and others decide to abandon ARM’s licensing for RISC-V’s more favorable approach.

Recommendation:

I think this all means that while moving from ARM to RISC-V may be premature, it wouldn’t be premature to begin developing RISC-V skills, particularly if you are developing on Apple products. Even if the Qualcomm consortium approach works, Apple is likely to move to RISC-V in the next two to five years. And given the SEC investigation into SoftBank and the discoveries so far, there is likely other dirty laundry, and other questionable executive financial decisions, yet to surface that could put both the IPO and any acquisition at greater risk (the investment in FTX is troubling, as is the stock buyback plan, which appears to benefit the head of SoftBank more than it does SoftBank itself).

And even if ARM pulls out of this mess, the momentum toward the better RISC-V model may be unstoppable at this point, further justifying RISC-V skills development, because the market may already have gone too far to stop its pivot. In short, it increasingly looks as if the damage SoftBank has done to ARM may be unrecoverable, making a hedge on, or move to, RISC-V the safer choice.

Software engineering teams must collaborate with site reliability engineers
https://sdtimes.com/software-development/software-engineering-teams-must-collaborate-with-site-reliability-engineers/ | Tue, 07 Mar 2023

Software engineering leaders need to foster collaboration with site reliability engineers (SRE) in order to scale unplanned work and improve customer experience. Software engineering teams tend to focus on releasing new product features quickly, which causes them to not always prioritize the reliability of new features.

Gartner predicts that by 2027, 75% of enterprises will use SRE practices organization-wide to optimize product design, cost and operations to meet customer expectations, up from 10% in 2022. Today, more than ever, customers are expecting applications to be reliable, fast and available on demand. When organizations present products that do not meet these expectations, customers are quick to seek other alternatives. 

To improve product reliability, IT organizations are starting to adopt SRE principles and practices when designing and operating systems. However, SRE is rarely embedded into every product’s development life cycle. While software engineering leaders are engaging site reliability engineers, they are only performing occasional reliability exercises. 

Foster Collaboration With Site Reliability Engineers

Now is the time for software engineering leaders to be building lasting partnerships with site reliability engineers as a part of their continuous quality strategy by adopting SRE practices and tools. Software engineering leaders will only be able to deliver the business value of their products to customers if they are treating reliability as a differentiating feature. 

Software engineering teams should be addressing reliability issues early on in their product’s life cycle and collaborating with site reliability engineers throughout the entirety of a product’s design and delivery activities. Doing so is more time-efficient and economical than needing to resolve a product’s issue after it has been released. 

Collaboration with site reliability engineers can be fostered by defining service level indicators (SLIs) and service level objectives (SLOs) that capture customer expectations for both product reliability and product performance. SLIs and SLOs will allow teams to clearly evaluate how well a product is meeting customer needs.

Enforce an SLO Action Plan

Failure is an inevitable aspect of service delivery, so it is important that software engineering leaders have a plan of action to effectively manage risk. Design an action plan for each SLO with site reliability engineers. This plan should provide guidance on what needs to be done if an SLO is breached, is trending toward breach, or a breach is imminent.
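To make the SLI/SLO vocabulary and the action plan concrete, here is a minimal sketch: an availability SLI measured against a 99.9% SLO, with a simple error-budget check deciding when the agreed plan should kick in. The thresholds and responses are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: an availability SLI checked against an SLO, with an
# error-budget calculation that triggers the agreed action plan. The 99.9%
# target, 25% warning threshold and responses are illustrative only.

def availability_sli(successful_requests: int, total_requests: int) -> float:
    """SLI: fraction of requests served successfully."""
    if total_requests == 0:
        return 1.0
    return successful_requests / total_requests


def error_budget_remaining(sli: float, slo: float) -> float:
    """Share of the error budget still unspent (1.0 = untouched, < 0 = breached)."""
    allowed_failure = 1.0 - slo
    actual_failure = 1.0 - sli
    if allowed_failure == 0:
        return 1.0 if actual_failure == 0 else -1.0
    return 1.0 - (actual_failure / allowed_failure)


def check_slo(successful: int, total: int, slo: float = 0.999) -> str:
    budget = error_budget_remaining(availability_sli(successful, total), slo)
    if budget < 0:
        return "breached: execute the SLO action plan (e.g. freeze risky releases)"
    if budget < 0.25:
        return "trending toward breach: alert the SRE and product owners"
    return "healthy: continue normal delivery"


if __name__ == "__main__":
    # 9,985 of 10,000 requests succeeded against a 99.9% availability SLO.
    print(check_slo(successful=9985, total=10000))   # the budget is overspent
```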

Optimize Development and Design with SRE Practices

To further a culture of reliability within their teams, software engineering leaders need to incorporate SRE practices and tools that drive lasting improvement. There are several activities software engineers should be performing with site reliability engineers in order to optimize development and design for meeting SLOs and SLIs: blameless postmortems, chaos engineering, toil management, and monitoring and observability. 

Blameless postmortems can be used to identify the causes of triggering events such as a failure or an SLO breach. This practice allows organizations to learn, avoid repeating the same mistakes and prevent future ones. Chaos engineering uses experimental failure testing to uncover vulnerabilities. This provides information about system behavior during failures and enhances software engineering teams’ ability to improve product design. Toil management eliminates low-value work and repeatable tasks; lowering toil allows teams to focus more on meeting SLOs. Monitoring and observability identify the best methods for measuring SLIs and SLOs.

These practices will allow software engineering teams and site reliability teams to work collaboratively and improve their ability to solve reliability issues. Software engineering teams need to work closely with site reliability engineers to help define SLOs, share accountability for meeting SLOs and adopt SRE practices and tools.

Web 3: New scams for new kids on the blockchain
https://sdtimes.com/software-development/web-3-new-scams-for-new-kids-on-the-blockchain/ | Wed, 08 Feb 2023

Over the last 5 years, galaxy-brained folks have had time, thanks in part to a pandemic, to dream big about Web 3 after catching some inspirational podcasts and YouTube gurus. Or maybe watching Gilfoyle pitch a “new internet” on the last season of “Silicon Valley.”

What was so intriguing to so many about Web3 anyway? Since nobody could really agree on exactly what it was, it could literally be whatever aspiring entrepreneurs imagined it to be.

Common threads appeared. Blockchain. Bitcoin and Ethereum. DeFi. Decentralization of organizations, infrastructure and data. Freedom from tech giants. Self-sovereignty. Privacy. Opportunity.

All the kinds of ideals that generate charismatic personalities. 

Who cares about maturing cloud adoption or better integration standards, when you can explore a whole new economy based on blockchain, cryptocurrency and NFTs? Why wouldn’t tech talent leave standard Silicon Valley-funded confines to live this Web3 dream?

When a space is overhyped and undefined, it encourages the rise of the worst kinds of actors. Web3 never had a chance, with its uncertain crypto-economic roots and the use of blockchain technology, which hasn’t proved adequate for enterprise-class business.

Crypto-Schadenfreude: Sham, bank run & fraud

Nobody enjoyed more of a media darling status in the Web3 world than FTX founder Sam Bankman-Fried, who famously played video games on investor calls and shuffled around the tradeshow circuit in shorts, as he donated millions to “effective altruism” charities and crypto-friendly politicians.

Now Sam’s been arrested and set for extradition from the Bahamas to face charges in the United States, with FTX the most famous failure among several other falling dominoes (Luna, Celsius, Gemini, on and on) in the crypto rug pull.

It was fun to mock celebrity shill ads, but it’s not funny to see $2 billion in investor deposits disappear into the ether. A lot of VC whales, other DeFi companies and hapless individuals were duped into parking their funds there too.

There’s no cash reserve regulation or FDIC account insurance in place for crypto, so when buyer confidence eroded, market makers sold, accelerating the ‘rug pull’ effect. The ripple effects wiped $183B or more off the total market cap of cryptocurrencies.

It turned out there’s nothing new about this Ponzi scheme, a Madoff-like phenomenon my analyst colleague Jason Bloomberg has commented on ad infinitum, even appearing as a gadfly at crypto conferences to tell heckling audiences that crypto has little use except for criminal enterprises like money laundering and ransomware.

Blockchains looking for solutions

Besides cryptocurrency, the most common term we hear in Web3 discussions is blockchain, which is a distributed ledger technology (or DLT) underpinning Bitcoin, Ether, Dogecoin, and thousands more dogshitcoins.

If cloud was just ‘a computer somewhere else,’ then blockchain is more like ‘an append-only database everywhere else,’ thanks to its decentralized consensus mechanism and cryptography. Even the first Bitcoin blockchain has proved resilient to hacking, short of someone finding a way to steal user account keys through other means.
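To make the ‘append-only database’ framing concrete, here is a toy hash-chained ledger: each entry commits to the hash of the one before it, so rewriting history invalidates everything that follows. This sketch shows the tamper-evidence property only; a real blockchain adds distributed consensus across untrusted nodes.

```python
# Toy append-only ledger: each block stores the hash of the previous block,
# so altering any past entry breaks every hash after it. Tamper evidence
# only; real blockchains add distributed consensus on top of this idea.
import hashlib
import json


def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, data: str) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": previous})


def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


if __name__ == "__main__":
    ledger: list = []
    append_block(ledger, "alice pays bob 5")
    append_block(ledger, "bob pays carol 2")
    print(verify(ledger))                       # True
    ledger[0]["data"] = "alice pays bob 500"    # tamper with history
    print(verify(ledger))                       # False: the chain no longer checks out
```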

Though I’m a skeptical analyst, I admit thinking there was some sleeper value in blockchain, if a few properly governed projects came along that could create safer, smoother rails to adoption.

We’ve seen vendors with very nice use cases for distributed ledgers, particularly in multi-party transactions, IP and media rights, legal agreements and audits, and proof of identity, where a blockchain can use a combination of transparency and immutability to provide a decentralized, shared system of record – whether nodes are exposed publicly or among permissioned parties.

That still doesn’t make blockchain a valid replacement for modern databases and data warehouses, which already offer enterprises more scalable back-ends for applications, with better security and governance controls.

The slow, energy-sucking processes of mining, recording and storing parallel blockchains haven’t proven sustainable for business use cases beyond tracking a few heads of lettuce with E. coli on them.

The Intellyx Take

The main roots of Web3’s failure weren’t technological; they were misaligned incentives and the inevitable association of Web3 with crypto and NFT market madness.

Unethical players could rise to the top, confidently claiming top-line growth, and attracting continued fundraising investments without mentioning the inevitable crash.

I’ve met early participants in the blockchain space with intentions for a better world with unique computing models and applications particularly well-enabled by decentralization. They weren’t building mansions on islands and taking crypto-bros out on yachts.

Who knows? Once the incentives and risks of easy money are washed out of the market for good, maybe the dream of global access to a new, decentralized internet of applications and value could someday be realized.

3 key actions to improve developer experience
https://sdtimes.com/software-development/3-key-actions-to-improve-developer-experience/ | Wed, 07 Dec 2022

When it comes to succeeding with digital initiatives and building high-performing software teams, it is important to deliver top-notch developer experience. A superior developer experience helps attract and retain talented developers. Gartner’s 2021 Software Engineering Leader Survey shows that hiring, developing, and retaining talent ranks in the top three challenges for 38% of software engineering leaders.    

Developer experience refers to all aspects of interactions between developers and the tools, platforms, processes and people they work with, to develop and deliver software products and services. In order to create a superior developer experience, software engineering leaders must provide an environment in which developers can do their best work with minimal friction and maximum flow.

Software engineering leaders working towards improving their team’s developer experience should follow these three actions.

 Improve Developer Journeys

Developer experience extends beyond developer tools and technologies. Building and retaining a high-performance development team starts with a positive onboarding experience. A streamlined onboarding process enables developers to make meaningful contributions much faster, which in turn makes the entire team more productive.

Creating a frictionless developer onboarding experience will improve overall developer experience. For software engineering leaders, it is important to ensure that developers are equipped to get started on day one. Be sure to provide a fail-safe environment that is immune to accidental errors. Create a sense of belonging and camaraderie.

Developer self-service can also improve developer journeys by reducing process inefficiencies and, in some cases, eliminating unnecessary processes entirely. Self-service development workflows can be streamlined through internal developer portals. Accelerated feedback loops let developers continually improve code quality and understand what is and is not working. By establishing feedback loops, developers are able to experiment, measure progress and continuously improve, which strengthens both their deliverables and their ways of working. This shortens time to value and provides quicker insight into the value delivered from the user’s perspective.

Optimize for Creative Work

To improve developer experience, software engineering leaders must go beyond optimizing development workflows and provide focus time for deep, creative work along with the freedom to fail and experiment.

A collaborative work environment is a crucial ingredient of developer experience, since software engineering is a team sport and involves multiple team members and teams. Teamwork and collaboration amplify original ideas and shorten the cycle time from idea to production. Collaboration between team members lends emergent properties to the team.

Fostering communities of practice can help create an open and collaborative work environment. Practitioner-led communities are fundamental to open, collaborative and effective learning. People own what they help create and work together to address challenges. Software engineering leaders must encourage their teams to create communities of practice through active engagement, regular activities, member focus, collaborative problem solving and a powerful strategic vision.

Be a connector manager who enables cross-pollination of ideas and skills. Connector managers create a trusting and transparent team environment that supports peer-to-peer coaching.

Finally, leverage automation for repetitive tasks to free up time for creative work. Automating away the routine and repetitive aspects of software engineering enables developers to focus on applying their creativity to solving problems. Rather than simply maximizing the time available for writing code, developers should be free to ideate, build new solutions, and collaborate and communicate with their peers, partners and customers.

Make a Meaningful Impact

Although most organizations focus on improving developer journey, developer experience in the long term goes beyond software development workflows. It involves giving developers the opportunity to make a meaningful impact. 

Foster a culture where developers don’t feel embarrassed to admit a mistake, ask a question or offer a new idea. Psychological safety is the top predictor of high performance in software engineering teams, and a number of organizations have reported that it is a key characteristic of their high-performing teams.

Codifying software: An ideological perspective
https://sdtimes.com/software-development/codifying-software-an-ideological-perspective/ | Fri, 04 Nov 2022

Developers write code, thereby codifying software’s internal rules and outward appearances.

Programming is not a belief system – it’s part of computer science for a reason. There is a systematic approach for improving development expertise, gathering and analyzing data, and proving or disproving that the software works. Logic and data are codified in software and in our processes around creating software.

The human influences of societal norms or religion should have little to do with the quality or performance of the software a group of developers can churn out.

Ideology precedes architecture

A particular ideology for codifying technology sets an organization apart from its peers. When team members share beliefs and behaviors, the resulting products can gain consistency in design and utility that ‘just makes sense’ to customers who resonate with the approach.

A company’s founder or an executive can, of course, set the tone for an organization; think Steve Jobs or Andy Grove. But for software development, an ideology is usually more than a cult of personality.

Development teams with shared ideology can perceive and respond to opportunities and challenges as a group, like flocks of birds that seem to magically change direction.

The codification of the group’s encouraged and discouraged behaviors can take many forms, including a predilection for certain technologies or methodologies. In this sense, an ideology establishes an organizational intent that influences the architecture of delivered software.

A services methodology is not an ideology

A lot of services companies tout an overarching Agile or DevOps methodology, a ‘customer first’ mentality, or ‘proven processes’ for delivering great work. A skeptic sees these as branding exercises to give clients confidence and recruit better developers.

As analysts, we have a hard time evaluating and comparing services offerings as they relate to product value, except when they relate directly to product delivery and training, or operationalization of a SaaS solution for customers.

Open source collaboration magic

Open source projects start out as a kernel of code in a repository, and a code of conduct for founding the community of current and future contributors. 

Open source believes in a shared collaborative ideology and in democratizing access to non-proprietary platforms, thereby leveling the playing field for individuals building solutions atop them, so that societies benefit from the resulting innovation.

At an open source conference, the ideology of an agreed-upon code of conduct for treating each other with respect supersedes any actual discussion of code and components. Projects that lose their collaborative energy become toxic and get abandoned, as contributors take their talents elsewhere.

Design-first versus product-first

I covered the quandary of design versus product-led development modalities in my previous column on design-led versus product-led delivery teams. 

Design-led ideologies lean on developer intuition, the healthy competition of ideas, and fast iteration to constantly improve the software customer experience, whereas product-led development focuses on constantly delivering and improving features that meet customer demand. 

These modes of thinking coexist productively within many orgs. Engineering and operations groups may be able to bridge the gap between design and product orientations by crafting shared models that represent their commonalities, giving them a common language to mix the best of both worlds.

Inclusive versus exclusive

An ideology of making ‘software for all’ – users and employees of all skill levels, cultures, and abilities – sets a high premium on user experience and accessibility. The world’s most widely accepted products are practically self-explanatory and built upon this mindset.

Conversely, many software vendors cater unapologetically to expert practitioners only, or to industry specialists who bring deep domain knowledge. There’s value in delivering the right tool for the job, after all.

No-code, low-code and pro-code development tools offer a spectrum of these ideologies in action.

Coding for global good

Ever since Google quietly dropped its own ‘don’t be evil’ mantra more than a decade ago, I’ve been skeptical of companies that say they exist to improve the greater good. The recent trend of ESG (environmental, social & governance) has been co-opted as the latest form of ‘greenwashing’ by corporate entities seeking to publicize their environmental concerns. 

Still, if such goals make data centers increase efficiency and run on renewable energy, and cause logistics vendors to reduce overall emissions by optimizing truck routes, that’s inherently good.

An AI company developing healthcare applications or self-driving cars can set out to save human lives, and the resulting software will be more likely to do so if that goal genuinely matters to the team.

The Intellyx Take

A useful development ideology is not just defined, it is cultivated by a group over time. It is not something that corporate leadership can dictate.

In today’s fast-paced world, merged companies never retain their ideological foundations for long, as principal collaborators move on, engaging their efforts and beliefs in the next startup.

Strong ideologies, like proven methodologies, are built and reinforced from within. If ideologies resonate with customers once codified in software, later teams can inherit them for useful purposes.

It’s Time to Consider RISC-V
https://sdtimes.com/software-development/its-time-to-consider-risc-v/ | Fri, 07 Oct 2022

Over recent months, ARM has pulled licenses from Nuvia, an ARM server-focused company, because of Qualcomm’s acquisition of that company. Then, recently, it sued Qualcomm to block the use of Nuvia’s solutions, effectively restricting Qualcomm from benefiting from that acquisition. On the surface this suit makes little sense, because ARM is not a player in servers, so you’d think it would support any of its licensees going into that market. But another aspect is that the joint development of a PC part by these two companies would, at least on paper, create a better solution than Apple’s M1 processor, which also operates under an ARM license. That may have led Apple to push ARM to block Qualcomm from creating a Windows-on-ARM part that would outperform Apple’s macOS alternative.

Regardless of why this lawsuit was filed, a licensing vendor suing a licensee for doing something the license appears to allow is not only unprecedented but also suggests an unusual amount of control by one or more vendors over the ARM platform. It also indicates that ARM is moving into a period where it may become financially unviable and thus unable to resist becoming a competitive pawn for a powerhouse like Apple.

It may be time to hedge ARM development with RISC-V.  

ARM’s coming problem

What makes ARM appear vulnerable to the kind of pressure Apple may be applying is that its IPO won’t provide the operating capital it will need afterward. Back when ARM was going to be acquired by NVIDIA, the purchase money would have flowed to SoftBank, which owns ARM, but NVIDIA had pledged to fund the company at competitive levels afterward. The IPO, in contrast, just provides money to SoftBank and doesn’t seem to leave ARM with much in the way of significant operating capital.

ARM is already downsizing in anticipation of that problem, with 15% of its staff, or up to 1,000 jobs, expected to be laid off. Layoffs like this tend to reduce productivity substantially. Employees at risk focus on finding a new job, which disrupts those around them, and, post-layoff, those who remain often start looking for another job too because they are overwhelmed by the additional work left behind by those who were laid off.

Because they tend to be done in a rush, layoffs can and do remove critical employees and incent high-value employees to leave, because those employees rightly anticipate a death spiral in which layoffs damage execution, which cuts income, which then justifies subsequent layoffs.

In short, this litigation looks desperate, and that desperation may stem from a combination of Apple pressure, the layoffs and a poorly conceived IPO that doesn’t appear to provide the operating capital an ARM newly independent from SoftBank will need.

RISC-V

While RISC-V’s encroachment into ARM’s space has mostly been in embedded systems to date, with concerns about ARM’s long-term viability increasing there is a reasonable chance that an open source, non-proprietary solution will prove a better long-term bet than ARM’s proprietary one.

RISC-V is open source, an approach far more popular with developers and large-scale users than ARM’s closed source. Companies like Qualcomm provide much of the value to the ARM technology they develop and sell, and this same capability could be transferred to RISC-V, which has much lower licensing fees attached to it and potentially far more flexibility. Effectively, licensees can do what they want with RISC-V cores.  

RISC-V’s growth at ARM’s expense in the embedded market also supports a move from ARM to RISC-V in smartphones. The licensing issues surrounding ARM are largely not a problem for RISC-V licensees: a company works with RISC-V International and licenses, or works with, another licensee to create a solution, which is exactly what ARM is currently objecting to Qualcomm doing, even though Qualcomm purchased Nuvia and the two are now one company. Both Intel and NVIDIA work with RISC-V: Intel to help populate its fabs, and NVIDIA as an alternative to ARM that will likely get more focus now that its attempt to buy ARM has fallen through.

One company, Microchip, has gone on record saying it moved from ARM to RISC-V because of lower development and licensing costs, a better long-term outlook and more flexibility. In short, RISC-V better met its short-term needs and, in particular, its low-risk, long-term needs.

Consider RISC-V a hedge

We are entering a period of extreme change. We are rethinking where a smartphone leaves off and a PC begins. ARM has struggled with both PCs and servers, and its recent move against Qualcomm suggests that ARM is now shooting itself in the foot rather than making prudent decisions about market growth. Its IPO is problematic because it doesn’t fund ARM as much as needed and mostly just buys its freedom from SoftBank, and the announced layoffs certainly suggest that a post-IPO ARM will be in financial distress, putting the platform at risk and leaving it vulnerable again to acquisition or failure.

That is a lot of short-term and long-term risk. Companies were already hedging their ARM efforts with RISC-V before this all happened. While it is too early to suggest abandoning ARM, these indicators suggest that hedging with RISC-V could make a critical difference should ARM fail or be acquired by a company like Apple, which would likely terminate the ARM licensing program much as it did years ago with the Mac.

In the end, it is far better to have a fallback position and not need it than to need one and not have it. Suddenly, RISC-V is looking like a stronger long-term platform than ARM.

Analyst View: 12 Essential Skills for Every Agile Application Developer
https://sdtimes.com/agile/analyst-view-12-essential-skills-for-every-agile-application-developer/ | Tue, 06 Sep 2022

Agile is a prerequisite for digital business because it combines an early and frequent delivery of customer value with the ability to rapidly adapt to changing market conditions. Agile has become essential to compete with digital-born businesses and to remain relevant in a world of digital fluency.

This means that the demand for experienced developers skilled with Agile processes and practices has reached a critical point due to significant growth in Agile adoption, up from 37% in 2020 to 84% in 2021 per the 15th Annual State of Agile Report.

Agile application developers should not wait for continuing external factors to drive the evolution of their skills; instead, they should proactively explore, identify, and learn skills to improve their ability to deliver business value. These skills help to continuously improve development and decision-making processes and strengthen technical and interpersonal skills to increase customer satisfaction.

Twelve skills are critical for Agile application development (app dev) teams to drive digital business.

Core skills: These skills are fundamental to Agile app dev. Keep in mind that not every developer needs to be an expert in every area, as Agile teams are cross-functional and rely on multiple individuals’ skills.

  1. Scrum

Scrum is the dominant Agile framework, providing an iterative and incremental approach for solving complex problems. Small collaborative teams typically deliver work in short iterations (sprints) of about two weeks.

  2. Kanban

Kanban is a method for visualizing, managing and continually improving a process’ ability to deliver a service. It is a pull-based delivery flow system that exposes constraints, creates flow by limiting the amount of work in progress and signals when capacity is available to start new work. 
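As a minimal illustration of that pull signal, here is a sketch of a WIP-limited column; the column name, limit and work items are invented examples.

```python
# Illustrative WIP-limited Kanban column: work is pulled in only while the
# limit leaves capacity, which is how the system "signals when capacity is
# available to start new work."
class KanbanColumn:
    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def has_capacity(self) -> bool:
        return len(self.items) < self.wip_limit

    def pull(self, item: str) -> bool:
        if not self.has_capacity():
            return False              # the constraint is exposed, not hidden
        self.items.append(item)
        return True


if __name__ == "__main__":
    in_progress = KanbanColumn("In progress", wip_limit=3)
    for story in ["login page", "audit export", "rate limiting", "dark mode"]:
        if not in_progress.pull(story):
            print(f"WIP limit reached: '{story}' waits until capacity frees up")
```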

  3. Metrics

Successful app dev teams objectively measure and analyze their software development processes. Metrics provide actionable feedback to guide Agile teams and enable better conversations with stakeholders.

  4. User stories

User stories in Agile development shift the focus from writing requirements to addressing customer needs. A user story contains a short description of a feature from the perspective of the role desiring the new capability, typically in the format: “As a <type of user>, I want <some goal> so that <some reason>.”

  5. Customer focus

Product development must become customer-centric, with developers getting closer to their customers, understanding their needs and validating success through actionable feedback. Learn to empathize with customers using user personas, customer journey mapping, in-depth interviews and usability testing. 

  6. Test-first

Test-first practices like test-driven and behavior-driven development ensure that application developers build the right software the first time. With the additional reuse benefits of validation and documentation, creating tests before writing the code provides exceptional value to the development process.
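A miniature test-first example, pytest-style and using a made-up apply_discount function: the test is written before the implementation exists, fails first, and then drives the simplest code that makes it pass.

```python
# Test-first in miniature: the test below is written (and fails) before
# apply_discount() exists; the implementation is then the simplest code that
# satisfies it. The discount rule is invented purely for illustration.

def test_apply_discount_never_goes_negative():      # written first, fails first
    assert apply_discount(total=100.0, percent=50) == 50.0
    assert apply_discount(total=100.0, percent=150) == 0.0   # clamped, never negative


def apply_discount(total: float, percent: float) -> float:  # written second
    """Simplest implementation that makes the test above pass."""
    return max(total * (1 - percent / 100), 0.0)


if __name__ == "__main__":
    test_apply_discount_never_goes_negative()
    print("tests pass")
```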

  7. Continuous learning

A key tenet of agility is that practitioners be open to learning new skills — not just from project to project, but also as part of a lifelong learning process. Waiting for an “expert” to perform a critical project step impedes team agility. Multiskilled individuals enable teams to quickly solve problems and achieve better business outcomes.

Value-added skills: These skills represent the next level of Agile maturity. In-depth knowledge of them enables the team to continuously improve the delivery process.

  8. Collaborative development

In collaborative development, more than one team member works on a single feature or application at any given time. This can benefit teams by providing a built-in mechanism for code review, reducing development cycle time and broadening skill sets as teammates learn from each other.

  9. Ownership and collaboration

Work style, attitude and interactions with others impact success as much as any technical or professional skill. Small, self-directed, autonomous teams collaborating to build solutions only succeed when all members of the team commit to a set of shared values, such as focus, courage, openness, commitment and respect.

  10. Agile architecture

Traditional approaches to software architecture do not support an Agile development life cycle. Inflexible monolithic applications, architectural complexity and technical debt burden development teams, impede agility and frustrate users. Component-based architectures provide greater development agility, increased deployment flexibility and more process scalability.

Specialized/emerging skills: These skills represent potentially significant, game-changing processes and practices for Agile developers.

  11. Agile database management

Agile teams quickly find that database changes become a constraint that limits velocity. To increase the speed of delivery, cultivate database management skills to become more self-sufficient and reduce dependence on database administrators.

  12. Scaling Agile

Expanding the validated success of Agile pilots to the broader enterprise is both challenging and rewarding for organizations. Agile practices will not only benefit other development teams but also infrastructure and operations, enterprise architecture and security by reducing risk, improving business outcomes and increasing predictability.

Bill Holz is a research VP at Gartner, Inc. focused on software development methodologies and web development. 

Hopping on the low-code locomotive
https://sdtimes.com/lowcode/hopping-on-the-low-code-locomotive/ | Mon, 08 Aug 2022

Will enterprise developers go loco for low-code, or will the whole concept someday become a no-go? 

Until recently, analysts would lump low-code in with no-code and a host of tools offering some form of drag-and-drop ease that gives ‘citizen developers’ (meaning: non-developers) the means to deliver apps. But where should you start in thinking about your own enterprise’s journey to low-code?

Looking at the rate of investment and vendor acquisition in the low-code solution space, it seems like this arena has never been hotter. But in another sense, it’s always been with us.

Ever since the first desktop applications appeared, software companies have sought a way to endow professionals who didn’t have the time or inclination to learn C with the ability to build and design functionality.

Before RAD and 4GLs appeared on the scene, there were Visio and some VB tools on Windows, and HyperCard on the Mac. Even Excel started a revolution among skilled spreadsheet wizards armed with a few macros. These early forms of low-code were pretty localized in orientation, predating the explosion of content yet to come via internet services.

Low code is on a continuum 

According to my colleague Jason Bloomberg, low code is on a continuum between no code (tools requiring no coding at all) and pro code (tools that ease developers in reusing code or leveraging development skills).

There are parallels between the low-code spectrum of ‘assisted development’ solutions and the closely related business process automation and testing spaces, which share several commonalities, including business user-centric, ‘no-code’ point-and-click simplicity on one end and an engineering-centric ‘pro-code’ side on the other.

In fact, we have seen several assisted development players arise from testing or automation tools that found their stride in low-code.

What will drive further low-code adoption?

Low-code solutions arose from the natural desire of every business to get ‘all hands on deck’ and become productive in delivering on the needs of customers, given constrained IT resources and budgets.

Beyond that desire, there are several major challenges that call for a low-code approach:

Maintainability. By far, technical debt is the granddaddy of low-code challenges. The need to maintain existing systems and retire or refactor obsolete or malfunctioning application code takes up the bulk of most established companies’ IT resources.

Low-code tools must add features that are modular, interoperable and especially maintainable, so developers aren’t left trying to pick up the pieces within a muddled, object-disoriented code dump.

Integration. Many low-code tools started out from an integration perspective: allowing teams to stitch together and move data between multiple tools or services to provide new functionality that wasn’t readily available before. 

Low-code solutions should incorporate both internal core systems and external services into workflows, without requiring users to understand how to construct their own APIs. Always try digging past the ‘wall of logos’ to make sure the CRM or OMS you already own can be effectively supported for end customers.

Security. SecOps teams are extremely resource constrained, and have difficulties figuring out exactly how to authorize conventional development teams for environment access, much less giving low-code citizen developers appropriate access.

Ideally, modern low-code solutions can ease this security management burden, with role-based access controls assigned for team and functional responsibility levels. Fail to get security right and you will either get hacked or get rogue IT, as teams run off and do it anyway outside of any draconian oversight.
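As an illustration of what role-based access controls assigned by team and responsibility level might boil down to, here is a minimal sketch; the roles, permissions and environments are invented for the example.

```python
# Illustrative RBAC for a low-code platform: citizen developers get sandbox
# rights, while promotion to production stays with platform engineers.
# Role names, permissions and environments are invented examples.
ROLE_PERMISSIONS = {
    "citizen_developer": {"build_app", "deploy_sandbox"},
    "pro_developer":     {"build_app", "deploy_sandbox", "deploy_staging"},
    "platform_engineer": {"build_app", "deploy_sandbox", "deploy_staging",
                          "deploy_production", "manage_connectors"},
}


def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())


if __name__ == "__main__":
    print(is_allowed("citizen_developer", "deploy_sandbox"))     # True
    print(is_allowed("citizen_developer", "deploy_production"))  # False
```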

Functional integrity. Manually coded processes and rote processes hidden within monolithic silos need to be rebuilt by business domain experts inside the prospective low-code platform.

Whether the new functionality is vertical or horizontal, the low-code platform should provide adequate ‘safety bumpers’ in the form of pre-flight testing plus early monitoring and feedback, so owners can be alerted if any application goes off track.

The Intellyx Take

If low code was just about reducing labor costs or IT resource constraints, the space would gradually be consumed by adjacent development tools becoming easier to use, or automation tools becoming robust enough to define applications. 

Bringing the intellectual capital of business expertise to bear within our application estates might just be the biggest game-changer for the future of low code. Wherever the code road ends, a new opportunity arises.

Hey, thanks for reading! Why not watch my archive of the latest 2022 LC/NC Developer Day session now?
