Alexandra Weber Morales, Author at SD Times

Ethics, addiction and dark patterns
We’ve all fallen prey to them at one time or another: Design techniques such as the bait-and-switch, disguised ads, faraway billing, friend spam and sneaking items into the checkout cart. These “dark patterns” are interfaces “carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills,” according to the website Darkpattern.org, which is dedicated to exposing these tricks and “shaming” companies that use them.

Many of these shady practices are classic business scams brought online. Perhaps more worrisome are the new ways mobile apps capture our attention — until we can’t break away.

In Addiction by Design: Machine Gambling in Las Vegas (Princeton University Press, 2013), MIT science, technology, and society professor Natasha Dow Schüll crystallizes her 15 years of field research in Las Vegas in an analysis of how electronic gamblers slip into a twilight called the “machine zone” — and how the industry optimizes for maximum “time on device.” Slot machines are among the most profitable segments of the entertainment industry in the United States, according to Tristan Harris, a former design ethicist for Google.

In a disconcerting essay on Medium, Harris argues that Schüll’s findings don’t only apply to gamblers:

“But here’s the unfortunate truth — several billion people have a slot machine in their pocket: When we pull our phone out of our pocket, we’re playing a slot machine to see what notifications we got. When we pull to refresh our email, we’re playing a slot machine to see what new email we got. When we swipe down our finger to scroll the Instagram feed, we’re playing a slot machine to see what photo comes next. When we swipe faces left/right on dating apps like Tinder, we’re playing a slot machine to see if we got a match. When we tap the # of red notifications, we’re playing a slot machine to see what’s underneath.”

Thanks to intermittent variable rewards, Harris and many others note, mobile apps easily become addictive. But when you design for addiction, you open yourself to ethical questions.

In Hooked: How to Build Habit-Forming Products (Portfolio, 2014), consumer psychology expert Nir Eyal recommends using operant conditioning — intermittent rewards — to create addictive products. But are all products meant to be addictive, or is a “viral” product one that will flame out after the hype is over? Are “habit-forming” apps a sustainable business model? In short, what are the ethics of addictive design?

Interestingly, though Eyal argues that technology cannot be addictive, Schüll’s gambling research indicates otherwise. Eyal has even seemed to contradict his own book’s premise.

Technology dependence and distraction are easily solved, so calling them addictions is overkill, Eyal said: “Everything is addictive these days. We’re told our iPhones are addictive, Facebook is addictive, even Slack is addictive.” However, he admitted, one to five percent of the technology user population does struggle to stop using a product even when they want to.

“What do these companies that have people that they know want to stop, but can’t because of an addiction, do? What’s their ethical obligation? Well, there’s something we can do in our industry that other industries can’t do. If you are a distiller, you could throw up your hands and say ‘I don’t know who the alcoholics are.’ But in our industry, we do know — because we have personally identifiable information that tells us who is using and who is abusing our product. What is that data? It’s time on site. A company like Facebook could, if they so choose, reach out to the small percentage of people that are using that product past a certain threshold — 20 hours a week, 30 hours a week, whatever that threshold may be — and reach out to them with a small message that asks them do they need help?” Eyal said.
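In code, the check Eyal describes amounts to little more than a threshold query over time-on-site data. Here is a hypothetical sketch in Python; the function names, data shape and threshold are illustrative assumptions, not any real Facebook API.

```python
# Hypothetical sketch of the intervention Eyal describes: flag users whose
# weekly time-on-site crosses a threshold and queue a check-in message.
# Names, data shapes and numbers are illustrative, not any real Facebook API.

WEEKLY_HOURS_THRESHOLD = 20.0  # "20 hours a week, 30 hours a week, whatever"

def users_to_contact(weekly_hours_by_user: dict[str, float]) -> list[str]:
    """Return the small percentage of users past the usage threshold."""
    return [user for user, hours in weekly_hours_by_user.items()
            if hours > WEEKLY_HOURS_THRESHOLD]

if __name__ == "__main__":
    usage = {"alice": 4.5, "bob": 31.0, "carol": 22.3}  # hours this week
    for user in users_to_contact(usage):
        print(f"Queue check-in message for {user}")  # e.g. the wording below
```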

He suggests a simple, respectful pop-up message to these users that reads, “Facebook is great but sometimes people find they use it too much. Can we help you cut back?” It remains to be seen if Facebook will implement such a measure, but Harris has come out swinging in the opposite direction from Eyal. He has launched timewellspent.io, a movement to “reclaim our minds from being hijacked by technology,” according to the website.

Harris offers an eight-point ethical design checklist, recommending that technology products:

1. Honor off-screen possibilities such as clicking to other sites
2. Be easy to disconnect
3. Enhance relationships rather than isolate users
4. Respect schedules and boundaries, not encouraging addiction or rewarding oversharing
5. Help “get life well lived” as opposed to “get things done” — in other words, prioritize life-enhancing work over shuffling meaningless tasks
6. Have “net positive” benefits
7. Minimize misunderstandings and “unnecessary conflicts that prolong screen time”
8. Eliminate detours and distractions.

What you want, when you want it. Key trends in modern UX design
The year was 1997. Steve Jobs fidgeted on a stool in front of the Worldwide Developers Conference, chatting with the audience: “You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to sell it. I’ve made this mistake probably more than anybody else in this room — and I’ve got the scar tissue to prove it.”

Jobs was deftly answering a man who had just accused him of abandoning a pet technology. The Apple founder went on to explain that his company’s mission was to discover “What incredible benefits can we give to the customer? Where can we take the customer? Not, ‘Let’s sit down with the engineers and figure out what awesome technology we have and then how we’re going to market that.’ And I think that’s the right path to take.”

As it happens, 1997 was also the year Jony Ive became Apple’s senior vice president of industrial design. He went on to determine the curves, gloss and heft of the iMac, iPhone, iPad and more. Two years ago, the San Francisco-based Ive became Apple’s chief design officer — a role that exemplifies how strategic user experience has become in the technology world.

Inclusive design is strategic design
Unveiling his third annual “Design in Tech Report” in a March 12, 2017, talk, Silicon Valley design guru John Maeda noted that design is now a top priority for venture capitalists, consultancies and even stuffy enterprise software giants: “IBM design has been probably the largest corporate effort to amass design energy. […] Google is cool. Who would have thought? The perception on Google [has] definitely shifted.”

Google has indeed changed its tune. Google Design has created “a visual language for our users that synthesizes the classic principles of good design with the innovation and possibility of technology and science,” according to the spec at Material.io. The Material tools and components help developers build mobile-ready cross-platform experiences that have touch, voice, mouse, and keyboard as first-class input methods.

In Maeda’s formulation, “computational design” is a discipline that melds artistry, business, engineering — and inclusion.

“In my official title at Automattic, I’m the global head of computational design and inclusion. People ask me, ‘Why do you have the word ‘inclusion’ in your title?’ It’s because I believe that design and inclusion are inseparable,” he said. Creativity is intrinsic to inclusion, according to Maeda, but that energy is lost when inclusion is relegated to a human resources process rather than seen as fueling beautiful user experiences.

Inclusiveness is a common theme for Google as well. Reaching “the next billion users” was a mantra at the Google I/O conference in May 2017. Google speakers noted that many of these future customers are now or will be disabled: one in five people will have a disability of some sort in their lifetime.

“This isn’t just for users with a disability or an accessibility need. I want to get across that this helps all users,” said Patrick Clary, a product manager on accessibility at Google who himself uses a wheelchair, in his Google I/O ’17 talk, “What’s New in Android Accessibility.”

Why should accessibility interest app developers? Products designed for blind or low-vision users can help those who have their eyes otherwise occupied, such as drivers, he said. Designing for those with motor impairments helps others who can’t use their touch screen because it’s inconvenient or dangerous. “It’s really about designing for the widest possible range of abilities within the widest possible range of situations,” Clary said.

Android accessibility settings, APIs and long-running services are nifty developer tools for changing how users consume or interact with devices. For blind users, services include TalkBack and BrailleBack (which can activate a refreshable braille display), while Switch Access and Voice Access are targeted to those with motor impairment such as a tremor.

Meanwhile, Apple’s design aesthetic continues to revolve around user experience. At the 2017 WWDC, the company reminded attendees to develop not for “users”, but for humans. It turns out this is a longstanding tenet for the company: Apple’s evolving Human Interface Guidelines actually date back to 1987, which was also the year the Macintosh II personal computer was launched. At this year’s WWDC, the company maintained a forward view with its emphasis on humanity — and not just what humans see and do, but what they hear.

Sound: The next frontier
In 2003, Web usability expert Jakob Nielsen wrote, “Visual interfaces are inherently superior to auditory interfaces for many tasks. The Star Trek fantasy of speaking to your computer is not the most fruitful path to usable systems.” He was wrong.

With Siri, the first commercially viable personal assistant, the 1970s futurama had come to fruition. By 2020, Gartner predicts that nearly a third of all web browsing will be done without a screen and 85% of customer interactions will be managed by bots. ComScore predicts that half of all searches will be via voice. Sound: the next frontier.

According to Apple sound designer Hugo Verweij, sound can transform user experience, but too often, app developers miss the opportunity to compose custom audio notifications to distinguish their apps from others. In a compelling talk at WWDC 2017, Verweij offered a checklist of questions to guide the use of sound: “Will my app send frequent notifications? Can sound play a role in my app’s branding? Can the UI benefit from an audible component? How would I understand my app without a GUI?”

“Don’t overdo it — silence is golden,” he said, displaying a hilarious cautionary example of the iOS maps app overdone with silly sound effects, as if the comedian Victor Borge, of the famous “Phonetic Punctuation” routines, had commissioned it.

“If you’re making a game, it makes sense to make a whole world of sound, but we don’t want every app to sound like a game,” he said, noting the importance of always giving users the option to mute apps as well.

When it comes to sound, details matter. It can be a tricky game of trial and error to synchronize sound to haptics or animation — and getting it wrong can create illusions such as making buttons feel sluggish, or awkwardness when sound isn’t synchronized to video. When it comes to editing, while it’s advisable to work with an expert sound designer or sound engineer, simple tools such as GarageBand can make a huge improvement, Verweij advised.

Getting back to the Star Trek scenario, a plethora of machine learning APIs from Google (Cloud Speech API, Cloud Natural Language API), IBM (Watson Conversation), Oracle (Chatbots) and more make it easier than ever to harness voice interfaces for new apps. As it happens, that same combination of artificial intelligence and big data that’s powering machine translation and speech also holds the promise for predictive user experience.
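For a flavor of how approachable these services are, here is a minimal sketch of a speech-to-text call using Google’s Python client for the Cloud Speech API. Treat it as an approximation: exact class names have shifted across client-library versions, and the storage path is invented.

```python
# Approximate sketch of calling Google Cloud Speech-to-Text from Python.
# Requires the google-cloud-speech package and GCP credentials; class
# names have shifted across client-library versions.
from google.cloud import speech

client = speech.SpeechClient()
audio = speech.RecognitionAudio(uri="gs://example-bucket/command.flac")  # invented path
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries ranked alternatives; take the top transcript.
    print(result.alternatives[0].transcript)
```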

Predictive UX with AI and analytics
One thing the design-focused Jobs might not have foreseen is how much user data we would be accumulating in 2017 — and how rapidly we are learning to put it to good use. User experience is no exception.

“In the UX world, AI and automation is transforming the role of the designer. Traditionally, UX teams would turn to metrics and tools such as usability tests, usage data and heat maps, to understand how to improve the functionality and effectiveness of a system. However, in the age of AI, we now have empirical, actionable data that we’ve never been privy to before, giving us greater granularity into optimizing the user experience,” said Rephael Sweary, cofounder of the San Francisco-based digital adoption platform WalkMe.

According to Sweary, AI helps conduct quantitative usability testing, easily extrapolating characteristics such as:

  • Location, job title, device
  • Time of day and length of session
  • User flow and drop rates within the application
  • Behavior analysis based on screen recordings of drops from user flows
  • Total number of users, unique visitors and sessions

“What we do with AI is optimize adoption. We define a goal for the AI algorithm, like ‘increase users who use feature X.’ We run our AI algorithm across our entire data set and look for people who use this feature. Then we predict adoption based on the people who use this feature. For example, people who use the app more than three times at work, uploading two or more photos, are most likely to use the ‘share’ feature,” said Kobi Stok, director of mobile product and technology at WalkMe. The company calls the ideal time to make a request or introduce a feature the “happy moment” for user engagement.
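WalkMe has not published its algorithm, but the workflow Stok describes (label the users who already use a feature, then score everyone else for likely adoption) maps onto an ordinary classifier. A minimal sketch with scikit-learn follows, in which the features and data are invented for illustration.

```python
# Minimal sketch of the adoption-prediction approach Stok describes:
# train on users who already use the "share" feature, then score the rest.
# Feature names and data are illustrative assumptions, not WalkMe's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: sessions_per_week_at_work, photos_uploaded
X_train = np.array([[4, 3], [1, 0], [5, 2], [0, 1], [3, 2], [1, 1]])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = already uses "share"

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score users who don't use "share" yet; high scores mark the
# "happy moment" candidates for prompting the feature.
X_candidates = np.array([[4, 2], [1, 0]])
print(model.predict_proba(X_candidates)[:, 1])
```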

User experience metrics can also improve onboarding and training. “Typically, training is done with a firehose approach. You take a group of users away from a productive line of work, you train them for a few days and then you send them back. Wouldn’t it be nice to tailor training to only issues they have been experiencing while using the software?” asked Bogdan Nica, vice president of product and services for Knoa Software in New York City.

Knoa Software specializes in SAP application performance management. Now, as SAP is consolidating around a user interface revamp called Fiori, Knoa’s UX metrics can help ease the migration and identify “adoption gaps”.

“A main pain point that SAP users have had is that there are so many different UI standards,” Nica said. “A major migration takes a year. It makes sense to start collecting data before the migration to establish a baseline. You continue collecting during the migration. Then, when it’s completed, you take a look at metrics at the end of the project so you can do a before-and-after analysis, but also to make sure it’s fully adopted — to identify adoption gaps, because there’s always something that goes wrong. Maybe everything works from a technical point of view, but business processes are out of whack.”

Enterprise software design is being forced to improve user experience as it competes with consumer apps for employee attention, Nica notes. “There are different expectations of what good software looks like now. You can no longer force customers to use business software that looks like it was designed in the 80s or 90s,” Nica said. But what about apps that are too immersive? As design grows in importance, so does the obligation to use it responsibly.

Designing user experience responsibly should join security and privacy as a first-class concern — and it has already become a priority to limit, say, texting and driving through driving detection in mobile apps. Like security and the other “-ilities,” it may still get lost in the shuffle as developers strive for faster releases. Unless… Could a new class of hybrid UX/techie bring these issues to the fore?

The new designer/developers
As a profession, design is embracing software development technology, Maeda believes. His surveys find more and more hybrid designers who have coding skills in JavaScript, PHP, or Ruby on Rails. He emphasizes that computational design requires an ability to iterate based on UX metrics, understand algorithms and embrace cutting-edge form factors such as self-driving cars and other connected devices.

And some hybrid designer/developers, like Michael Hoffer, started on the developer side. He’s a research scientist at Goethe University in Frankfurt, Germany, who created VRL Studio, a slick visual programming environment for Java.

“There are many powerful textual programming languages out there that already have a diverse and comprehensive ecosystem around them. Building a new visual programming language is challenging, at least if it is supposed to serve as a replacement for general purpose programming languages. For me, it is very important to provide visual programming environments that do not isolate developers from the ecosystem of existing languages and platforms. Therefore, I develop new interactive visual representations for existing textual programming languages,” said Hoffer.

VRL is not only sleek and powerful, it’s easy on the eyes — and Hoffer has done this on purpose: “Aesthetic aspects play a huge role. Actually, they are important for textual programming languages as well. Even though outsiders do not usually understand the beauty of well-structured source code, developers who have to look at and work with that code all day long do certainly develop a taste for beautiful code.”

The same applies for IDEs, Hoffer believes, and it can help developers find productive flow — and even reason more effectively about program structure: “Providing a good user experience for developers is highly important. Designing development environments, especially visual programming environments, that are aesthetically pleasing is very hard. Good user experience is correlated to finding the right abstractions. For general purpose development environments, this is especially hard because any simplification runs into the danger of limiting the possibilities of the IDE.”

As software becomes ubiquitous, much of its arcana will be made accessible to the masses via more beautiful, inclusive and usable designs — a fact that has motivated SAP founder Hasso Plattner to fund prestigious design schools around the world.

“Hasso Plattner is a very systems-oriented guy. He’s the architect behind the HANA in-memory database technology, but in a lot of the recent events that he’s had, he’s started to focus more and more on the UX side,” said Knoa Software’s Nica. “They realize no matter how powerful SAP is on database or server side, none of that matters if users can’t use it. That includes AI, moving to cloud — if that is not done with the ultimate objective of improving the user experience, none of that matters. It’s a validation that you cannot fail in software if you single-mindedly focus on the user. That’s your best course of action.”

Design education: Learn more
If you want to learn design, there is a growing variety of options, starting with the written word. In Make It New: The History of Silicon Valley Design (MIT Press, 2015), Barry Katz spotlights how influential design has been since tech’s early days.

John Maeda’s annual “Design in Tech” report, now in its third year, provides an invaluable snapshot of industry trends. Online resources for insights, education and training include free and paid blogs and courses at MIT Media Lab, Design.blog, Wizeline, Lynda.com, YouTube and Pluralsight.

There are also brick-and-mortar schools: You can get an MFA in interaction design from New York City’s School of Visual Arts, attend the Center Centre (formerly the Unicorn Institute) in downtown Chattanooga, TN, for its two-year user experience design program, or get a BFA in UX from SCAD (Savannah College of Art and Design, in Savannah, GA). Finally, around the world, three Hasso Plattner Institutes of Design Thinking have sprung up thanks to SAP founder Plattner’s philanthropy. These “d-schools” are sited at Stanford University, Potsdam University and the University of Cape Town.

Industry Spotlight: How data science improves ALM
If you’re an agile team, you may still be planning, developing, testing and deploying by instinct. But what if you bring data science into the picture? Enter HPE Predictive Analytics, which can surface everything from accurate planning estimates in agile projects to efficiencies in defect detection for continuous testing.

SD Times spoke with Collin Chau, a 10-year HPE veteran based in Sunnyvale, about how HPE is applying data science to historical ALM project data. Chau, senior marketing manager for HPE ALM Octane and Predictive Analytics, explained how machine learning, anomaly detection, cluster analysis and other techniques improve the four stages of the lifecycle.

SD Times: Where do you get data to improve ALM?
Chau: There’s metadata that sits within the ALM platform that’s untapped. If you look closely, a lot of this data can be used to accelerate the quality application development lifecycle. The customers we spoke to want to get more out of that data, to use it to help application project teams optimize resources and reduce risk when managing the application development lifecycle.

We have an experienced team of data scientists developing algorithms for one thing alone: quality application delivery. Data science in the absence of domain knowledge is useless. We have data scientists sitting in application development lifecycle teams to cross-pollinate our tools with ALM-specific data science, offering users prescriptive guidance that is pertinent.

These advanced analytics are multivariate in nature, and borrow technologies specific to machine learning that continuously learn from past data — because only with a constant learning cycle that feeds on updated data can you improve and offer better recommendations and predictions.

Where do these analytics show up?
Predictive analytics is offered and sold as a “plug-in” surfaced through the ALM Octane dashboard, which we are positioning as a data hub that feeds other ALM tools on the market to offer a single source of truth.

Do you have real-world examples?
We have several customers who are participating in the technology review. Predictive Analytics for ALM will go into public beta next month, and general availability will follow shortly after.

You describe four stages of predictive analytics in ALM. What’s the first one?
The first is predictive planning. Most agile projects have no proper planning; teams start out in the dark, running as fast as they can. Development time frames can get over-extended or mis-resourced. In predictive planning, the tool learns from historical data and provides teams with recommendations on user requirements, story points, feature size estimates and so on.
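HPE has not disclosed the algorithms behind predictive planning, but the underlying idea of learning from how long past stories actually took, then suggesting estimates for new ones, can be sketched as a simple regression. Everything in this example, from the features to the data, is an assumption for illustration.

```python
# Illustrative sketch of predictive planning: learn actual effort from
# historical stories and suggest a time frame for a new one. Features and
# data are invented for the example; HPE's actual algorithms are not public.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: estimated story points, number of acceptance criteria
history_X = np.array([[3, 2], [8, 6], [5, 4], [13, 9], [2, 1]])
history_days = np.array([2.5, 9.0, 4.5, 16.0, 1.5])  # actual elapsed days

planner = LinearRegression().fit(history_X, history_days)

new_story = np.array([[8, 5]])
print(f"Suggested time frame: {planner.predict(new_story)[0]:.1f} days")
```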

Next is predictive development?
For coders, the number-one job is to build quality code fast. The tool is intelligent enough to predict code that will break the build even prior to code check-in, to proactively analyze source code for defects or complexity. It can also recommend code to supplement the build – I’m pretty excited that it has the ability to continuously learn from different data points and classify it into information that developers can actually use to avoid rework.
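The specific model here is HPE’s own, but risk-scoring a pending change from simple change metrics is a well-known technique. A hedged sketch with invented features and training data:

```python
# Rough sketch of pre-check-in risk scoring: classify a pending change as
# likely to break the build from simple change metrics. The features,
# data and threshold are illustrative, not HPE's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: lines changed, files touched, cyclomatic-complexity delta
past_changes = np.array([[12, 1, 0], [450, 9, 14], [80, 3, 2],
                         [700, 15, 20], [30, 2, 1], [250, 6, 8]])
broke_build = np.array([0, 1, 0, 1, 0, 1])  # 1 = build failed

risk_model = LogisticRegression().fit(past_changes, broke_build)

pending = np.array([[320, 7, 10]])
risk = risk_model.predict_proba(pending)[0, 1]
if risk > 0.5:  # arbitrary illustrative threshold
    print(f"Warning: {risk:.0%} estimated chance this change breaks the build")
```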

Stage three is predictive testing — does that prevent QA from being squeezed on both ends by DevOps?
Yes. It’s about how you accelerate not just testing, but continuous testing. In this world of continuous delivery, predictive testing gets you to the next level. It helps with root cause analysis of test failures. It then goes a step further by recommending a subset of tests to run based on the code changes checked in.

Predictive analytics helps you zoom in. It says you don’t have to run a suite of 20,000 automated tests when 100 specific ones are sufficient to cover the latest code commits.
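The test-subset idea is straightforward to picture: given coverage data mapping each test to the files it exercises, a commit’s changed files select the tests worth running. A toy sketch follows; the coverage map is a stand-in for real instrumentation data.

```python
# Minimal sketch of change-based test selection: given a coverage map of
# which tests exercise which files, run only the tests touching changed
# files. The mapping here is a toy stand-in for real coverage data.

coverage_map = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login": {"auth.py"},
    "test_search": {"search.py", "index.py"},
    "test_payment_retry": {"payment.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    """Return the subset of tests that exercise any changed file."""
    return sorted(test for test, files in coverage_map.items()
                  if files & changed_files)

print(select_tests({"payment.py"}))  # -> ['test_checkout', 'test_payment_retry']
```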

The final stage is predictive operations?
We are taking real-world production data and leveraging that to tell the customers where there are test inefficiencies. The wow about this is that, because we are taking actual production data, we are infusing application development decisions with data from real-world conditions. It’s no longer lab-based or static, as applications are consistently refined to meet needs in actual operating environments.

How can teams try this on for size?
To sign up for the public beta, go to saas.hpe.com/software/predictive. Learn to optimize your resource investments and reduce risk for agile app releases within DevOps practices. Discover how predictive analytics multiplies the power of ALM Octane’s data hub as a single source of truth.

Industry Spotlight: Extracting data from inside the app
As data streams threaten to drown companies in too much information, the trend in business intelligence is now to house analytics smack in the middle of applications, where they can quickly and securely surface actionable information to developers, users and businesses.

With Progress OpenEdge Analytics360, Progress has a solution for both ISVs and IT departments that don’t want to make the common mistake of purchasing a standalone BI solution that they can’t implement, or endure the frustration of developing their own. For an understanding of how analytics can help businesses, we spoke with Progress Senior Principal Product Manager Mike Marriage, a data warehousing and BI expert in Atlanta, GA. Also in Atlanta, Barbara Ware is a Progress Senior Product Marketing Manager responsible for Progress Services, including business intelligence and data replication.

SD Times: Are most organizations even doing embedded analytics?
Marriage: A lot of people that I come across, if their analytics are not embedded, they address their analytics need in one of three ways: One, they have no analytics whatsoever. Two, they report off of Excel spreadsheets that they use to manually merge data they’re exporting from various operational sources. Three, they have a standalone business intelligence system where they’re taking information from those sources and keeping it outside of the application.

Ware: Also, people often confuse analytics with straight reporting — what’s more, the report might be several days old and turned into a chart. That’s not analytics.

Marriage: Yes, data gets stale. Analytics is about making data actionable to the user. Many of these other systems work against stale information, which can lead to incorrect business decisions.

Embedded analytics is really a matter of taking those analytics and making them part of the transactional application.  By embedding analytics, you maintain context for the user and allow them to take immediate action where it makes sense, and when it needs to happen. To the end user, it’s all transparent. We have the ability to give the analytics solution the same look and feel as the application — because you don’t want it to be a completely different experience. A seamless user interface is very important.

What’s a common misconception around embedded analytics?
Marriage: A lot of vendors claim they can embed within an app, so they take a chart or graph and embed it. That’s just a report working against a snapshot of data. A real embedded analytics solution has to provide workflow integration and respond to events and triggers within the host application. Likewise, the interaction that the user is having with those embedded components should affect what happens in the parent app. Maybe clicking on that order number will open up the order page while adhering to any security rules that have been set. That’s an example of complete integration.

Ware: We like to say we’re breaking down the wall between transactions and analytics.  We leverage the Progress OpenEdge database to not only provide measurements and results based on extracted data, but also relevant operational data directly from the application for real-time decision making.

How do you make sure the results aren’t just ignored? Do push notifications get it to the right set of eyeballs?
Marriage: Analytics solutions are only valuable if the user utilizes them. With an embedded solution, we have the capability to provide push notifications. We can automatically send out a text message or email alerting the user — maybe even sending that content to them — and guiding them back to the application.
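The alerting pattern Marriage describes, in which a monitored metric crosses a threshold and the user gets a message guiding them back into the application, can be sketched in a few lines. All addresses, hosts and thresholds below are invented.

```python
# Illustrative sketch of the push-notification pattern Marriage describes:
# when a monitored metric crosses a threshold, email the responsible user
# a link back into the application. Addresses and hosts are invented.
import smtplib
from email.message import EmailMessage

def alert(user_email: str, metric: str, value: float, limit: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"{metric} hit {value} (limit {limit})"
    msg["From"] = "alerts@example.com"
    msg["To"] = user_email
    msg.set_content(
        f"{metric} is now {value}, past your limit of {limit}.\n"
        "Review it here: https://app.example.com/dashboards/defects"
    )
    with smtplib.SMTP("smtp.example.com") as server:  # assumed SMTP relay
        server.send_message(msg)

if __name__ == "__main__":
    alert("ops@example.com", "Defect rate", 7.2, 5.0)
```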

Do you use Analytics360 yourselves to figure out what types of data are most valuable to your own customers?
Marriage: Yes, we’re using our own analytics to analyze our solution as users work with it. We want to make sure the content is being utilized — and also that it’s performing as well as it should be. Maybe some content is not used often today, but down the line it becomes more popular. We want to know when this happens so that we can optimize whatever content is now in demand.

What’s another “gotcha” in analytics?
Marriage: It’s important that when you look at your data to determine why an event has occurred, that you look at data blended from many sources. For example, working with one of our customers in manufacturing, if I look at defects occurring in certain components, I might find 20 instances of door handles coming loose and three instances of transmission slippage. To the untrained eye, the greater number requires my attention. But if I start lining that data up with cost information, I can see the transmission issue is more important.

Taking this example one step further, you can also factor in social media. Maybe for most of my customers their transmission slippage is being rectified at the dealer, but they’re reporting the shoddy door handles negatively on social media. Too often, companies only respond to one data point. It’s important for a company to obtain and review data points from all sources, and analyze the data from many different angles.
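To make the blending concrete, a toy calculation along the lines of Marriage’s example might weight each defect type by repair cost and social-media noise rather than raw count. All figures below are invented.

```python
# Toy sketch of Marriage's point about blending data sources: rank defect
# types by blended impact (repair cost plus social-media complaints)
# rather than raw count. All numbers are invented for illustration.

defects = [
    # (type, count, repair cost each, negative social mentions)
    ("door handle loose", 20, 40.0, 85),
    ("transmission slippage", 3, 3200.0, 4),
]

def impact(count: int, cost: float, mentions: int,
           mention_weight: float = 25.0) -> float:
    """Blend direct repair cost with a rough dollar value per complaint."""
    return count * cost + mentions * mention_weight

for name, count, cost, mentions in sorted(
        defects, key=lambda d: -impact(d[1], d[2], d[3])):
    print(f"{name}: blended impact ${impact(count, cost, mentions):,.0f}")
```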

To what extent are people using public or external data, say mapping or weather, for context on analytics dashboards?
Marriage: If it has an API (application programming interface) or a web service, you can use it. We’ve worked with organizations that plot assets on a map to see where they are and how they’re operating in real time, through satellite feeds.

It’s not always against stored data; we can also take data streams, say in a manufacturing capacity, to show scrap, defects, quantity produced — and kick in with push notifications and alarms as monitored events are triggered.

Do you see big data usage coming into the mainstream?
Marriage: On the big data front, I believe we’re starting to see greater adoption. Historically, we’ve only seen the largest of organizations embrace that. Setting up a Hadoop stack is a challenge — but within Analytics360, we have connectors to Hadoop, MongoDB, Cassandra, etc. So as companies embrace big data technologies, we’re ready for them.

Also, the skill set for big data is not easy to find.
Marriage: One issue that our customers have is the lack of experienced resources to perform predictive analytics. We’ve realized there aren’t a lot of people in the data science space. There’s a shortage — and it’s only going to get worse. So we’ve included functionality within Analytics360 that will push the information back into the hands of the more casual analytics users, applying formulas to data and forecasting how it might look in the future.

Are there any legal risks when it comes to analytics and have you seen any customers take them?
Marriage: The most important thing is the security of data, whether it’s to comply with HIPAA (Health Insurance Portability and Accountability Act) or European privacy rules such as the GDPR. A lot of companies are simply moving data by exporting in clear text files, dropping it onto a server and importing into an analytics solution. Make sure data is encrypted when you transport it and store it. It’s something we take very seriously.

Does embedding the analytics help or hurt user productivity?
Marriage: A study found that 84 percent of business users want access to analytics within the applications they’re already using, but nearly 70 percent found themselves switching from their usual business apps to separate analytics tools to get the data or analysis needed. In their report, “Augmenting Intelligence with Embedded Analytics,” Nucleus Research estimated this wastes up to two hours of productivity per worker per week. Think about that. In a year, that is about 100 hours of time saved per user. When you factor that over an entire organization, saving that time and cost is another huge benefit of embedded analytics.

Who are you primarily reaching with Analytics360?
Ware: The trend now is not just business intelligence within packaged apps; it’s also in IT organizations and their internal apps.

We have two main audiences, direct and partner. Embedded analytics provides value to both. For our ISV partners, embedding business intelligence and data analytics into their applications offers new revenue streams — and keeps them from losing customers to other apps with this functionality. For companies that use the app internally, it increases users’ participation and adoption rates and keeps users satisfied. Users aren’t going to use an app when they don’t understand the value they’re gaining from it.

How does your solution compare to others on the market?
Ware: There are a lot of things that differentiate Analytics360: The fact that it comes with pre-built content makes it easier to implement and gives a faster return on investment. Also, if you don’t have in-house business intelligence expertise, we have a services team that is well-versed in business analytics and can help you.

Another difference is the fact that it’s built for our database and platform, Progress OpenEdge, so it can extract operational content more quickly and accurately than any other product. Having an integrated extract, transform and load process increases the efficiency quite a bit.

Marriage: And we do have customers using it in real time — controlling manufacturing flows, or tracking assets moving on a map.

Business intelligence has gone from standalone services to becoming embedded in every application. What’s next in its evolution?
Marriage: As the velocity and volume of data continue to increase, it’s going to become more and more unmanageable. We’re going to see a marriage of cognitive applications to analytics as these analytics solutions begin to make more decisions. It might be automatically adjusting prices to maximize profit margins, or making an adjustment to a manufacturing line, or predicting when there’s going to be component failure. Our vision for Progress is that cognitive applications are coming next.

Industry Spotlight: Three disruptions to ALM — and how machine learning could help
Application delivery teams and the application lifecycle management tools they use are reaching a breaking point as they struggle to support continuous delivery of applications with an expanding array of architectures and form factors. According to Ashish Kuthiala, Austin-based senior director for Agile and DevOps portfolio offerings at Hewlett Packard Enterprise Software, three key disruptions are reshaping ALM and testing: DevOps, increasing application complexity, and Cloud and SaaS models. In response, enterprise development’s next wave of productivity will be increasingly automated, collaborative and powered by big data.

SD Times spoke with Kuthiala about these disruptions and how predictive analytics and machine learning will be necessary tools for building quality software from the start.


SD Times: What’s your background and how has that positioned you to see these disruptions?
Kuthiala: I’ve held different roles across the software development value chain — as a developer on early Extreme Programming teams, and in hands-on roles on the operations side of the house. When I found myself under pressure to accelerate this delivery chain due to business urgency and new technology, I could see that seismic changes were needed to keep up.

Since you mentioned Extreme Programming, do you feel that test-first programming and XP ultimately led us to DevOps?
Paradigms such as Extreme Programming were precursors and accelerators to DevOps, but a lot of those methodologies were focused mostly on the dev-test teams. Agile focused on development, testing and users, but it fell short on delivering the value quickly to end users. You’d work fast and hard on smaller deliverables to get them right, but then wait to bundle it all up and throw it over to production teams.

QA has a lot of ingrained processes and systems — and historically, that was the right thing to do. The QA organization’s main charter was not to let shoddy code slip by, so processes, tool sets and teams were built not so much for speed, but more for quality. Now, there’s a lot of pressure on the QA team to re-look at their processes, because quality processes that take long cycles cannot hold up the speed of delivery to the end user — and more importantly, quality cannot be achieved by a siloed team.  Quality assurance needs to be pervasive throughout the software value delivery chain.

So DevOps is the first disruptor.
Today, application design, development and testing happen simultaneously, requiring test creation and execution earlier in the lifecycle, even before coding begins. This puts new pressures on QA to adapt or risk being cut out of the DevOps process.

First, test definition must start with user stories and requirements, before code is written, and be facilitated with proven practices such as Behavior-Driven Development (BDD) and Test-Driven Development (TDD).

Second, testers must skill up to increase their use of automation at every phase: unit, functional, regression, load and security, while using automation best practices to strive for good functional test design and reusability.

Third, testers must align with the teams implementing the Continuous Integration (CI) toolchain so that unit/functional, regression, performance and security testing execute with the continuous build cycle, within a single sprint.

Finally, with the complexity of today’s application landscape, there is always more to test than time allows.  Testers should get comfortable leveraging production analytics to understand how apps are actually being used in the wild and use that insight to focus their test activities.

Then the second disruptor is complexity — but this is where your approach gets interesting, because you advocate using predictive analytics within QA.
When we talk about software complexity, the whole model of how software is now being built and delivered is radically different than it was one, two or even three years ago. Development teams are adopting new architectural models to create and deliver applications in the continuous delivery model. We are also seeing an explosion in the sheer number of distinct platforms and forms such as web, mobile and the Internet of Things from which software is consumed.

These fundamental changes in delivery cadence, software platforms and software architecture are increasing the complexity of lifecycle management, to the point of chaos. Heterogeneous dev processes, apps built with shared services and APIs, widespread open source in code and tools, and the differing protocols and characteristics of IoT and mobile application delivery all challenge app delivery teams. Even the smallest code change has so many ramifications — it’s a network effect.

For example, even changing one line of code can have severe impact beyond the module in which it is contained. How do we analyze and track these impacts? Tapping into and analyzing the vast amount of data that lies across your software development ecosystem can provide you with the insights and alerts you need to make fast and intelligent decisions.

What kinds of predictive analytics and machine learning would you look for?
Today, if I make a change, it may take five hours for me to see if it passes all the quality tests — that’s if you’re doing things really well today. Sometimes, it takes a week or even a month in many organizations. To deliver at the speed at which business wants to move, this is increasingly unacceptable.

If I were to embrace data-based machine learning and analytics driven testing, the metrics I would like as a developer or a tester would be the number of tests I have to run for my code changes — do they seem to go down with each cycle? Do the test cycle times get faster? What is my confidence level in accepting the machine learning recommendations about my tests? Is there learning from each cycle? Perhaps the first time I ran 120 tests, and the next time I only have to run 25 based on the past learning.

Your third disruptor is cloud and SaaS (software-as-a-service). What effect does that have on the lifecycle?
There’s an increasing adoption of infrastructure models that can be instantiated at the click of a button — scalable cloud and SaaS models that are fundamentally changing the way applications are both composed and consumed.

Meanwhile, legacy systems aren’t going away — they have a long half-life. When you start to manage a mix of such models, how do you rapidly provision or consume services from these different platforms? How do you scale up and down based on your needs? How do you test and replicate all the hybrid platforms: Amazon, Azure, on-premise, mobile, web…?

The proven cost savings garnered from moving to the cloud and the elasticity of cloud delivery are enabling teams to rapidly deliver against business requirements and meet unpredictable consumption loads, but there are huge challenges in harnessing these models to your benefit.

What is HPE’s solution to these problems?
We believe that application delivery teams need to prepare for a hybrid-cloud world by investing in skills and tools to test and manage software composed of on-premises and cloud services, and investigate a hybrid-cloud approach to application delivery management as well.

HPE’s ADM software suite supports hybrid cloud delivery with a highly elastic, cost-effective choice of consumption models. First, we provide a choice of on-premises or cloud-based automated lifecycle management; functional, performance and security testing; and the ability to set up a flexible, on-demand test lab in the cloud.

Second, HPE’s ADM suite can rapidly provision and scale all forms of testing globally across on-premises, private and public cloud footprints with your choice of where the application under test, the integrated services and user devices are present—on-premises or in the cloud.

Third, we build in service and network virtualization, which enables continuous development and testing across teams even when services are not ready yet, or in the cloud and difficult if not impossible to access, because global network behavior can create obstacles to quality and performance.

Given these three disruptors, what is a simple yet bold move a development organization can take right now?
Businesses — and therefore their IT colleagues — are under relentless pressure to innovate faster than their customers. This transformation needs to cut across the teams, processes and the tooling underneath it. It cannot be an overnight change; it’s an ongoing journey to continuous improvement. Start by attacking your biggest problem or bottleneck in the system. Be ready to experiment, fail and learn fast. Analyze your data to learn and get better.

Once you solve this problem, you’re going to move on to the next bottleneck, and so on. Having this mindset is what we see in organizations that are very successful.

6 ways platform-as-a-service is giving developers superpowers
We asked developers, CTOs, entrepreneurs and consultants across the country to describe concrete ways in which PaaS has changed their development style.

RELATED CONTENT: Three cloud PaaS trends to watch in a serverless world

1. Reducing headcount
Rob Reagan, CTO of Text Request

At Text Request, we’re also able to reduce headcount using Azure’s PaaS offerings. Without PaaS, we’d have to staff a very senior infrastructure and security expert. It’s pretty rare to find developers who really know how to harden servers. However, our developers are very familiar with hardening an application.

Azure really shines with PaaS, far outpacing Amazon. If you’re looking for IaaS, Amazon leaves Microsoft in the dust.

Note: There is likely a point where the cost curve for PaaS bends backwards. If you’re maintaining a site like Reddit and have a few hundred servers, an infrastructure team is probably cheaper than multiple PaaS services.

With PaaS like Azure Web Apps, I don’t stay awake at night worrying about network-level intrusions. Microsoft’s security experts at their Azure data centers are probably going to do a much better job than our comparatively smaller team.

2. Conserving startup cash flow
Peter Kirwan, CEO of Collexion, Inc.

My latest startup, Collexion, has built its entire product on PaaS. Our core features are built on AWS, but we have gone a lot farther than other companies by committing to develop critical parts of our application architecture around many specialized AWS services.

For example, we use AWS’s CloudSearch to index millions of items to increase performance and take the load off our database. There are other examples, like their AI tools and image recognition, that are pay-per-query via an API, so we use the platform but don’t manage any of the infrastructure. In addition to AWS, we integrate with third-party cloud-based applications through APIs, Zapier and IFTTT.

I made a strong push when founding the company to rent rather than build, using as many PaaS and cloud applications as possible, which not only saves a massive amount of software development but also eliminates the need for 24/7 management of the site in the early stages of the company.

3. Accelerating HMI development
Kim Rowe, CEO and founder of RoweBots Ltd

PaaS allows us to accelerate analytics and human-machine interface (HMI) development, while still having embedded solutions that are secure and precisely meet embedded sensor requirements. For example, we built a concussion sensor demonstration in 30 calendar days with 2.5 developers. This would have been impossible without the Microsoft Azure framework.

The powerful analytics developed by the cloud vendors are readily available for a price, accelerating development by years in some cases, which is certainly a superpower.

A system that would have taken 6-8 months to complete can now be completed in 30 calendar days. An Azure system that will scale to multiple wireless routers and hundreds of end users is underway with an extra month of effort in total.

Our favorite tools are MQTT (a machine-to-machine connectivity protocol for IoT-type publish/subscribe messaging transport) and Azure — and we’re currently looking at Ayla, MediumOne and Watson for other clients.
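For readers unfamiliar with MQTT, a minimal publish/subscribe loop using the open source paho-mqtt package (1.x client API) looks something like the sketch below; the broker address and topic are assumptions.

```python
# Minimal MQTT publish/subscribe sketch using the paho-mqtt 1.x client API.
# Broker address and topic are assumptions for illustration.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"
TOPIC = "sensors/concussion/impact-g"

def on_connect(client, userdata, flags, rc):
    print(f"Connected with result code {rc}")
    client.subscribe(TOPIC)  # listen for our own telemetry as a smoke test

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)

client.publish(TOPIC, payload="9.8")  # one reading, in g-force
client.loop_forever()  # process network traffic and dispatch callbacks
```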

4. Building a DevOps pipeline
Marek Sadowski, IoT advocate

As a Bluemix developer, I can spend more time on the business logic of the application itself. Before developing on Bluemix, a large amount of my time was unfortunately consumed by implementing container fixpacks, upgrades, etc. Now it is all provided for me. I have access to enterprise-grade systems — regardless of whether I’m developing for a large corporation or a startup. Also, all of the configuration and the connectivity to the other elements of the system are elevated now — I use what is provided in the description of the service table.

As an architect, it is very easy to rely on the availability of the system. Simple scaling up (or down) mechanisms take care of the irregularities of traffic to my apps and services. Furthermore, there is no need for system administrators — this role is taken over by Bluemix as well.

If I deploy an application on Bluemix it can be reachable globally, and I can achieve this reach quickly without database administrators, system support teams or hardware engineers.

Finally, there is no need for upfront investment, so startups can now match large enterprises with access to infinite resources — paying for them as they go, starting small and growing with the user base and app usage flexibly and as needed.

Recently, I started to leverage DevOps services on Bluemix to automate deployment from development to test and to production. The production cycles are counted in single weeks instead of months or even quarters. So everything becomes very efficient. The most modern languages (JavaScript, Swift) and standards and concepts (cognitive services, Kubernetes, serverless computing) also become instantly available to my team.

5. Faster prototyping
Hernan Santiesteban, Founder of Great Lakes Development Group

PaaS has definitely changed the way I build software. The ability to quickly get a system up that contains all the necessary tools is a great time-saver. I mainly work with Azure, but the same can be said for most of the cloud services providers. With PaaS tools, you can get a fully functional web application up in just a few minutes. This includes all the basic necessities like a database, web API scaffolding and authentication.

The ease with which you can get a system up makes prototyping a breeze. This gives you the ability to focus on the problem you’re trying to fix. No need to spend valuable time setting up the foundation of a system that may not be in existence for more than a few hours or days.

If you’re running a production application, the ability to automatically scale if your app encounters an unexpected traffic spike can help you rest at night.

However, if you’re just running an app with a small number of users, you have no need to prototype, and you can easily handle all the maintenance yourself, then PaaS may not be the right answer.

6. Microservices architectures
Gal Oppenheimer, Senior product manager for Built.io

A proper, stable PaaS can be a breath of fresh air. When we launched PaaS as a feature in Built.io Backend in October 2013, it enabled both our internal teams and developers. Any developer could now build a fully automated application — frontend, backend and mobile — on their own.

If you factor in the time it takes to set up, secure and scale a server, you could easily bring a three-month project down to 1.5 months or less. For a project with one web developer and one application developer, you can completely forgo a half-time DevOps engineer.

At Built.io, we’re very big fans of Docker and Node.js. Combined, they offer significant simplifications in your server stack and enable cleaner cross-compatibility of code and content by eliminating data transformations between your APIs and server code.

If you’re doing work that benefits from direct resource access (i.e. processing video or graphics), it’s often important to have fine-tuned control of your infrastructure. However, in the modern, microservices approach to development, we’d recommend separating out this feature and either using a third-party service that solves the need or building it from scratch.

Three cloud PaaS trends to watch in a serverless world
The future may be serverless, but for now, commoditized infrastructure is making platform-as-a-service increasingly attractive for startups, enterprises and developer shops. Led by Amazon and Microsoft, vendors such as Salesforce, Google and Oracle are pitching platforms for every development style, architecture, language and use case. And cloud-native programming is even attractive on-premises: a desire for consistent processes and DevOps-style tools is driving Microsoft’s Azure Stack, which works seamlessly in hybrid deployments with various Azure platform services. There’s also a thriving community around Cloud Foundry, an open source PaaS that comes in commercial distros by Pivotal, HPE and IBM.

Open source is holistic
Though Amazon Web Services is usually top of mind for infrastructure, it’s slightly less sought-after on the platform side. Here, Microsoft Azure shines, thanks to years of developer tool expertise — and a well-documented ability to pivot toward any market it initially missed.

But before perusing Azure’s plethora of options, it’s worth taking a closer look at how San Francisco-based Pivotal runs its two core open source projects, Cloud Foundry and Spring Boot.

“Pivotal Cloud Foundry goes the whole way from embedded operating systems — so you don’t have to buy anything from Red Hat ever again, to cloud orchestration — so you don’t need Puppet and Chef, to middleware — so you don’t need IBM WebSphere or Oracle WebLogic, to load balancing and some API services, all the way up to cloud-native frameworks such as Spring Boot, which is the most popular Java framework for cloud apps in the world,” according to James Watters, senior vice president of product at Pivotal, in a January 2017 video interview with Datamation.com.

As Watters sees it, Pivotal’s holistic vision is exemplified by its cloud-native apps consultancy, Pivotal Labs. Ford’s connected car service, for example, chose Cloud Foundry running on multiple clouds and partnered with Pivotal Labs to execute on its apps.

To be sure, all of the current PaaS vendors building on Cloud Foundry, including IBM and HPE, are adding a plethora of features for orchestration, containers, DevOps, testing and management, not to mention more specialized features such as chat bots, AI, blockchain-as-a-service and functions-as-a-service. But one thing no PaaS user should take for granted is the potential for malicious activity.

Security is critical
“The cloud has made delivering software easier but has opened up a huge attack surface. We use AWS serverlessly to protect AWS,” said Matt Fuller, founder of CloudSploit, which provides open source and hosted automated security and configuration monitoring software for AWS cloud environments.

According to Fuller, “Even the most secure cloud providers only offer security of the cloud. The user is responsible for security in the cloud. As groups, roles, devices, etc. change, oversights and misconfigurations open vulnerabilities that lead to outright hacks or just a financial DDoS [distributed denial of service]. Unfortunately, a single misstep can compromise your entire infrastructure.”

CloudSploit monitors your AWS environment for anomalous activity with tests you choose or create. The project is open source (available at https://github.com/cloudsploit), and security experts from around the world contribute tests with the goal of increasing compliance with best practices, protecting both company infrastructure and clients’ information.
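To illustrate the kind of audit Fuller describes, here is a hedged sketch in Node.js against the AWS SDK for JavaScript (aws-sdk v2). It is written in the spirit of a CloudSploit test, not with its actual plugin interface: it flags any EC2 security group that accepts inbound traffic from anywhere.

    // check-open-groups.js: flags EC2 security groups that allow
    // inbound traffic from 0.0.0.0/0. A sketch in the spirit of a
    // CloudSploit test; not its actual plugin interface.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.describeSecurityGroups({}, (err, data) => {
      if (err) return console.error('describeSecurityGroups failed:', err);
      data.SecurityGroups.forEach((group) => {
        group.IpPermissions.forEach((perm) => {
          (perm.IpRanges || []).forEach((range) => {
            if (range.CidrIp === '0.0.0.0/0') {
              const ports = perm.FromPort !== undefined
                ? 'ports ' + perm.FromPort + '-' + perm.ToPort
                : 'all traffic';
              console.log('WARN: ' + group.GroupId + ' opens ' + ports + ' to the world');
            }
          });
        });
      });
    });

A misconfiguration like this is precisely the “security in the cloud” responsibility Fuller assigns to the user rather than the provider.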

Even those who eschew specialized monitoring take confidence in the fact that a core benefit of PaaS is not having to patch the underlying frameworks and operating system. According to Omar Khan, Redmond-based general manager for Microsoft cloud app development and tools, “Developers spend a lot of time, especially in a DevOps world, making sure that the components that their code is running on are updated to avoid any vulnerabilities. PaaS eliminates a lot of that, because the patching is done automatically, and that’s a huge time savings.”

The shift to DevOps culture has also taken effect, Khan explained: “Cloud is enabling DevOps more and more. And we’re seeing developers bringing security into the lifecycle through ‘rugged DevOps’ or ‘shift-left’ of the scanning within the development process — not having to wait to do that stuff once in production.”

Low-code PaaS gains traction
As PaaS gains in popularity, the panoply of flavors increases. In addition to iPaaS (integration PaaS) and PaaS for testing and QA, there are low-code options available. In September 2016, Oracle launched Project Visual Code, a low-code platform for business users and developers to extend services and build new applications with little to no coding.

Low-code platforms are emerging around specific niches, such as UK-based Naqoda’s recently launched Core Banking Platform and its existing Tax Engine. The cloud-enabled system supports European open banking under the Payment Services Directive 2 (PSD2), which mandates financial information sharing and APIs for new financial products.

QuickBase is a veteran player in the space and has been collecting metrics on low-code speed gains. Last fall, the company’s “2016 State of Citizen Development” report found that among respondents, a majority said they were able to deliver apps in less than a month. In contrast, for delivering traditional hand-coded apps, two-thirds of developers reported requiring over two months, and nearly one-third required over six.

For some, no-code is a game-changer: “Because all of our applications are produced on a no-code platform as a service, we are able to staff our team with individuals who are less experienced and/or less technical than traditional development shops,” said Treff LaPlante, CEO and founder of CitizenDeveloper.com and WorkXpress in Harrisburg, PA.

“The results have been astounding. We have reduced the average hours to deliver a project from beginning to end to only 273. On this platform we have materially grown our business year over year and are now able to pursue new markets,” he said.

When PaaS isn’t the answer
Of course, PaaS isn’t a panacea. Kim Rowe, CEO and founder of Toronto-based RoweBots Ltd., does custom embedded and Internet of Things development with PaaS, but notes that embedded PaaS offerings are weak in one way or another. Like any good coder, Rowe solved the problem by building his own PaaS. Unison RTOS tackles what he calls the seven key characteristics (lean, adaptable, secure, safe, connected, complete and cloud) required to build quality embedded systems. Perhaps an eighth key is cost.

“For example, a concussion-detection system we created needs servers running in the cloud. Even if it may not be used for a significant portion of the time, we’re still charged for hosting. Figuring out cloud billing needs to be built into the design. It is one thing if it is a mine collecting data 24/7/365, and another if it is a ball team that uses the sensors two hours per day, four times per week during the school year,” Rowe said.
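Rowe’s point about designing billing into the system is easy to quantify. Here is a back-of-the-envelope sketch using hypothetical prices (real cloud rates vary widely): an always-on server versus metered compute for the two usage patterns he describes.

    // Hypothetical prices, for illustration only:
    // $0.05/hour for an always-on server; $0.0004/minute metered.
    const alwaysOnPerYear = 0.05 * 24 * 365;   // mine: collecting data 24/7/365
    const ballTeamMinutes = 2 * 60 * 4 * 36;   // 2 hr/day, 4 days/wk, ~36-week school year
    const meteredPerYear = ballTeamMinutes * 0.0004;

    console.log('Always-on server: $' + alwaysOnPerYear.toFixed(2) + '/yr'); // ~$438
    console.log('Metered compute:  $' + meteredPerYear.toFixed(2) + '/yr');  // ~$6.91

Two orders of magnitude separate the workloads, which is exactly why cloud billing belongs in the design conversation.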

Adam Stern, founder and CEO of Infinitely Virtual, a cloud service provider, is not a fan of using PaaS to develop for external customers.

“PaaS is ideal for companies writing applications that are specific to their business. PaaS makes it possible, even easy, to develop applications rapidly with little technical know-how — applications that aren’t intended to be sold but that run on a single, captive platform,” Stern said. “When it comes to creating an app for customers, however, it’s a different story. If the platform for which the app was written changes or ceases to exist, you’re stuck.”

The danger, as Stern sees it, is too much ease-of-use: “PaaS does tend to put internal development teams on the IT rollercoaster, forever investing and reinvesting in platform-specific application development.”

Finally, all that convenience doesn’t always come cheap, in terms of either freedom or finances. “We like Amazon Web Services quite a bit, so let’s pick on them. Their DynamoDB (on-demand database) service is great, but after using it for a few months, it becomes quite an undertaking to port it to a different platform,” said Scott Williams, director of software at Tallwave.
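A hedged sketch of why that port is such an undertaking: the data-access code ends up speaking DynamoDB’s own dialect. The example below uses the AWS SDK for JavaScript (aws-sdk v2); the table and key names are hypothetical.

    // DynamoDB-specific key schemas, conditional writes and expression
    // syntax permeate every call site, so a migration is a rewrite,
    // not a connection-string change.
    const AWS = require('aws-sdk');
    const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

    db.put({
      TableName: 'Orders',
      Item: { orderId: 'o-123', total: 42.5 },
      ConditionExpression: 'attribute_not_exists(orderId)', // DynamoDB dialect
    }, (err) => {
      if (err) return console.error('put failed:', err);
      db.get({ TableName: 'Orders', Key: { orderId: 'o-123' } },
        (e, data) => console.log(e || data.Item));
    });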

“As Fred Brooks says, there are no silver bullets; PaaS systems do tend to be more expensive, and that cost can go up significantly. It’s easy to throw a switch, quadruple your processing capabilities for a spike, and then pass out when the invoice arrives,” Williams said.

Could serverless be the next Docker?
In 2014, Amazon unveiled its Lambda functions, and since then there’s been a flurry of new serverless offerings.

Along with Iopipe.com and Apex, there’s Serverless Inc., the company behind the actively managed, MIT-licensed open-source project of the same name. Together they form a new ecosystem of tools to manage, version and test serverless functions, especially Lambda functions. And similar — but by no means identical — compute services are evolving, including Microsoft Azure Functions, IBM Bluemix OpenWhisk and Google Cloud Functions. Finally, you know it’s a trend when a conference appears: On cue, check out Serverlessconf in Austin this April.

What all these serverless function tools have in common is the ability to execute standalone commands in languages such as JavaScript, Python, C# or Java on cloud infrastructure, with pricing based on requests, duration and memory. In his forthcoming book Serverless Architectures on AWS (Manning, in press), Peter Sbarski, VP of engineering at A Cloud Guru, defines five principles of serverless architectures (a minimal function sketch follows the list):

  1. “Execute code on demand.”
  2. “Write single-purpose stateless functions.”
  3. “Design push-based, event-driven pipelines.”
  4. “Create thicker, more powerful front ends.”
  5. “Embrace third-party services.”
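As promised, here is a minimal sketch of the first two principles: a single-purpose, stateless function that executes only on demand. It targets AWS Lambda’s Node.js runtime in the callback style of the era; the event shape is a hypothetical trigger payload.

    // greet.js: a single-purpose, stateless Lambda function.
    // No state survives between invocations; everything the function
    // needs arrives in the event it is invoked with.
    exports.handler = (event, context, callback) => {
      const name = (event && event.name) || 'world';
      callback(null, { message: 'Hello, ' + name + '!' });
    };

Billing follows the same shape: you pay per invocation for the milliseconds and memory the function actually uses.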

Indeed, Andreessen Horowitz partner Peter Levine believes PaaS and the centralized mentality of the cloud will be supplanted by edge devices communicating with each other. That’s not inconceivable, according to Microsoft.

“Moving from a server-based deployment to a container-based deployment really increases agility around being able to update and deliver value faster. When you look at serverless, it continues that trend,” said Microsoft’s Khan.

“Serverless enables you to architect code that is very much a microservices pattern by nature, because each function is its own thing. Serverless enables microservices at a smaller granularity than even containers, as an example. And when you get more granular microservices, then you think, well, some of these microservices run in the cloud and that’s the right place for that code to execute, but why wouldn’t these microservices run at the edge as well? That’s a trend that is very interesting,” he said.

Learning to failover
https://sdtimes.com/cd/learning-to-failover/ (Mon, 03 Apr 2017)
Blame the cloud, DevOps, consumer demand or continuous delivery. No matter the reason, a wide variety of applications are now aiming for high availability (HA) — and increasingly, that overlaps with planning for disaster recovery. Too many software organizations not only lack tools that can help, but also fail to test their disaster recovery plans until it’s too late.

Brian Bowman, senior principal product manager for Progress® OpenEdge®, has plenty of experience in improving availability. A 20-year veteran of Progress, he’s performed database tuning and disaster planning for customers of all sizes around the world. According to Bowman, in addition to some process-oriented changes, new aspects of the Progress OpenEdge 11.7 application development platform, including OpenEdge Replication Target Synchronization, can help you succeed — at failover.

What change are you seeing among your customers in terms of the need for high availability applications?
What we’re finding is that always-on, in the 24x7x365 environment, depending on the vertical or business, is not just seen as potentially possible — it’s expected.

But that’s an ops function, isn’t it?
Historically, the people in an IT organization who maintain and restore a system aren’t the same people who develop the application. Today, those two groups are working together. High availability has always been about making sure the app is up and running. If the database is up, but users can’t use it, that’s not good enough.

Anecdotally, we have customers who have a team to develop, then turn to operations people to deploy it, but then the app doesn’t work. They go back to the developers and say it’s not working. Dev says, “It worked on our end.” As the saying goes, “If it compiles, ship it.”

Continuous delivery is in part driving this. Some of our customers are making changes daily to their applications. It becomes important for them from a continuous integration/delivery (CI/CD) standpoint to be able to tie the changes they make to what’s being deployed.

Is there a new way to deal with this challenge?
From the Progress OpenEdge standpoint, we’re attacking it from two places, maintainability and disaster recovery, with two features, online index activation and replication target synchronization.

From the maintainability perspective, continuous delivery requires that the application has to be continuously running. Starting in OpenEdge 11.7, we’re helping our customers maintain that system in a near 24x7x365 environment.

Part of it is updating the application on the fly. You also want to make it so those changes can be accessed and used immediately, not put off until the weekend when you have a two-hour outage so you can reboot.

Online index activation lets you add a new index to the schema on an OpenEdge database without shutting down. If you have dynamic queries, they can make immediate use of new indexes.

So are most HA concerns database-oriented?
This is an industry-wide shift in disaster recovery. Everyone is focused on data and databases, but we look at the whole application as a holistic solution. It is much bigger than the database. Online index activation is about making everything available to users.

Is there a competing standalone solution that does something like online index activation?
Really, no. The competition is accepting that you need to shut down – or just taking an outage – to do the maintenance.

And what about disaster recovery?
The last thing you want is to have two disasters back-to-back.

In disaster recovery/HA, one of the rules is that you eliminate your single points of failure.

Where we really see this applying is in cloud-based and SaaS apps. If I’m a consumer using an app and it goes down, if it doesn’t come back in, say, 2-8 seconds, I’m going someplace else.

OpenEdge Replication Target Synchronization means that in the event of a disaster, there is no single point of failure. It’s a three-pronged failover approach using two backup databases that automatically serve as the sources when needed, to keep the system up.

One of our customers is a wholesale distribution ERP running a new SaaS business in the cloud. This is the technology they need to deliver to their customers, and it needs to be available all the time.

Not only that, this solution goes towards maintaining the app as well. You can configure automatic or manual transitions for unplanned or planned downtime. It serves both purposes, by allowing you to move from a production to a maintenance machine.

Software teams often struggle with basic failover and recovery. You see systems in place, but they end up not working at the critical moment. What kinds of mistakes do you see in disaster planning?
The single biggest mistake people make is not testing their plan.

I was visiting a company that had prepared disaster recovery plans. A storm was coming through. They had a power outage, and they were very proud of their computing center, so we went into their computer room to see if failover was happening correctly. As we’re looking at the computers through the window, they realized their entry security systems — the card readers to get into the computer room — were not on backup power. Their backup generators were designed to only run for 60 minutes. So we watched their systems one by one go down as they ran out of power.

It was tough to just stand there and watch them shut down. In disaster recovery or business continuity, you’ve got to test all the parts.

But the biggest challenge organizations have is simply declaring the disaster. If they’re confident in their ability to fail over, they should act quickly to meet business requirements such as service level agreements.

How do you make that decision, then?
There are two metrics in disaster recovery: Recovery Time Objective (RTO) and Recovery Point Objective (RPO). RTO is how fast do I have to have the application back to availability? RPO is how much data can I afford to lose?

Think of a manufacturing line, which often has an RTO of 30 minutes: past that, the manufacturing stops. But think about a financial system: its RPO is that it can’t afford to lose any data. Those metrics drive their business.

I was on a customer site at a mortgage company where we were implementing a DR solution. We were in the final process when they had a system outage — a disk drive failed. They had a meeting and brought in all the executives. The solution was very simple: They just had to go to the backup system. The executives argued for an hour.

It was very interesting for me to stand there and watch that debate, knowing full well it would work.

It’s amazing how much a disaster helps a disaster recovery program.

CI/CD emphasizes automation because software is being delivered so quickly, but should automation be automatic?
Often we hear people asking, “Can I automate this whole thing so that it ‘automagically’ fails over?”

I’m in the European office today meeting with Tech Support and was talking to them about Replication Target Synchronization. They were very skeptical. They said, “You mean I can automate this process so that in the failover it brings the other systems up and running and transitions to those other systems?” But then they realized there’s also an option for human intervention. That’s an advantage we bring to the table: You can configure it both ways, automatically or manually.

You can configure your system and failover environment so that if the network is down for 10 minutes, you automatically fail over. But if you know the network is only going to be down 15 minutes, should you fail over? You can decide “Yes, we should” or “No, we shouldn’t.”
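A generic sketch of that policy, expressed in JavaScript rather than anything OpenEdge-specific: fail over automatically past a threshold, but leave room for a human override. The threshold and override values are hypothetical.

    // Fail over automatically after a threshold, unless a human
    // has explicitly decided to hold or to force the transition.
    const AUTO_FAILOVER_AFTER_MS = 10 * 60 * 1000; // 10 minutes

    function decideFailover(downSinceMs, operatorOverride) {
      if (operatorOverride === 'hold') return false; // human: wait it out
      if (operatorOverride === 'go') return true;    // human: transition now
      return Date.now() - downSinceMs >= AUTO_FAILOVER_AFTER_MS;
    }

    // Network down for 12 minutes, no operator decision yet: fail over.
    console.log(decideFailover(Date.now() - 12 * 60 * 1000, null)); // true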

How do you get developer buy-in to HA and DR?
They need to buy into the complete process. What does business continuity mean to their application? They need to think about it when they start building the app. We have one customer who has a team that works through the whole disaster recovery process as they’re building the product and the features. This is when it’s the most successful.

11 reasons why Android is winning
https://sdtimes.com/11-reasons-android-winning/ (Thu, 30 Mar 2017)
You know the smartphone has supplanted every other consumer technology when all anyone really wants in a car now is a “smartphone on wheels.” In a world where most smartphone users have Android-based models, Google is aiming to reach the next billion users coming online — with Android as the nexus of activity.

Whether it’s as a Google Home oracle/assistant, Android Auto smart car integration, TensorFlow machine learning or Daydream virtual reality, the Internet search behemoth now aims to become the search engine for your life. Add to that a serious focus on developer tooling and solutions such as Firebase and Android Studio 2.3, and it’s clear that Google is ramping its current ubiquity up to a whole new level. Here are 11 reasons why Android isn’t just for phones anymore.

  1. College students learn it

“We’re finally seeing a moment in technology where a groundswell of students graduating from college have been trained on Android—thanks mostly to its open-source nature. As a result, the industry at large is now better suited to do high-quality Android development,” said David Evans, CTO of Uncorked Studios and a former instructor in mobile development at the University of Pittsburgh.

“While the iPhone is still the fashion mobile device, what you’re seeing is that the development tide is turning toward Android. Google has been doing a great job with focusing their platform and developers on this, and taking it as seriously as their developer community is taking it,” he said.

  2. Device makers like it

Prior to launching Bugsee, a “black box” tool for debugging Android apps, CEO Alex Fishman helped design a smart camera at Lytro (which exited consumer photography in 2016 and now focuses on virtual reality).

“We built it with Android because it was a phenomenally powerful OS, far better than anything else out there. We were also constrained because Lytro was running on a Qualcomm solution. If Qualcomm agrees to sell you their chips, which is a big ‘if’, their software stack requires Android. If you take a Qualcomm solution, then say, ‘Let’s go to Taiwan to build this IoT device,’ they’ll say, ‘Our engineers are only trained in Qualcomm with Android.’ If you are trying to build a specialty device, Android is almost the de facto standard,” Fishman said.

  3. Branded mobile wallets are possible

Google has been banging the drum about Android Pay (formerly Google Wallet) for developers. The Android Pay API provides Java methods for the Android Pay buy button, encrypted shipping and payment, user information and making transactions. However, some find that this isn’t sufficiently flexible for their needs. Luckily for them (and, arguably, for hackers), Android’s open model offers another option.

“We at Fiserv want to build bank-branded mobile wallets. None of the three proprietary solutions — Apple Pay, Android Pay and Samsung Pay — have an open API at the moment. All are branded by the parent company, and the APIs are pretty minimal. We don’t like that fact,” said Scott Hess, vice president of user experience, consulting and innovation, at Fiserv in Portland, OR.

The solution? While Apple keeps credit card information in a secure element that only the operating system can get at, “Android has Host Card Emulation, which gives developers who are certified the ability to store things on the device and leverage them to do mobile point-of-sale transactions. We’re actively exploring building HCE mobile wallets for domestic and international partners,” he said, noting that mobile payments are quite common in the UK, Australia and New Zealand, where Fiserv, a leading financial technology company, is partially based.

  4. Google Home shows promise

As creepy as the commercial for Google Home shown at last year’s I/O conference was, it’s undeniable that consumers have gotten comfortable speaking the words “OK Google” to their devices. This technology is unlikely to die an ignominious, Google Glass-style death — but it could herald the end of the nuclear family. Kidding aside, it’s a promising interface for finance, according to Fiserv: “We’re doing investigation around home banking using Google’s equivalent of Amazon Echo. You’d speak into Google Home and get your balance, for example. It’s based on Chrome OS, but it’s a similar development environment to Android phone,” Hess said.

  5. Machine learning works with it

Where Google Home could really shine is in the company’s machine learning leadership. Now, Google has open-sourced its newest machine learning library, TensorFlow, which can run on mobile. The TensorFlow repo on GitHub contains Android examples demonstrating how an app can use the camera to identify objects, among other tasks.

  6. Automakers have embraced it

With the exception of Toyota, which has invested years in refining its clunky in-car navigation system and tying it to specific vehicle functions like door locks and lights, most car manufacturers are finally willing to accept Google’s market leadership with Android Auto. Now that consumers increasingly want their cars to act like giant versions of the smartphones they’ve grown addicted to, Android has an enviable position. Further, Google’s testing in this space is admirable. Finally, there’s the company’s foresight in mapping and autonomous driving technology. All these together mean Android Auto, launched in 2014 with the Open Auto Alliance, is a powerful turnkey opportunity for developers.

But so long as people are driving themselves, apps will continue to be a major distraction, creating cognitive load and increasing risk to the driver, passengers and other cars. It’s critical that Android create a new driving framework to diminish the addictive distractions and interactions that are otherwise cornerstones of smartphone app development.

At Google I/O 2016, Jeff Bush, engineering manager on the Android Auto team, promised that Android Auto would soon get Waze, OK Google and wireless capability. But the canniest thing Google has done is allow Android Auto to function as a smartphone app for older cars.

In a market that’s likely more vigorous than wearables, Android Auto will let developers participate in a new ecosystem — without causing accidents. There are different rules, of course: One requirement is that apps for Android Auto work seamlessly with input from capacitive touch screens, resistive touch screens and rotary controls.

  7. Virtual reality is trending

While from the outside the goggles look like a health and safety risk, there’s no arguing that the world of VR is intoxicating once your eyeballs can soak it in. Developed for Nougat, the seventh big iteration of the Android mobile OS, Daydream is a VR platform with both hardware and software specs. Google has taken care to design a seamless, immersive experience from the moment the goggles are on: The home screen is itself VR, setting the stage for selecting games or apps using a handheld 3D controller.

  8. Hackers love it

The bad news about Android? It remains a growing vector for malware and malicious activity. The good news is, as with most software security issues, much protective technology is there, and developers — and consumers — just have to use it. The biggest protections come from better coding practices and layered security protocols, which are increasingly available at the device level.

“A lot of the biggest banks, as well as big tech shops like us in banking, are acknowledging that people are getting tired of passwords. No level of password is enough. That’s why we’re looking at biometrics: fingerprints or facial recognition. We’re also leveraging geolocation — where the user would be doing mobile banking. And we’re leveraging partner technologies to check the device to know if it has been rooted or if it has malware on it. Those layered security tools are in production and universal today,” said Hess.

  9. Native user experience is a must

Google continues to make ground-breaking strides towards unified web experiences, be it via progressive web apps or instant apps. Both of these techniques, while promising, are still in early days.

“We at one point were trying to minimize development effort by using non-native, hybrid apps. We were using PhoneGap. We found that end users could see the difference between native and non-native — buttons go in different places, design patterns differ, you get bad ratings. We moved back to mostly native to get a better user experience. Maybe the approach going forward is, for the features I only use once a year, like changing my address, those may be OK to do in a hybrid or progressive web app fashion.”

“We looked at progressive web apps. While it does promise to reduce development efforts, I wouldn’t redirect efforts to it yet. Also, the app stores started saying, ‘If you use too much HTML in your app, we’re going to start rejecting it.’ They never said how much HTML that was,” Hess said.

  10. The tools are improving

“In the Android space, the device manufacturers are all over the place. We have 2000 banks and credit unions running our mobile banking platform, so we do 6000 Android releases a year, certifying all the phones, versions and operating systems,” said Hess.

With that complexity showing no signs of diminishing, it’s critical that development environments rise to the challenge. Built on IntelliJ, an open source Java IDE, Android Studio does advanced code completion, refactoring and code analysis. Android Studio 2.3 added improvements such as Instant Run, layout editor changes, WebP image format support, App Links Assistant and lint baseline mode. Google boasted at last year’s I/O conference that 92% of the top 125 Android games use Android Studio. But there are other efforts afoot to give Google a Microsoft-like developer stack, with Firebase.

  11. Cloud platforms run on it

Described at Google I/O 2016 as “the most comprehensive developer update we have ever made,” the acquisition and expansion of the Firebase backend-as-a-service into a unified app platform for Android, iOS and mobile web development could turn into something great — though not everyone is blown away by the new platform.

“There are a lot of competitors to Firebase. On the show floor at GDC [Game Developers Conference], there were a lot of people launching platforms for doing these things,” said Patric Palm, CEO of Favro and Hansoft, Swedish makers of software project-management tools. Favro, especially, has become popular with game makers using the popular Unity platform.

“Unity is a good example of someone who really played the platform business right. They won. In many ways it’s more interesting what Unity did than what Google is doing. My feeling with Google is that they don’t have that loyalty to a technology that a smaller company like Unity would have. The problem with Google is, if they aren’t successful in a space, they have a tendency to just leave it and move on to the next thing,” said Palm.

iOS-first no more
Ultimately, the Android world promises to reach consumers beyond Apple’s walled gardens.

“When a company, five or six years ago, was trying to build a general app, they’d do iOS first, then go and clone it, in a very broad sense of ‘clone,’ for Android phone. That iPhone-first trend is now changing. Android is slightly easier, and there are so many developers, tools and communities to help with existing frameworks. It’s also very much proliferated in the world, with all the different phone makers. Today, it might be a better business decision to go with Android first,” said Fishman.

How UML makes a DevOps-driven digital transformation possible
https://sdtimes.com/sparx-systems/uml-makes-devops-driven-digital-transformation-possible/ (Wed, 01 Mar 2017)
What’s the best way to build complex, software-intensive systems? A powerful approach, according to the Australian software vendor Sparx Systems, places visual modeling tools at the hub of a DevOps-style operation. The company’s flagship modeling platform, Enterprise Architect, was commercially released in 2000 and continues to rise to the challenge of faster software delivery.

We spoke to Geoffrey Sparks, CEO and founder of Sparx Systems in Creswick, Australia, about how UML-based graphical modeling tools assist in enterprise digital transformation.

SD Times: What is the Sparx approach to DevOps? What problems do you hope to solve?
Sparks: Digital transformation at the enterprise level can only be achieved when it is underpinned by a solid platform to deliver and support new applications, services and technologies. Through transparent collaboration, Enterprise Architect encourages increased productivity between development (Dev) and IT operations (Ops) by eliminating functional silos.

Enterprise Architect has been built as a team-based visual modeling platform, an approach that continues for groups of people working on the same projects, sharing information, ideas and models. We like to say Enterprise Architect was enabling DevOps before the term was coined.

Collaboration lies at the heart of the Sparx Systems support for DevOps, which in turn improves the software development life cycle through automation and best practices. Prior to DevOps, project-delivery cycles lacked transparency, and what progress visibility managers had was gathered from assumptions rather than from objective data. Within the integrated Enterprise Architect project workspace, artifacts can be viewed and updated with version control, code review and Continuous Integration tools. This is the level of functionality that defines Enterprise Architect as a leading DevOps solution.

What are the advantages of a single repository?
Using a single platform helps to eliminate problems that arise from using a complicated DevOps tool chain. Everything you need is available at your fingertips, with the added advantage of traceability, integrated communication tools, model-driven design, and powerful visualization tools to improve understanding. The addition of cloud technologies and a shared repository allows team members to contribute to the DevOps process from anywhere in the world. Enterprise Architect has been designed from the ground up to improve communication, enhance collaboration, and facilitate information exchange between all stakeholders and team members. Enterprise Architect also provides tools for resource allocation, task tracking, project management and team communication, to ensure everyone can contribute to and measure the success of a project.

How prevalent is UML in organizations that have a successful DevOps culture? How does UML assist in automating aspects of the delivery pipeline?
UML is the established standard for software modeling—anyone sketching a simple use case is modeling in UML. A substantial body of online research testifies to UML’s positive effect on software quality. Within those organizations undergoing digital transformation to improve operational efficiencies (many of whom are Sparx Systems customers), there is a prevalence of UML tools deployed to manage application delivery, a process that relies implicitly on code quality assurance.

Within the digital transformation imperative, there is a natural tension that exists in the flux between legacy systems, which are “keeping the lights on,” and integration of new technologies.

Visualizing the software legacy, taking control of software development for evolving systems structure, navigating through complex systems, and selective exposure of detail can all be achieved using UML. The adoption of standards-based UML tools is a major step toward technical debt reduction and associated quality improvement, as they provide the templates and frameworks to reduce risk and increase efficiency, while supporting collaboration between stakeholders across the enterprise.

How does Enterprise Architect assist with code generation, version control, code review and Continuous Integration (among other activities)?
Enterprise Architect includes a number of design tools to create models of software, automated code generation, reverse-engineering of source code, binaries and schemas, and tools to synchronize source code with the design models representing the software system. The programming code can be viewed and edited directly in the integrated code editors within Enterprise Architect, which provide intelligent code completion and other features to aid in coding.

Another compelling aspect of the environment is the ability to trace the implementation Classes back to design elements and architecture, and then back to the requirements and constraints and other specifications, and ultimately back to stakeholders and their goals and visions.

From an engineering and quality perspective, the most compelling advantage of this approach is that the UML models and therefore the architecture and design are synchronized with the programming code. An unbroken traceable path can be created from the goals, business drivers and the stakeholder’s requirements, right through to methods in the programming code.

Enterprise Architect’s Visual Execution Analysis tool set vastly improves the accuracy of Continuous Integration by allowing the developer to debug and profile the running code in comparison with the model. Automated reports and model generation, based on code sequences and code timing, afford a greater understanding of how the code is operating in real time. Time Aware modeling allows the Dev team to then plan the next steps for the project iteration without touching the current working model. Different design scenarios can be planned out, signed off by the stakeholders, and then promoted into the current working model.

How is requirements definition and management a key to effective (and speedy) DevOps in software-defined businesses?
Effective gathering of requirements and their accuracy can dictate the future success of any software development project. Enterprise Architect has the capability to source requirements in a variety of forms, native to the methods of each stakeholder’s role.

Enterprise Architect provides complete traceability, from strategic modeling to implementation, architectural definition, deployment and maintenance. Cloud-based repositories, discussions, impact-analysis tools, auditing, reporting and a host of other capabilities make Enterprise Architect the ideal platform for storing and working with requirements.

In addition, the Specification Manager is a document-like interface that provides an easy and familiar environment for creating and editing requirements without needing to use diagramming or visual tools. This is particularly beneficial for the provision of inputs gathered from non-technical yet essential elements of the enterprise.

What sort of software delivery pipeline productivity metrics can be derived from artifacts contained in Enterprise Architect? Can a “last mile” deployment problem be anticipated in UML design upfront?
As a team-based modeling tool, Enterprise Architect has a lot of features built in that facilitate correct software/process at the time of deployment:

Automatic Test Generation from the model provides a set of test scripts, test sets, test cases and test steps that testers can use to exercise the system through conceptual passes before deployment, effectively allowing a user to implement Test-Driven Design.

Because the project can be managed within Enterprise Architect’s extensive project-management features (like Gantt charts, resource allocations, Kanban, workflow engine, etc), “last mile” problems, like dependent parts of the system not being ready on time, become evident early in development.

For process development, Enterprise Architect offers simulation, so users can explore and identify optimal “what if” scenarios prior to real-world implementation, saving them time and money.

Finally, Enterprise Architect is an end-to-end tool where everything from the initial mind-maps and strategic decisions can be traced to delivery, through requirements, use cases, process diagrams, down to the deliverables, code, business rules, etc. You can use the project itself to make sure you are delivering what has been asked for.

Tools like the Traceability Matrix and Gap Analysis tools make it very easy to spot holes in requirements and the downstream activities that will be deployed to realize these requirements. It also shows if extra functionality has been introduced to the project that was never asked for in the first place, with the goal of reducing scope creep.

Can iterative design and development at the project level end up transforming a company?
Definitely. From its inception, Enterprise Architect has been designed to be used as a “master plan” for enterprise continuous improvement. Once the enterprise has been modeled based on the collection of requirements, further refinement may include simulation, testing, scenario analysis and much more. Furthermore, a subset of the enterprise, such as a single business process, can also be captured, modeled, analyzed and improved upon… possibly even automated into software. Once the repository is developed, it can be used to manage the generational evolution of any business unit.
