chatgpt Archives - SD Times https://sdtimes.com/tag/chatgpt/

Google removes waitlist for Bard, highlights recent and upcoming improvements https://sdtimes.com/ai/google-removes-waitlist-for-bard-highlights-recent-and-upcoming-improvements/ (Fri, 12 May 2023)

Earlier this year, Google announced Bard, a generative AI solution meant to compete with OpenAI’s ChatGPT. Previously the only way to use Bard was to get on the waitlist, but now the company is announcing that it is removing that waitlist and opening Bard up to all.

With this announcement, Bard will be available in 180 countries and territories, and more will be added. 

Google also revealed that Bard now supports Japanese and Korean. Soon it will support 40 different languages.

RELATED CONTENT: Google announces updates to Android, Google Cloud, Workspaces, Google Play, and more at Google I/O

Since its initial launch, Google has also made some improvements to Bard, such as changing the large language model (LLM) to PaLM 2, which enables Bard to have more advanced math, reasoning, and coding skills. 

An upcoming update will add visuals to Bard. For example, the prompt “What are some must-see sights in New Orleans?” will provide images along with text. 

In addition to responses containing images, prompts will also be able to use images, with Google Lens being used to analyze photos. For example, you could upload a photo of your dog and ask Bard to write a funny caption for it, and it will analyze the photo, detect your dog’s breed, and write a few captions. 

Google is also improving Bard's coding capabilities, with new features like better citations that can be clicked through to see the source, dark mode, and an export button so that code can be run in Replit.

It is also adding an export function to Gmail and Google Docs. “For example, let’s say — like me — you’re a die-hard pickleball fan. You can ask Bard to write an email invitation for your new pickleball league, summarizing the rules of the game and highlighting its inclusivity of all ages and levels. Just click the ‘draft in Gmail’ button so you can make those final tweaks before getting your pickleball league off the ground,” Sissie Hsiao, vice president and general manager for Google Assistant and Bard, wrote in a blog post.

In the next few months, Google also has planned integrations with Adobe’s suite of products. It will integrate with Adobe Firefly, which is a set of generative AI models for image creation, and the results can be exported to Adobe Express. Other upcoming partners include Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram, and Khan Academy.

“There’s a lot ahead for Bard — connecting tools from Google and amazing services across the web, to help you do and create anything you can imagine, through a fluid collaboration with our most capable large language models,” Hsiao wrote.

Getting ready for the generative AI wave https://sdtimes.com/ai/getting-ready-for-the-generative-ai-wave/ (Thu, 27 Apr 2023)

Even as late as December of last year, few were aware of generative AI. Then ChatGPT popped up, and Microsoft started putting it in everything, including its developer tools. Now it's the hottest thing on the market. It is still immature, but it works well enough that people are finding it surprisingly useful. This is very different from what happened with earlier products like the Apple Newton and Microsoft Bob, both of which were released well before the underlying technology had cooked enough for the general market.

Generative AI is a new way for people to interface with their technology, but it has some shortcomings. 

Let’s talk about this from a developer’s standpoint, and about why, once generative AI becomes commonplace, we’ll likely see a very different group of leading companies, much as we did with the introduction of the Web.

Generative AI’s promise

The promise of generative AI is that you can use your natural, spoken language to ask the computer to do something and the computer will automatically do it. In Microsoft Office, the initial implementation is very sub-product-centric. For instance, you can ask Word to create a document to your specifications, but you’ll have to go to PowerPoint or Excel if you want the tool to create a blended document. I expect the next generation of this Microsoft offering will bridge those apps and other products, allowing you to create more complex documents just by supplying the information the AI asks for to strengthen the piece. 

This is going to make for a difficult evolution for firms whose apps don't currently integrate well, because users will want one interface, not multiple AIs that each require a different command language or use different language models. 

The generative AI problem

While developing your own generative AI may help, long-term integration with the platform's generative AI will quickly become a differentiator tied to user satisfaction and retention. I point that out because users who get frustrated working with multiple generative AI platforms will likely begin preferring products that interoperate and integrate with a major generative AI solution, so they don't have to train and learn multiple generative AI offerings.

In short, one of the bigger problems is integrating the app with the generative AI most likely to be found on its platform. Neither Apple nor Google has a mature generative AI model yet, and neither company is as good as Microsoft at bringing partners on board to make up for its lack of a generative AI solution. 

Assuring quality

The other big trend in generative AI is putting the technology into development tools so that the AI becomes a coding accelerator. But with code, errors tend to proliferate. While this initial instance of generative AI is very fast, it's anything but infallible: its error-checking capability is still very young and often makes mistakes. If you don't want a lot of mistakes, the focus needs to be on quality over quantity, which means coders who use generative AI need to focus more on quality than they currently do. You'll be training the tool while you use it, and if you train it to make a mistake, that mistake has the potential to proliferate and create additional problems. So, when using development tools that make use of generative AI, the massive increase in speed needs to be tempered with an increased focus on quality. Otherwise, your quality is likely to degrade badly over time.

Wrapping up

Generative AI is a game changer. It allows people to increasingly interact with their smartphones, PCs, apps, and cloud services as if they were people. To make this work optimally, applications will need to integrate under a generative AI umbrella so that the user only needs to make a request and the relevant app(s) are launched to complete it. With its announcements of generative AI for its developer tools and Office, Microsoft is arguably the farthest along this path, but these are still early days, and that leadership is likely to shift in the future.

The path to success will be to adopt an existing generative AI tool tactically while creating the hooks to better integrate your app with the platform's most likely generative AI solution, so that users can dictate once and the AI will move between tools to complete the task. We're far from that point now, but that gives you time to figure out how to address it.

In short, we are at the front end of a massive generative AI change. Make your related decisions carefully, because you want to be left standing when this AI trend reaches critical mass and users move away from products that haven't embraced it, much as they did with GUIs and the Web. 

Google outlines four principles for responsible AI https://sdtimes.com/ai/google-outlines-four-principles-for-responsible-ai/ (Wed, 29 Mar 2023)

With all the uptake of AI technology like GPT over the past several months, many are thinking about the ethical responsibilities of AI development.

According to Google, responsible AI means not just avoiding risks, but also finding ways to improve people’s lives and address social and scientific problems, as these new technologies have applications in predicting disasters, improving medicine, precision agriculture, and more. 

“We recognize that cutting-edge AI developments are emergent technologies — that learning how to assess their risks and capabilities goes well beyond mechanically programming rules into the realm of training models and assessing outcomes,” Kent Walker, president of global affairs for Google and Alphabet, wrote in a blog post.

Google has four AI principles that it believes are crucial to successful AI responsibility. 

First, there needs to be education and training so that teams working with these technologies understand how the principles apply to their work. 

Second, there need to be tools, techniques, and infrastructure accessible to these teams that can be used to implement the principles.

Third, there also needs to be oversight through processes like risk assessment frameworks, ethics reviews, and executive accountability. 

Fourth, partnerships should be in place so that external perspectives can be brought in to share insights and responsible practices. 

“There are reasons for us as a society to be optimistic that thoughtful approaches and new ideas from across the AI ecosystem will help us navigate the transition, find collective solutions and maximize AI’s amazing potential,” Walker wrote. “But it will take the proverbial village — collaboration and deep engagement from all of us — to get this right.”

According to Google, two strong examples of responsible AI frameworks are the U.S. National Institute of Standards and Technology AI Risk Management Framework and the OECD’s AI Principles and AI Policy Observatory. “Developed through open and collaborative processes, they provide clear guidelines that can adapt to new AI applications, risks and developments,” Walker wrote.  

Google isn’t the only one concerned over responsible AI development. Recently, Elon Musk, Steve Wozniak, Andrew Yang, and other prominent figures signed an open letter imploring tech companies to pause development on AI systems until “we are confident that their effects will be positive and their risks will be manageable.” The specific ask was that AI labs pause development for at least six months on any system more powerful than GPT-4. 

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” the letter states.

What’s new in generative AI: GPT-4 | ChatGPT conversation history bug | ChatGPT plugins https://sdtimes.com/ai/whats-new-in-generative-ai-gpt-4-chatgpt-conversation-history-bug-chatgpt-plugins/ (Wed, 29 Mar 2023)

Since our last roundup, lots of new things have been happening around GPT and ChatGPT, and in particular OpenAI, the creator of the technology, has unveiled many new offerings. 

Here are some of the highlights surrounding these new AI technologies from the past few weeks.

GPT-4 launches

Perhaps the biggest news was OpenAI unveiling GPT-4, which included significant improvements over GPT-3.5. For example, GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%. 

GPT-4 can accept images as well as text as input. An example OpenAI shared is a user providing a photo of a phone with a VGA cable plugged into it instead of a normal charging cable and asking what is funny about the photo.

The response: “A smartphone with a VGA connector (a large, blue, 15-pin connector typically used for computer monitors) plugged into its charging port … The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.”

Subscribers of ChatGPT Plus can use GPT-4 through chat.openai.com, currently with a usage cap that OpenAI will continue to adjust based on demand. The company says that eventually it will also offer GPT-4 queries to users who don’t have a paid subscription. 

Glitch in ChatGPT gives others access to conversation histories; fix applied

This first came to light when a Reddit user posted a screenshot of their ChatGPT window that showed conversations they’d never had.  

OpenAI CEO Sam Altman confirmed the issue in a tweet and said it was fixed. “We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. a small percentage of users were able to see the titles of other users’ conversation history,” Altman said. “We feel awful about this.”

Altman also explained that ChatGPT conversation history was unavailable from 4 AM EST to 1 PM EST on March 22 while the issue was being fixed. 

GitHub Copilot X brings Copilot to command line, pull requests, and docs

GitHub Copilot X utilizes the new GPT-4 model and is a major upgrade to the Copilot product, adding new areas where it can be used and introducing chat and voice capabilities. 

The new chat capabilities are intended to provide a “ChatGPT-like experience” in the editor. It natively integrates into VS Code and Visual Studio and can recognize code that has been typed and what error messages are shown, enabling it to provide analysis on what the code blocks are intended to do, generate unit tests, and provide bug fixes. 

There is also a voice component to this that will enable developers to give prompts by speaking. 

Another part of Copilot X is that it will be able to generate tags and descriptions for pull requests. The company is also working on a feature where it will warn developers if a pull request doesn’t have sufficient testing and then suggest potential tests. 

Copilot is also being integrated into documentation, with a chat interface that will allow developers to ask questions about the languages, frameworks, and technologies their code is using. GitHub has already created this functionality for React, Azure Docs, and MDN documentation, and plans to bring it to internal documentation as well.  

WolframAlpha adds ChatGPT integration

WolframAlpha is a search engine for computations, and now ChatGPT users will be able to access its functionality through ChatGPT. 

Stephen Wolfram, the creator of WolframAlpha, first talked about the possibility of connecting the two technologies back in January, and the two companies have been working together since then to make it happen. 

“Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things,” Wolfram wrote in a blog post.

OpenAI adds other plugins for ChatGPT

The company has started to roll out an initial set of plugins to a small group of users as it tests the functionality. 

The starting set includes plugins from Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, and Zapier, as well as the Wolfram plugin mentioned above. 

OpenAI also created two of its own plugins: a web browser and a code interpreter. The company has also open-sourced code for a retrieval plugin that searches for information within documents, which developers can self-host.

The ChatGPT API is a wakeup call to use these 5 key performance metrics https://sdtimes.com/ai/the-chat-gpt-api-is-a-wakeup-call-to-use-these-5-key-performance-metrics/ (Tue, 28 Mar 2023)

The release of the ChatGPT and Whisper APIs this month sparked a frenzy of creative activity among developers, allowing many companies to build generative AI capabilities into their apps for the first time. Numerous businesses have rushed to add generative AI features to their products, including Salesforce, HubSpot, ThoughtSpot, Grammarly, and others.

This provides a great way for businesses to differentiate their software – but only if the user experience is a good one. The generative AI function is at the core of the new offerings, but also critical is the overall end user experience with the app. If you throw a quickly developed new app into the hands of your users, how can you be sure the performance will be optimal?

For a reminder of the negative impact a poor user experience can have, just recall the problems Ticketmaster had in November when Taylor Swift’s concert tickets went on sale. Fans of the pop idol (including me!) swamped the Ticketmaster website and brought the system to its knees, resulting in frustrated customers and a wave of bad publicity for the company.

Ticketmaster may operate on a whole different scale than your new generative AI feature, but the principle remains the same: If you’re releasing something people are likely to get excited about – which hopefully applies to everything you build – the user experience is critical. You need to be ready for a spike in demand if it proves popular, and be able to track performance in real time to ensure the experience is a positive one.

Fortunately, developers can mitigate these problems with the right planning and practices in place. There are key performance metrics that developer teams should monitor at all times, especially in times of peak traffic, to avoid application crashes, long wait times — and burn-out for the teams who have to fix these problems when they arise.

Here are the five performance metrics developers should track for their applications. Whether it’s a brand new fancy AI product, an updated website or just a core application for your customers, these metrics are always important:

Web Vitals:

The Largest Contentful Paint, or LCP, measures the load speed of a webpage. A fast LCP is a clear sign that a customer’s experience is optimal as they toggle between pages. In tandem, the Cumulative Layout Shift, or CLS, scores how much a user experiences unexpected layout shifts. This plays out on busy shopping days: typically, these pages have ad and sales notifications that affect a brand’s main page layout, potentially causing shoppers to experience unexpected shifts. This can reduce their ability to shop and becomes a hindrance if a developer cannot quickly access and address the issue.
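For teams instrumenting this themselves, a minimal sketch of collecting LCP and CLS in the browser might look like the following, assuming Google's open-source web-vitals library (its v3 onLCP/onCLS API) and a placeholder /vitals collection endpoint:

```typescript
// Collect Core Web Vitals in the browser and ship them to a monitoring endpoint.
// Assumes the web-vitals v3 API (onLCP/onCLS); '/vitals' is a placeholder endpoint.
import { onLCP, onCLS, Metric } from 'web-vitals';

function reportVital(metric: Metric): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon keeps working during page unload, which is when CLS is usually finalized.
  navigator.sendBeacon('/vitals', body);
}

onLCP(reportVital); // load speed of the largest visible element
onCLS(reportVital); // cumulative score of unexpected layout shifts
```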

Error Count vs. Error Rate:

Error counts typically increase alongside site traffic as more people flock to a website — that’s a natural occurrence. The error rate is more telling because it reveals whether a greater proportion of users are experiencing issues with your application or website. Developers who see an increase in this metric should investigate and take action.
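As a toy illustration of why the rate is the better signal (all names and numbers below are made up), twice the errors at twice the traffic is still the same rate:

```typescript
// Error count vs. error rate: the count grows with traffic, the rate shows whether
// a larger share of requests is failing. All names and numbers here are illustrative.
interface TrafficWindow {
  requests: number;
  errors: number;
}

function errorRate(stats: TrafficWindow): number {
  return stats.requests === 0 ? 0 : stats.errors / stats.requests;
}

const quietHour: TrafficWindow = { requests: 100_000, errors: 1_000 };
const peakHour: TrafficWindow = { requests: 200_000, errors: 2_000 };

// Twice the errors, but the same 1% rate: likely just more traffic, not a regression.
console.log(errorRate(quietHour), errorRate(peakHour)); // 0.01 0.01
```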

Mobile Monitoring:

More and more consumers are using their mobile devices to shop and manage all aspects of their lives. For example, mobile accounted for 45% of sales during the holiday shopping period last year from October to December. Mobile performance metrics are critical to understanding what is happening on your users’ devices during these busy times. Vitals like frozen and slow frames, or cold and warm starts when an app opens, provide visibility into how fast views are loading for your users. As became obvious with the Taylor Swift incident, a slow shopping experience can lead to an outcry and a lot of upset customers.

Outside of vitals related to mobile performance, it’s important to monitor mobile changes through application release health. Developers can see this in the rate of crash-free sessions and crash-free users as traffic to an application increases – which helps to identify abnormalities in the overall health of the app.

Slow Database and HTTP Ops:

During sustained traffic spikes, database queries and HTTP requests that take too long to execute harm the user experience. Hitting a slow checkout process, or seeing the dreaded spinning beach ball after adding a product to the cart, confuses shoppers. They don’t know if they should refresh the page, and often they will abandon a purchase completely.

What’s more, if a developer is working on a backend framework, slow database and HTTP ops could also be a sign of an N+1 query problem. At that point, an application is making database queries in a loop and causing performance issues for the purchaser.
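To make the N+1 pattern concrete, here is an illustrative sketch using a hypothetical db.query helper rather than any particular ORM; the first function issues one query per cart item, while the second fetches the same data in a single round trip:

```typescript
// A sketch of the N+1 pattern described above, using a hypothetical `db.query`
// helper (not a real ORM API): one query for the cart, then one more per item.
type Row = Record<string, unknown>;
declare const db: { query: (sql: string, params?: unknown[]) => Promise<Row[]> };

// N+1: 1 query for the cart + N queries for products, one per loop iteration.
async function cartWithProductsSlow(cartId: string): Promise<Row[]> {
  const items = await db.query('SELECT product_id FROM cart_items WHERE cart_id = ?', [cartId]);
  const products: Row[] = [];
  for (const item of items) {
    products.push(...(await db.query('SELECT * FROM products WHERE id = ?', [item.product_id])));
  }
  return products;
}

// Fix: a single joined query, independent of cart size.
async function cartWithProductsFast(cartId: string): Promise<Row[]> {
  return db.query(
    `SELECT p.* FROM products p
     JOIN cart_items c ON c.product_id = p.id
     WHERE c.cart_id = ?`,
    [cartId],
  );
}
```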

User Misery:

A homegrown metric we use at Sentry, tracking User Misery helps developers understand a customer’s experience with an application. It is a ratio of unique users who have experienced load times at 4x a configured threshold in a Sentry project, and thus serves as a proxy for customer frustration. With this User Misery score, our developers can see which transactions have the highest negative impact on users and prioritize fixing them.
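A simplified sketch of that ratio, based only on the description above (Sentry's production formula is more involved), could look like this:

```typescript
// A simplified proxy for User Misery as described in the text: the share of unique
// users whose load times exceeded 4x a configured threshold. Illustrative only.
interface Transaction {
  userId: string;
  durationMs: number;
}

function userMisery(transactions: Transaction[], thresholdMs: number): number {
  const allUsers = new Set<string>();
  const miserableUsers = new Set<string>();
  for (const t of transactions) {
    allUsers.add(t.userId);
    if (t.durationMs > thresholdMs * 4) miserableUsers.add(t.userId);
  }
  return allUsers.size === 0 ? 0 : miserableUsers.size / allUsers.size;
}

// e.g. with a 300 ms threshold, anything slower than 1,200 ms marks a user as "miserable".
```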

Providing a smooth end-user experience is table stakes for businesses throughout the year, but especially during high-pressure, culturally-driven spending moments. In any of these scenarios, developers behind the scenes often work long and odd hours to address performance issues and ensure the most seamless experience. You can’t prepare for everything that can possibly go wrong, but you can be equipped with the right tools and metrics to quickly identify and remediate any problems that occur.

The latest in generative AI: OpenAI releases API | Bing Chat lets you change tone | Elon Musk wants to create his own generative AI https://sdtimes.com/ai/the-latest-in-generative-ai-openai-releases-api-bing-chat-lets-you-change-tone-elon-musk-wants-to-create-his-own-generative-ai/ (Wed, 08 Mar 2023)

ChatGPT, and other generative AIs, have continued to be the talk of the development community over the last several weeks. 

A number of things have happened with OpenAI’s ChatGPT, including a new API and more reactions stemming from interactions with Bing Search. 

Here is a breakdown of things you may have missed in the last few weeks:

OpenAI releases API for ChatGPT

With this new API, developers will be able to integrate ChatGPT into their own products. 

The ChatGPT API uses the same model as the web version, gpt-3.5-turbo. The company has made some improvements to the model recently that have made it 10x cheaper to run. Currently it costs $0.002 per 1,000 tokens. 

The API can be used to build applications that can do things like draft an email, answer questions about a set of documents, create conversational agents, tutor in a range of subjects, and more.

“We believe that AI can provide incredible opportunities and economic empowerment to everyone, and the best way to achieve that is to allow everyone to build with it. We hope that the changes we announced today will lead to numerous applications that everyone can benefit from,” OpenAI wrote in a blog post.

Microsoft Bing Chat now lets you change its behavior

When the new Bing Chat first launched several weeks ago, it didn’t go as smoothly as planned. Some users were getting truly wild responses from the chatbot, such as NY Times reporter Kevin Roose, who had a conversation with the chat persona named Sydney, which emerges once you start to have a longer conversation.  

“As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead,” Roose wrote in an article for the NY Times.

Kevin Scott, chief technology officer at Microsoft, said that the conversation was part of the learning process and that the reason it went so off the rails was that with AI models “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”

In the weeks since, Microsoft has continued to make tweaks to the AI, possibly even killing the Sydney persona.

Users now have the option to select between three tones the chat function will take. The options are creative, which is imaginative in its responses; balanced, where answers are reasonable and coherent, blending together creative and precise modes; and precise, which is concise in its responses and focuses more on giving relevant and factual information. By default the chat is set to balanced. 

Elon Musk wants to build a ChatGPT alternative

Musk, one of the founders of OpenAI who is no longer actively involved in the organization after resigning from its board in 2018, has been criticizing ChatGPT for being too “woke.” 

OpenAI has safeguards built in to make ChatGPT “refuse inappropriate requests” or prevent it from outputting harmful information.  

The Information reported that Musk has approached AI researchers about building a new research lab to build a rival AI chatbot that would have fewer restrictions. One of the researchers he has tried to recruit is Igor Babuschkin, who previously worked for DeepMind and OpenAI. 

Salesforce creates its own generative AI

Einstein GPT will connect data from Salesforce Data Cloud with OpenAI’s models to generate content that adapts to continuously changing customer information. 

According to Salesforce, Einstein GPT can be used by salespeople to generate emails to send to customers, by customer service professionals to provide quicker responses, by marketers to generate targeted content, and by developers to automatically generate code. 

In addition to Salesforce Cloud, Einstein GPT will integrate with solutions like Tableau, MuleSoft, and Slack. 

“We’re excited to apply the power of OpenAI’s technology to CRM,” said Sam Altman, CEO of OpenAI. “This will allow more people to benefit from this technology, and it allows us to learn more about real-world usage, which is critical to the responsible development and deployment of AI — a belief that Salesforce shares with us.” 

The company also announced a $250 million Generative AI Fund through its investment arm Salesforce Ventures. The fund will be used to “bolster the startup ecosystem and spark the development of responsible generative AI,” according to Salesforce. 

GitHub Copilot for Business is now available

The new subscription, which is available for $19 per user per month, uses a more advanced OpenAI model and includes new capabilities to improve code suggestions. For example, a new Fill-In-the-Middle (FIM) paradigm was added to improve the quality of prompts by utilizing known code suffixes and leaving a gap in the middle for GitHub Copilot to fill. Previously the AI would only consider the prefix of the code in prompts.   
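Conceptually, a fill-in-the-middle prompt packages both the code before and after the gap. The sketch below is purely illustrative; the sentinel markers are invented and are not GitHub's or OpenAI's actual token format:

```typescript
// Purely conceptual: a fill-in-the-middle prompt carries the code before the cursor
// (prefix) and after it (suffix) so the model can fill the gap. The <fim_*> markers
// are invented for illustration and are not an actual Copilot/OpenAI format.
interface FimContext {
  prefix: string;
  suffix: string;
}

function buildFimPrompt({ prefix, suffix }: FimContext): string {
  return `<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
}

const prompt = buildFimPrompt({
  prefix: 'function median(xs: number[]): number {\n  const sorted = [...xs].sort((a, b) => a - b);\n',
  suffix: '\n}\n',
});
console.log(prompt);
// A prefix-only prompt would ignore everything after the cursor; including the suffix
// lets the completion be conditioned on the closing context as well.
```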

It also includes new vulnerability filtering to block insecure code suggestions like hardcoded credentials, SQL injections, and path injections. 

“In the coming years, we will integrate AI into every aspect of the developer experience—from coding to the pull request to code deployments—so developers can build their best in a world where all organizations will be more dependent on their success than ever. GitHub Copilot for Business is the first stride in this future, a future that will push the boundaries for all developers,” Thomas Dohmke, CEO of GitHub, wrote in a blog post.

OpenAI announces collaboration with Presto

Presto is a company that provides AI assistants for drive-thrus, and by collaborating with OpenAI, they believe they will be able to make their voice assistants “more natural and human-like.”

According to Presto, their assistants integrate with restaurant menus and provide options for item combos, coupons, price variations, and seasonal items. It also learns as more customers use it, enabling it to incorporate new accents, alternative terms, and unique customer queries. 

ChatGPT will be used to create restaurant and region-specific knowledge bases; create test guest orders to represent different tones, personas, and order types; and make the responses to customer queries sound more natural. 

“We are thrilled about our collaboration with OpenAI since it will enable us to accelerate product innovation and further our mission of overlaying next-generation digital solutions onto the physical world,” said Rajat Suri, founder and CEO of Presto. “Both ChatGPT and Presto Voice represent cutting edge AI applications that can supercharge productivity and revolutionize the way humans work and think.”

OpenAI releases API for ChatGPT and Whisper https://sdtimes.com/ai/openai-releases-api-for-chatgpt-and-whisper/ (Thu, 02 Mar 2023)

Since OpenAI released ChatGPT in November 2022, developers have been speculating about when an API for the tool would be available. 

Today, the company announced APIs for both ChatGPT and Whisper, which is a speech recognition system that was trained on 680,000 hours of voice data. This means that developers can now integrate these solutions into their own products. 

“We believe that AI can provide incredible opportunities and economic empowerment to everyone, and the best way to achieve that is to allow everyone to build with it. We hope that the changes we announced today will lead to numerous applications that everyone can benefit from,” OpenAI wrote in a blog post.

The ChatGPT API uses the same model as the web version, gpt-3.5-turbo. The company has made some improvements to the model recently that have made it 10x cheaper to run. Currently it costs $0.002 per 1,000 tokens. 

The API can be used to build applications that can do things like draft an email, answer questions about a set of documents, create conversational agents, tutor in a range of subjects, and more.
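As a rough, hedged illustration, a minimal call to the documented chat completions endpoint from Node 18+ might look like the sketch below. The helper name and prompts are made up, and OPENAI_API_KEY is assumed to be set in the environment:

```typescript
// A sketch of calling the chat completions REST endpoint with Node 18+'s built-in
// fetch. draftEmail and the prompts are illustrative; OPENAI_API_KEY is assumed set.
async function draftEmail(topic: string): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You draft short, polite business emails.' },
        { role: 'user', content: `Draft an email about: ${topic}` },
      ],
    }),
  });
  const data = await res.json();
  // At $0.002 per 1,000 tokens, the reported usage gives a rough cost estimate.
  const cost = (data.usage.total_tokens / 1000) * 0.002;
  console.log(`~$${cost.toFixed(4)} for this call`);
  return data.choices[0].message.content;
}

draftEmail('rescheduling our Tuesday sync').then(console.log);
```

The usage field returned by the endpoint makes it straightforward to watch per-call cost at that $0.002 per 1,000 token rate.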

The Whisper API costs $0.006 per minute and is available through the transcriptions and translations endpoints. Transcriptions transcribes audio in the source language, while translations transcribes it into English. 

The API accepts a number of different formats, including M4A, MP3, MP4, MPEG, MPGA, WAV, and WEBM. 
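A sketch of uploading one of those files to the transcriptions endpoint from Node 18+ (swap the URL for /v1/audio/translations to get English output; OPENAI_API_KEY is again assumed to be set) could look like this:

```typescript
// A sketch of uploading an audio file to the Whisper transcriptions endpoint using
// Node 18+'s built-in fetch, FormData, and Blob. OPENAI_API_KEY is assumed set.
import { readFile } from 'node:fs/promises';

async function transcribe(path: string): Promise<string> {
  const form = new FormData();
  form.append('model', 'whisper-1');
  // The filename hint helps the API detect the container format (MP3 here).
  form.append('file', new Blob([await readFile(path)]), 'audio.mp3');

  const res = await fetch('https://api.openai.com/v1/audio/transcriptions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form,
  });
  const data = await res.json();
  return data.text; // the documented response carries the transcript in its `text` field
}

transcribe('./meeting.mp3').then(console.log);
```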

According to OpenAI, data submitted through the API won’t be used to improve the service or train models unless you specifically opt in. 

Data can be retained for 30 days by default, but this can be extended for companies that need longer retention. 

OpenAI also stated that one of its main goals going forward is to make ChatGPT more stable since it has had issues with downtime since it was first released. “We know that ensuring AI benefits all of humanity requires being a reliable service provider. Please hold us accountable for improved uptime over the upcoming months,” the company wrote. 

A number of companies have already started using the API within their apps. Examples include:

  • Snapchat, which now includes a feature called My AI for Snapchat+ that is powered by the chatbot
  • Quizlet, which now includes an AI tutor that adapts questions to students based on their study materials
  • Instacart, which now enables customers to ask food-related questions and get answers that may incorporate items that can be shopped for

ChatGPT this week: ChatGPT + Bing | Google’s AI attempt doesn’t go as planned | Using ChatGPT in technical interviews? https://sdtimes.com/ai/chatgpt-this-week-chatgpt-bing-googles-ai-attempt-doesnt-go-as-planned-using-chatgpt-in-technical-interviews/ (Fri, 10 Feb 2023)

You may be familiar with the quote “software is eating the world,” but as of late it seems more like “ChatGPT is eating the world.” Ever since its public debut, it’s dominated front pages of the news, sparked many conversations about how AI will shape the future, and if you look at the trending page of GitHub at any given moment, about half of the projects will be related to the tool. 

It’s a lot to take in and keep on top of, so here’s a review of ChatGPT-related news from the past week.

Microsoft rolls out ChatGPT-enabled version of Bing

Perhaps the biggest story of the past week was that Microsoft has officially incorporated ChatGPT into its search engine Bing. 

This comes shortly after the company made a large multi-billion dollar investment in OpenAI, the company behind ChatGPT.

Microsoft says that integrating ChatGPT into Bing will help provide better search results, more complete answers, a new chat experience, and the ability to generate content. 

Search powered by ChatGPT will surface relevant information like sports scores, stock prices, and weather, and summarize search results to provide comprehensive answers to complex queries. For example, you would be able to ask how to substitute eggs in a recipe and get instructions on how to do so without having to search through multiple results yourself. 

Just like with ChatGPT, you can also converse with Bing in a new chat experience that allows you to keep refining your search until you are able to get the result you need. 

Google announces a new experimental conversational AI service

While not technically ChatGPT, Google also plans to incorporate more AI into Google Search. It announced Bard, a conversational AI service based on the LaMDA model. 

Bard is intended to foster the combination of knowledge with the power, intelligence, and creativity of Google’s language models. The AI service utilizes information from the web to offer users high-quality responses.

The team stated that Bard is initially being released with Google’s lightweight model version of LaMDA, which calls for less computing power so it can be scaled to a larger user base. 

The announcement wasn’t without flaws; an ad for Bard from Google’s Twitter account included an incorrect answer about the James Webb Space Telescope, which Reuters was first to point out. According to Reuters, after this incident Alphabet (Google’s parent company) lost $100 billion in market value, with trading volumes that day about three times the 50-day moving average. 

Technical interviewing firm Karat now allows ChatGPT in interviews

The company had already allowed candidates to use tools like Google and Stack Overflow during the interview. 

According to Karat, there are still rules on what you can use these tools for. In a blog post, the company stated that reference materials like these can be used to answer syntax questions, look up language details, and interpret error output from compilers. 

“As working developers, we’ve all done those things; so why should we have to struggle without basic resources during an interview? But we ask that they not look for a full solution to the problem, or copy and paste code from elsewhere directly into their solution,” Jason Wodicka, principal developer advocate at Karat, wrote in a blog post.

In the blog post, Wodicka also raised the question of whether it’s even a good idea to use ChatGPT in an interview. He explained that ChatGPT has no idea whether the response it gives you is correct, which could be problematic in an interview. 

“If you ask it how to reverse a string in Ruby, it might provide a correct answer,” he said. “But it might also grab the method that you’d use in Javascript and seamlessly adjust the syntax around it to look more Ruby-like. In my experience using GPT-like tools, I’ve seen both of these scenarios happen. When you’re trying to answer a question, do you really want a guess from ChatGPT, when you can probably find more definitive documentation by using a search engine?”

Cybercriminals hacking ChatGPT to have it generate malicious responses

Researchers at security firm Check Point discovered an instance where attackers had used ChatGPT to alter the code of an Infostealer malware from 2019. 

They have also found hackers who are finding workarounds to ChatGPT’s restrictions on producing harmful content. 

On how ChatGPT handles potentially harmful inputs, OpenAI says: “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.”

According to Check Point, these hackers have discovered how to bypass these safeguards to allow it to create malicious content, like phishing emails and malware code. They say this is done by creating Telegram bots that use OpenAI’s API. 

“In conclusion, we see cybercriminals continue to explore how to utilize ChatGPT for their needs of malware development and phishing emails creation. As the controls ChatGPT implement improve, cybercriminals find new abusive ways to use OpenAI models – this time abusing their API,” Check Point concluded in its blog post. 

The Pentagon uses ChatGPT to write press release

This is according to a report by Motherboard, which linked to the article in question. The press release announced a new task force focused on countering the threats of Unmanned Aerial Systems.

The press release included the following disclaimer at the top: “The article that follows was generated by OpenAI’s ChatGPT. No endorsement is intended. The use of AI to generate this story emphasizes U.S. Army Central’s commitment to using emerging technologies and innovation in a challenging and ever-changing operational environment.” 

ChatGPT used by judge in court case

Vice also reported that Judge Juan Manuel Padilla Garcia used ChatGPT to help make a legal decision in a court case in Colombia. It is believed to be the first use of AI in a court case, and using AI in court decisions is allowed in Colombia. 

According to Garcia, ChatGPT was used in this case to reduce the time spent drafting judgments, and he included ChatGPT’s full responses in the decision. 

Microsoft rolls out ChatGPT-enabled version of Bing https://sdtimes.com/microsoft/microsoft-rolls-out-chatgpt-enabled-version-of-bing/ (Wed, 08 Feb 2023)

While it has been speculated for a few weeks that Microsoft had plans to integrate ChatGPT into its search engine Bing, the company has finally made it official.

This comes shortly after the company made a large multi-billion dollar investment in OpenAI, the company behind ChatGPT. Search rival Google also just announced its own conversational AI service, called Bard, which it plans to integrate into Google Search. 

Microsoft says that integrating ChatGPT into Bing will help provide better search results, more complete answers, a new chat experience, and the ability to generate content. 

According to a blog post written by Yusuf Mehdi, corporate vice president and consumer chief marketing officer at Microsoft, 10 billion searches are made per day, with roughly half of them going unanswered because the questions are too complex. 

Search powered by ChatGPT will surface relevant information like sports scores, stock prices, and weather, and summarize search results to provide comprehensive answers to complex queries. For example, you would be able to ask how to substitute eggs in a recipe and get instructions on how to do so without having to search through multiple results yourself. 

Just like with ChatGPT, you can also converse with Bing in a new chat experience that allows you to keep refining your search until you are able to get the result you need. 

Content generation capabilities will help you with things like writing emails, creating an itinerary for a vacation, or preparing for a job interview. 

ChatGPT is also being incorporated into Microsoft Edge itself, with two new capabilities available now. The first is the ability to ask for a summary of information on a page, with key takeaways. The example the company gave is summarizing a long financial report and then using the chat function to ask for a comparison with another company’s financials. 

In addition to ChatGPT, the new Bing is also powered by the Microsoft Prometheus model, a proprietary way of interacting with OpenAI’s language models; the application of that model to the core search algorithm itself; and a new user experience that reimagines how one interacts with search, browser, and chat.

To address concerns around ethical AI, the company also stated that it has been working with OpenAI to develop safeguards to protect users from harmful content. 

“Our teams are working to address issues such as misinformation and disinformation, content blocking, data safety and preventing the promotion of harmful or discriminatory content in line with our AI principles. The work we are doing with OpenAI builds on our company’s yearslong effort to ensure that our AI systems are responsible by design. We will continue to apply the full strength of our responsible AI ecosystem – including researchers, engineers and policy experts – to develop new approaches to mitigate risk,” Mehdi wrote. 

Currently the ChatGPT-enabled Bing can be tested as a limited preview on desktop. There is a waitlist to join, but Microsoft plans to scale the preview up to millions of users in the next few weeks. It is also working to add a mobile preview.

Software intelligence is key to creating better applications https://sdtimes.com/software-development/software-intelligence-is-key-to-creating-better-applications/ (Mon, 06 Feb 2023)

Development teams are always on a mission to create better quality software, be more efficient, and please their users as much as possible.

The introduction of AI into the development pipeline makes this possible, from software intelligence to AI-assisted development tools. Both can work hand in hand to reach the same goal, but there’s a difference between software intelligence and intelligent software.

AI-assisted development tools are products that use AI to do things like suggest code, automate documentation, or generally increase productivity. Vincent Delaroche, founder and CEO of CAST, defines software intelligence as tools that analyze code to give you visibility into it, so you can understand how the individual components work together and identify bugs or vulnerabilities. 

So while these intelligent software tools help you write better code, the software intelligence tools sift through that code and make sure it is as high quality as possible, and make recommendations on how to get to that point. 

“Custom software is seen as a big complex black box that very few people understand clearly,  including the subject matter experts of a given system,” said Delaroche. “When you have tens of millions of lines of code, which represent tens of thousands of individual components which all interact between each other, there is no one on the planet who can claim to be able to understand and be able to control everything in such a complex piece of technology.”

Similarly, even the smartest developer doesn’t know every possible option available to them when writing code. That’s where AI-assisted development comes in, because these tools can suggest the best possible piece of code for the application. 

For example, a developer could provide a piece of code to ChatGPT and ask it for better ways of writing the code. 

According to Diego Lo Giudice, principal analyst at Forrester, Amazon DevOps Guru serves a similar purpose on the configuration side. It uses AI to detect possible operational issues and can be used to configure your pipelines better.

Lo Giudice explained that quality issues aren’t always the result of bad code; sometimes the systems around the software are not configured correctly and that can result in issues too, and these tools can help identify those problem configurations. 

George Apostolopoulos, head of analytics at Endor Labs, further explained the capabilities of software intelligence tools as being able to perform simple rules checks, provide counts and basic statistics like averages, and do more complex statistical analysis such as distributions, outliers and anomalies. 

Software intelligence is crucial if you’re working with dependencies

Software intelligence plays a big role not only in quality but in security as well, solving a number of challenges with open source software (OSS) dependencies. 

These tools can help by evaluating the security practices behind a dependency’s development and checking the dependency’s code for vulnerable or malicious code. They use global data to identify things like typosquatting and dependency confusion attacks.
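One simple way such a check can work is to flag candidate names that sit within a small edit distance of a well-known package. The sketch below is illustrative only, with a made-up list of popular names standing in for that global data:

```typescript
// Illustrative typosquat check: compare a candidate package name against a list of
// popular names using Levenshtein edit distance. The popular list is a stand-in.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const popular = ['lodash', 'express', 'react', 'requests'];

function possibleTyposquat(candidate: string): string | undefined {
  // A near-miss of a popular name (distance 1-2) that isn't the name itself is suspicious.
  return popular.find((p) => p !== candidate && editDistance(p, candidate) <= 2);
}

console.log(possibleTyposquat('lodahs')); // "lodash"
```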

According to Apostolopoulos, there are a number of things that can go amiss when adding in new dependencies, updating old ones, or just changing code around. 

“In the last few years a number of attacks exposed the potential of the software supply chain for being a very effective attack vector with tremendous force multiplying effects,” said Apostolopoulos. “As a result, a new problem is to ensure that a dependency we want to introduce is not malicious, or a new version of an existing dependency does not become malicious (because its code or maintainer were compromised) or the developer does not fall victim to attacks targeting the development process like typosquatting or dependency confusion.”

When introducing new dependencies, there are a number of questions the developer needs to answer, starting with which piece of code will actually solve their problem. Software intelligence tools come into play here by recommending candidates based on a number of criteria, such as popularity, activity, amount of support, and history of vulnerabilities.

Then, to actually introduce this code, more questions pop up. “The dependency tree of a modestly complex piece of software will be very large,” Apostolopoulos noted. “Developers need to answer questions like: do I depend on a particular dependency? What is the potentially long chain of transitive dependencies that brings it in? In how many places in my code do I need it?” 

It is also possible in large codebases to be left with unused and out-of-date dependencies as code changes. “In a large codebase these are hard to find by reviewing the code, but after constructing an accurate and up to date dependency graph and call graph these can be automatically identified,” Apostolopoulos said. “Some developers may be comfortable with tools automatically generating pull requests that recommend changes to their code to fix issues and in this case, software intelligence can automatically create pull requests with the proposed actions.” 
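To make the idea concrete, the sketch below builds a tiny hand-written dependency graph and answers one of those questions automatically: which declared direct dependencies are never reached from the packages the code actually uses. Real tools derive the graph from manifests and call graphs rather than writing it by hand:

```typescript
// A sketch of answering "which declared dependencies are unused?" from a dependency
// graph. The graph and the usedDirectly list are hand-written, hypothetical inputs.
type DepGraph = Map<string, string[]>; // package -> packages it depends on

function reachable(graph: DepGraph, roots: string[]): Set<string> {
  const seen = new Set<string>();
  const stack = [...roots];
  while (stack.length > 0) {
    const pkg = stack.pop()!;
    if (seen.has(pkg)) continue;
    seen.add(pkg);
    stack.push(...(graph.get(pkg) ?? []));
  }
  return seen;
}

const graph: DepGraph = new Map([
  ['app', ['http-lib', 'left-pad']], // declared direct dependencies
  ['http-lib', ['tls-lib']],
  ['left-pad', []],
  ['tls-lib', []],
]);

// Packages actually referenced from the app's call graph (hypothetical input).
const usedDirectly = ['http-lib'];

const needed = reachable(graph, usedDirectly);
const declared = graph.get('app') ?? [];
const unused = declared.filter((d) => !needed.has(d));
console.log(unused); // ["left-pad"] -- declared but never reached from used code
```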

Having a tool that automatically provides you with this visibility can really reduce the mental effort required by developers to maintain their software. 

The software landscape is a “huge mess”

Delaroche said that many CIOs and CTOs may not be willing to publicly admit this, but the portfolio of software assets that run the world, that exist in the largest corporations, are becoming a huge mess. 

“It’s becoming less and less easy to control and to master and to manage and to evolve on,” said Delaroche. “Lots of CIOs and CTOs are overwhelmed by software complexity.”

In 2011, Marc Andreessen famously claimed that “software is eating the world.” Delaroche said this is more true than ever as software is becoming more and more complex. 

He brought up the recent example of Southwest Airlines. Over the holidays, the airline canceled over 2,500 flights, which was about 61% of its planned flights. The blame for this was placed on a number of issues: winter storms, staffing shortages, and outdated technology.

The airline’s chief operating officer Andrew Watterson said in a call with employees: “The process of matching up those crew members with the aircraft could not be handled by our technology … As a result, we had to ask our crew schedulers to do this manually, and it’s extraordinarily difficult … They would make great progress, and then some other disruption would happen, and it would unravel their work. So, we spent multiple days where we kind of got close to finishing the problem, and then it had to be reset.”

While something as disruptive as this may not happen every day, Delaroche said that every day companies are facing major crises. It’s just that the ones we know about are the ones that are big enough to make it into the press. 

“Once in a while we see a big business depending on software that fails,” he said. “I think that in five to ten years, this will be the case on a weekly basis.”

Another area to apply shift-left to

Over the last few years, several elements of the software development process have shifted left. Galael Zino, founder and chief executive of NetFoundry, thinks that software analysis needs to shift left as well. 

This might sound counterintuitive: How can you analyze code that doesn’t exist yet? But Zino shared three changes developers can make to enable this shift.

First, they should adopt a secure-by-design mentality. He recommends minimizing reliance on third-party libraries because often they contain much more than the specific use case you need. For the ones you do need, it’s important to do a thorough review of that code and its dependencies.

Second, developers should add more instrumentation than they think they will need because it’s easier to add instrumentation for analysis at the start than when something is already in production. 

Third, take steps to minimize the attack surface. The internet is the largest single surface area, so reduce risk by ensuring that your software only communicates with authorized users, devices, and servers. 

“Those entities still leverage Internet access, but they can’t access your app without cryptographically validated identity, authentication and authorization,” he said. 

What does the future hold for these tools?

Over the past six months Lo Giudice has seen a big acceleration in adoption of tools that use large language models. 

However, he doesn’t expect everyone to be writing all their code using ChatGPT just yet. There are a lot of things that need to be in place before a company can really bring all this into their software development pipeline. 

Companies will need to start scaling these things up, define best practices, and define the guardrails that need to be put in place. Lo Giudice believes we are still about three to five years away from that happening. 

Another thing that the industry will have to grapple with as these tools come into more widespread use is the idea of proper attribution and copyright. 

In November 2022, there was a class-action lawsuit brought against GitHub Copilot, led by programmer and lawyer Matthew Butterick. 

The argument made in the suit is that GitHub violated open-source licenses by training Copilot on GitHub repositories. Eleven open-source licenses, including MIT, GPL, and Apache, require the creator’s name and copyright to be attributed. 

In addition to violating copyright, Butterick wrote that GitHub violated its own terms of service, DMCA 1202, and the California Consumer Privacy Act. 

“This is the first step in what will be a long journey,” Butterick wrote on the webpage for the lawsuit. “As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable. If companies like Microsoft, GitHub, and OpenAI choose to disregard the law, they should not expect that we the public will sit still. AI needs to be fair & ethical for everyone. If it’s not, then it can never achieve its vaunted aims of elevating humanity. It will just become another way for the privileged few to profit from the work of the many.”
