machine learning Archives - SD Times
https://sdtimes.com/tag/machine-learning/

UserTesting announces friction testing capability
https://sdtimes.com/software-development/usertesting-announces-friction-testing-capability/ (Wed, 12 Apr 2023)

UserTesting announced machine learning innovations to the UserTesting Human Insight Platform to help businesses gain the context needed to understand and address user needs.

One update is friction detection, powered by machine learning, which visually identifies moments in individual video sessions and across multiple videos where people experience friction behaviors, like excessive clicking or scrolling, while using digital products, including prototypes, apps, and websites, according to the company.

Organizations can now merge behavioral data with video feedback to gain a comprehensive understanding of the problem and improve the likelihood of successful outcomes. This functionality is especially beneficial for product and design teams, as it empowers them to visualize the user’s journey and refine processes before committing costly development resources.

The update also includes an integration with Microsoft Teams, which allows users of both products to share videos and related content with colleagues without leaving the UserTesting platform.

UserTesting also introduced expanded capabilities for Invite Network to help teams gain access to more audiences with increased privacy. The company stated that it will soon offer an integrated login experience for customers when they access the UserTesting, UserZoom, and EnjoyHQ platforms.

Charmed Kubeflow 1.7 adds support for serverless ML workloads
https://sdtimes.com/ai/charmed-kubeflow-1-7-adds-support-for-serverless-ml-workloads/ (Thu, 30 Mar 2023)

Canonical, the publisher of the Ubuntu operating system, has announced the latest version of Charmed Kubeflow, its open-source MLOps platform.

Charmed Kubeflow 1.7 adds the ability to run serverless ML workloads, which increases developer productivity by reducing routine tasks and handling infrastructure for them.

Another win for developers is that new dashboards will improve user experience and make infrastructure monitoring easier. 

This release also introduces new AI capabilities, such as the addition of KServe for model serving and support for additional inference servers like NVIDIA Triton.

Support has been added for PaddlePaddle, which is a platform for developing deep learning models. 

The Katib component has also been updated with a new UI that reduces the number of low-level commands needed to find correlations between logs. Katib also has a new Tune API, which makes it easier to build tuning experiments and simplifies how trial metrics can be accessed. 

“With these Katib enhancements, data scientists can reach better performance metrics, reduce time spent on optimisation and experiment quickly. This results in faster project delivery, shorter machine learning lifecycles and a smoother path to optimised decision-making with AI projects,” Canonical wrote in a blog post.

Charmed Kubeflow 1.7 also includes support for statistical analysis of both structured and unstructured data. This opens up the platform to a new group of people and provides access to packages and libraries like R Shiny and Plotly. 

And finally, the company announced that the platform was recently certified as NVIDIA DGX-Ready software. According to Canonical, this will allow companies to accelerate their “at-scale deployments of AI and data science projects on the highest-performing hardware.”

PyTorch 2.0 introduces accelerated Transformer API to democratize ML
https://sdtimes.com/ai/pytorch-2-0-introduces-accelerated-transformer-api-to-democratize-ml/ (Mon, 20 Mar 2023)

The PyTorch team has officially released PyTorch 2.0, which was first previewed back in December 2022 at the PyTorch Conference. 

PyTorch is a Linux Foundation machine learning framework that was originally developed by Meta. 

This release includes a high-performance implementation of the Transformer API. It now supports more use cases, such as models using Cross-Attention and Transformer Decoders, as well as training. The goal of the new API is to make training and deployment of Transformer models more cost effective and affordable, the team explained. 
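
For context, the accelerated path is built around PyTorch’s fused scaled dot product attention operator; below is a minimal, illustrative sketch of calling it directly. The tensor shapes are made up for demonstration and are not taken from the release.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, sequence length, head dimension)
batch, heads, seq_len, head_dim = 2, 8, 128, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# PyTorch 2.0 dispatches this call to an optimized fused kernel when the
# hardware and dtypes allow it, and falls back to a math implementation otherwise.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```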

PyTorch 2.0 also introduces torch.compile as the main API for wrapping models and returning a compiled model. This is a completely additive feature, helping to maintain backwards compatibility.  

Torch.compile is built on four other new technologies: 

  1. TorchDynamo, which uses Python Frame Evaluation Hooks to safely capture PyTorch programs
  2. AOTAutograd, which can be used to generate ahead-of-time backward traces
  3. PrimTorch, which condenses over 2,000 PyTorch operators down into a set of 250 that can be targeted to build a complete PyTorch backend, significantly reducing the barrier to entry
  4. TorchInductor, which is a deep learning compiler that makes use of OpenAI Triton.
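
As a rough sketch of how the new API is used in practice (the model below is illustrative, not from the release notes):

```python
import torch
import torch.nn as nn

# An ordinary eager-mode model; torch.compile wraps it without code changes.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
out = compiled_model(x)  # the first call triggers compilation; later calls reuse it
print(out.shape)  # torch.Size([32, 10])
```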

“We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile(),” the PyTorch team wrote in a blog post.

This release also adds support for 60 new operators in the Metal Performance Shaders (MPS) backend, which provides GPU-accelerated training on macOS platforms. This brings the total coverage to 300 operators to date. 
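
For illustration, a minimal sketch of targeting the MPS backend, falling back to CPU when it is unavailable; the computation is made up:

```python
import torch

# Use the Metal Performance Shaders backend on Apple silicon when available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x.T  # the matrix multiply runs on the GPU when the MPS device is selected
print(y.device)
```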

AWS customers will see improved performance on AWS Graviton compared to previous releases. These improvements focus on GEMM kernels, bfloat16 support, primitive caching, and the memory allocator. 

This release also includes several beta updates to PyTorch domain libraries and other libraries like TorchAudio, TorchVision, and TorchText. 

There are also several features in the prototype stage, including TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch, and TorchInductor.

TensorFlow announces its roadmap for the future with focus on speed and scalability
https://sdtimes.com/software-development/tensorflow-announces-its-roadmap-for-the-future-with-focus-on-speed-and-scalability/ (Fri, 21 Oct 2022)

The team behind the machine learning framework TensorFlow recently released a blog post laying out its ideas for the future of the project. 

According to the TensorFlow team, the ultimate goal is to provide users with the best machine learning platform possible as well as transform machine learning from a niche craft into a mature industry.  

In order to accomplish this, the team said it will listen to user needs, anticipate new industry trends, iterate on APIs, and work to make it easier for customers to innovate at scale.

To facilitate this growth, TensorFlow intends to focus on four pillars: making the platform fast and scalable, supporting applied ML, making it ready to deploy, and keeping it simple. 

The team stated that it will be focusing on XLA compilation with the intention of making model training and inference workflows faster on GPUs and CPUs. Additionally, it will be investing in DTensor, a new API for large-scale model parallelism.

The new API allows users to develop models as if they were training on a single device, even when utilizing several different clients. 
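
On the XLA side, existing TensorFlow code can already opt into compilation; here is a minimal, illustrative sketch (the computation is made up for demonstration):

```python
import tensorflow as tf

# Opt a function into XLA compilation with jit_compile=True.
@tf.function(jit_compile=True)
def dense_step(x, w):
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal((32, 64))
w = tf.random.normal((64, 128))
print(dense_step(x, w).shape)  # (32, 128)
```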

The team also intends to invest in algorithmic performance optimization techniques such as mixed-precision and reduced-precision computation in order to accelerate performance on GPUs and TPUs.
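
As an illustration of what such reduced-precision computation looks like in user code today, here is a minimal sketch using the existing Keras mixed-precision API; the model itself is made up:

```python
import tensorflow as tf

# Compute in float16 where it is safe while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(64,)),
    # Keep the final layer in float32 for numerically stable outputs.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```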

According to the TensorFlow team, new tools for CV and NLP are also a part of its roadmap. These tools will come as a result of the heightened support for the KerasCV and KerasNLP packages which offer modular and composable components for applied CV and NLP use cases. 

Next, TensorFlow stated that it will be adding more developer resources such as code examples, guides, and documentation for popular and emerging applied ML use cases in order to lower the barrier to entry for machine learning. 

The company also intends to simplify the process of exporting to mobile (Android or iOS), edge (microcontrollers), server backends, or JavaScript as well as develop a public TF2 C++ API for native server-side inference as part of a C++ application.
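
For reference, the mobile export path already exists via TensorFlow Lite; below is a minimal, illustrative sketch of converting a Keras model, where the model and file name are placeholders:

```python
import tensorflow as tf

# A toy model standing in for whatever was trained.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert to a TensorFlow Lite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```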

TensorFlow also stated that it will become easier to deploy models developed in JAX, using TensorFlow Serving on the server side and TensorFlow Lite and TensorFlow.js for mobile and the web. 

Lastly, they are working to consolidate and simplify APIs as well as minimize the time-to-solution for developing any applied ML system by focusing more on debugging capabilities. 

A preview of these new TensorFlow capabilities can be expected in Q2 2023, with the production version coming later in the year. To follow the progress, see the TensorFlow blog and YouTube channel.

Domino 5.3 available to improve data science access
https://sdtimes.com/data/domino-5-3-available-to-improve-data-science-access/ (Thu, 06 Oct 2022)

Domino 5.3 was released to improve how organizations can get the most out of data science across any cloud or on-premises infrastructure. 

The new version introduces a private preview of Domino Nexus hybrid and multi-cloud capabilities and an expanded suite of connectors to simplify and democratize access to critical data sources. On top of that, new GPU inference capabilities can help productionize data science projects such as deep learning applications. 

“Modern enterprise data science teams need access to a wide variety of data and infrastructure across different clouds, regions, on-premises clusters and databases,” said Nick Elprin, co-founder and CEO of Domino Data Lab. “Domino 5.3 gives our customers the ability to use the data and compute they need wherever it lives, so they can increase the speed and impact of data science without sacrificing security or cost efficiency.”

Users have access to pre-built connectors that point to widely used data sources, advanced search capabilities, and integrated data versioning.

The platform, which can help train advanced deep learning models for AI, now can extend those advantages to model deployment with no DevOps skills required. Also, companies involved in the Nexus private preview can now restrict access to data by region and get more granular control over how to enforce compliance in very specific ways depending on the data location or sovereign regulations.

Additional details on Domino 5.3 are available here. 

OctoML launches new machine learning platform expansion
https://sdtimes.com/ai/octoml-launches-new-machine-learning-platform-expansion/ (Thu, 23 Jun 2022)

Today OctoML, provider of a machine learning acceleration platform, released a major platform expansion in order to accelerate the development of AI-powered applications by eliminating bottlenecks in machine learning development. 

This release is intended to enable app developers and IT operations teams to transform trained machine learning models into agile, portable, production-ready software functions that integrate with their existing application stacks and DevOps workflows.

According to OctoML, the platform expansion will help to conquer the challenges of enterprise software development by abstracting out complexities, stripping away dependencies, and delivering models as production-ready software functions.

“AI has the potential to change the world, but it first needs to become sustainable and accessible,” said Luis Ceze, CEO of OctoML. “Today’s manual, specialized ML deployment workflows are keeping application developers, DevOps engineers and IT operations teams on the sidelines. Our new solution is enabling them to work with models like the rest of their application stack, using their own DevOps workflows and tools. We aim to do that by giving customers the ability to transform models into performant, portable functions that can run on any hardware.”

A few key features of this platform expansion include: 

  • Automation detects and resolves dependencies, cleans and optimizes model code, and accelerates and packages the model for any hardware product 
  • OctoML CLI brings users a local experience of OctoML’s feature set and integrates with SaaS capabilities to create accelerated hardware-independent models-as-functions 
  • Comprehensive fleet of over 80 deployment targets in the cloud and at the edge with accelerated computing, including GPUs, CPUs, and NPUs from NVIDIA, Intel, AMD, Arm, and AWS Graviton 
  • Expansive software catalog covering all major ML frameworks, acceleration engines, and software stacks from chip makers 

“NVIDIA Triton is the top choice for AI inference and model deployment for workloads of any size, across all major industries worldwide,” said Shankar Chandrasekaran, product marketing manager at NVIDIA. “Its portability, versatility and flexibility make it an ideal companion for the OctoML platform.”

Data Profiler: Capital One’s open-source machine learning technology for data monitoring
https://sdtimes.com/data/data-profiler-capital-ones-open-source-machine-learning-technology-for-data-monitoring/ (Thu, 28 Apr 2022)

With the move to the cloud, the amount of data that companies are able to manage has grown exponentially. This is why Capital One created Data Profiler, the open-source Python library that utilizes machine learning in order to help users monitor big data and detect information that should be properly protected.  

Data Profiler brings users a pre-trained deep learning model to ensure efficient identification of sensitive information, components to conduct statistical analysis of the dataset, as well as an API to build data labelers.

“In the future, we’re going to be seeing more synthetic data generation – it’s a crucial component of the model development process for explainability and training. So, we needed a way to understand the data we were working with, and to do that we needed to do in-depth analysis of those datasets,” said Jeremy Goodsitt, a lead machine learning engineer at Capital One. “We ended up building out the Data Profiler and even extending on top of that… which is our data labeling component that does the sensitive data detection.”

He went on to explain that the deep learning model within the data labeler works to analyze the unstructured text of a dataset and then identifies what type of data is being represented in that specific dataset. 

“Our library has a list of labels of which a subset is considered non-public personally identifiable pieces of information… the data labeler is able to use that deep learning model to identify where that exists in a dataset… and calls out where that exists to that user that’s doing the analysis,” Goodsitt explained.

Data Profiler offers customers versatility. Whether the data is structured, unstructured, or semi-structured the library is able to identify the schema, statistics, and entities from the data. This flexibility allows models to be modified and makes it possible to run several different models on the same dataset with just a few lines of code.  
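
As a rough sketch of that workflow, based on the library’s documented usage, the snippet below profiles a file in a few lines of code; the file name is a placeholder and exact report options may vary by version.

```python
import json
from dataprofiler import Data, Profiler

data = Data("customers.csv")   # the reader auto-detects the file type (CSV, JSON, text, etc.)
profile = Profiler(data)       # computes statistics and runs the sensitive-data labeler

report = profile.report(report_options={"output_format": "compact"})
print(json.dumps(report, indent=2, default=str))
```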

Goodsitt also discussed a possible use case where this sensitive data detection model can be used to sanitize datasets on a mobile device so that when they leave the customer’s device, the specific personal information is removed from the data, ensuring protection regardless of where that dataset goes. 

According to Nureen D’Souza, leader of the Open-Source Program Office at Capital One, the main reasons why the company chose to open-source Data Profiler are to facilitate collaboration with new talent, showcase the expertise of its data scientists, and give back to the open-source community.   

“We can now have others in a similar field contribute to this project and make Data Profiler greater than it is today,” she said. “We thought it would be good to open-source because it solves the problem that we are seeing, and we couldn’t find another open-source project that would.”

Goodsitt also stressed the benefits of Data Profiler’s reader capability. This works as a single command class that allows customers to point to different types of files or even a URL that is hosting a dataset and then automatically identify that dataset and read it for the user. 

“Users don’t have to go in and look at the file and try to understand it, they can just direct the data class at a file or a repository of datasets… so that’s really powerful,” he said. 

Data Profiler also allows users to parallelize, batch, or stream profiling a dataset so that the entire dataset doesn’t have to be profiled all at once. According to Goodsitt, prior to this release, this particular feature was not easily discoverable unless you were building your own statistical analysis. 

According to D’Souza, since its release back in 2021, Data Profiler has earned 54 forks on GitHub as well as over 700 stars, highlighting the way that this open-source technology is being revered throughout the community, with no sign of slowing down. 

Being a Python library, this open-source technology is set to be featured at PyCon 2022, the Python Conference, taking place from April 27 through May 3 in Salt Lake City. After being produced as a virtual event for two years, PyCon is back and in person, with several health and safety guidelines in place. 

To learn more about Capital One’s Data Profiler, visit the website.  


Content provided by SD Times and Capital One. 

SD Times Open-Source Project of the Week: KServe
https://sdtimes.com/ai/sd-times-open-source-project-of-the-week-kserve/ (Fri, 26 Nov 2021)

KServe is a tool for serving machine learning models on Kubernetes. It encapsulates the complexity of tasks like autoscaling, networking, health checking, and server configuration. This allows users to provide their machine learning deployments with features like GPU Autoscaling, Scale to Zero, and Canary Rollouts. 

Created by IBM and Bloomberg’s Data Science and Compute Infrastructure team, KServe was previously known as KFServing. The project was inspired by IBM’s idea of serving machine learning models in a serverless way using Knative. Bloomberg and IBM met at the Kubeflow Contributor Summit in 2019; at the time, Kubeflow didn’t have a model serving component, so the two companies worked together on a new project to provide a model serving and deployment solution. 

The project debuted at KubeCon + CloudNativeCon North America in 2019. It was later moved from the Kubeflow Serving Working Group into an independent organization in order to grow the project and broaden the contributor base. At that point the project became known as KServe. 

KServe provides model explainability through integrations with Alibi, AI Explainability 360, and Captum. It also provides monitoring for models in production through integrations with Alibi-detect, AI Fairness 360, and Adversarial Robustness Toolbox (ART). 

The project has been adopted by a number of organizations, including Nvidia, Cisco, Zillow, and more.

Amazon releases new natural language query tool
https://sdtimes.com/data/amazon-releases-new-natural-language-query-tool/ (Mon, 27 Sep 2021)

AWS announced the release of Amazon QuickSight Q, a natural language query tool for the Enterprise Edition of QuickSight. 

It uses Natural Language Understanding (NLU) to discover the intent behind questions and is able to answer questions that refer to all data sources supported by QuickSight, according to AWS. 

This includes data from all AWS sources such as Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Aurora, Amazon Athena, and Amazon Simple Storage Service (Amazon S3), as well as third-party sources and SaaS apps such as Salesforce, Adobe Analytics, ServiceNow, and Excel.

Q is powered by topics, which are generally created by QuickSight Authors for use within an organization. Topics represent subject areas for questions and are created interactively.

In addition to results, it gives access to explanatory information that can be reviewed to ensure that the question was understood and processed as desired. 

Additional details on the tool and its available locations and pricing model are available here.

Why machine learning models fail
https://sdtimes.com/ai/why-machine-learning-models-fail/ (Thu, 12 Aug 2021)

Machine learning is quickly becoming an important tool for automation, but failing models and improper background knowledge are creating more issues than they are solving. 

“I think to build a good machine learning model… if you’re trying to do it repeatedly, you need great talent, you need an outstanding research process, and then finally you need technology and tooling that’s kind of up to date and modern,” said Matthew Granade, co-founder of machine learning platform provider Domino Data Lab. He explained how all three of these elements have to come together and operate in unity in order to create the best possible model, though Granade placed a special emphasis on the second aspect. “The research process determines how you’re going to identify problems to work on, find data, work with other parts of the business, test your results, and deliver those results to the business,” he explained. 

According to Granade, the absence of the essential combination of those aspects is the reason why so many organizations are faced with failing models. “Companies have really high expectations for what data science can do but they’re struggling to bring those three different ingredients together,” he said. This raises the question: why are organizations investing so much into machine learning models but failing to invest in the things that will actually make their models an ultimate success? According to a study conducted by Domino Data Lab, 97% of those polled say data science is crucial to long-term success, however, nearly as many say that organizations lack the staff, skills, and tools needed to sustain that success. 

Granade traces this problem back to the tendency to look for shortcuts. “I think the mistake a lot of companies make is that they kind of look for a quick fix,” he began. “They look for a point solution, or this idea of ‘I’m going to hire three or four really smart PhDs and that’s going to solve my problem.’” According to Granade, these types of quick fixes never work long-term because the issues run deeper. It is always going to be important to have the best minds on your team, but they cannot exist independently. Without the best processes and best tech to back them up, it becomes a futile attempt to utilize data science. 

Domino Data Lab’s study also revealed that 82% of executives polled said they thought that leadership needed to be concerned about bad or failing models as the consequences of those models could be astronomical. “Those models could lead to bad decisions that produce lost revenue, it could lead to bad key performance indicators, and security risks,” Granade explained. 

Granade predicts that companies that find themselves behind the curve on data science and machine learning practices will work quickly to correct their mistakes. Organizations that have tried and failed to implement this kind of technology will keep their eye on the others that have succeeded and take tips where they can get them. Not adapting isn’t an option in most industries, as it will inevitably lead to certain companies falling behind as a business. Granade returns to a comprehensive approach as the key to remedying the mistakes he has seen. “I think you can say, ‘We’re going to invest as a company to build out this capability holistically. We’re going to hire the right people, we’re going to put a data science process in place, and the right tooling to support that process and those people,’ and I think if you do that you can see great results,” he said. 

Jason Knight, co-founder and CPO at OctoML, believes that another aspect of creating a successful data science and machine learning model is a firm understanding of the data you’re working with. “You can think you have the right data, but because of underlying issues with how it’s collected or annotated or generated in the first place, it can kind of create problems where you can’t generate a model out of it,” he explained. When there is an issue with the data that goes into generating a model, no matter what technique an organization uses, it will not work as intended. This is why it is so important not to skip steps when working with this kind of technology; assuming that the source data will work without properly understanding its details will spark issues down the line.

Vaibhav Nivargi, founder and CTO of the cloud-based AI platform Moveworks, also emphasized good data as being an essential aspect of creating a successful model. “It requires everything from the right data to represent the real world, to the right understanding of this data for a given domain, to the right algorithm for making predictions,” he said. The combination of these will help to ensure that the data going into creating the model is dependable and will create the desired results.

OctoML’s Knight also said that while certain organizations have not seen the success they had originally intended with data science and machine learning, he thinks the future is bright. “In terms of the future, I remain optimistic that people are pushing forward the improvements needed to give solutions for the problems we have seen,” he said.
