With AI helping to solve so many business challenges, we think it’s important to give you a comprehensive wrap-up of all 29 announcements involving AI and machine learning that we made at Google Next ‘19. We hope you can put many of these developments to use to improve your business processes.
Here’s what happened:
Cloud AI solutions
1. Document Understanding AI
If your business requires any manual paperwork processing, you can use Document Understanding AI to classify documents, extract crucial information from scanned images, and apply industry-specific, custom analysis to automate your processing needs.
2. Contact Center AI
After announcing Contact Center AI at Google Next ‘18, today we’re making it available in beta, and announcing integrations provided by Cisco, Five9, Genesys, Mitel, Twilio, and Vonage.
3. Recommendations AI
If yours is a retail-oriented business, Recommendations AI helps you deliver highly personalized product recommendations to your customers at scale. A fully managed service, Recommendations AI puts all of your data to work to deliver high-quality, relevant product recommendations.
4. Visual Product Search
If you’re looking to deliver relevant products to your customers, Visual Product Search helps you match customer-generated images with images from your product catalog. These results reduce purchasing friction for your customers by prompting them with products based on their interests.
5. AutoML Natural Language custom entity extraction and sentiment analysis (beta)
These additions to AutoML Natural Language let you identify and isolate custom fields from input text and also train and serve industry-specific sentiment analysis models on your unstructured data, including customer feedback.
6. AutoML Tables (beta)
If you’re looking for a way to train models without coding, AutoML Tables lets you turn your structured data into predictive insights. You can ingest your data for modeling from BigQuery, Cloud Storage, and other sources.
7. AutoML Vision object detection (beta)
Surpassing its prior image classification abilities, AutoML Vision now helps you detect multiple objects in images, providing bounding boxes to identify object locations.
8. AutoML Vision Edge (beta)
If your business needs to run classifier models on edge devices, AutoML Vision Edge helps you deploy fast, high-accuracy models at the edge and trigger real-time actions based on local data. AutoML Vision Edge supports a variety of edge devices where low latency is critical, including Edge TPUs for fast inference.
9. AutoML Video (beta)
For those who need custom video classification beyond the label capabilities of the Video Intelligence API, AutoML Video now lets you upload your own video footage and custom tags to train models specific to your business needs for tagging and retrieving video with custom attributes.
10. BigQuery Insights: BigQuery ML core (GA)
After its beta release at Google Next ‘18, BigQuery ML is now generally available, and you can create and call new model types directly from SQL.
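To illustrate the SQL-only workflow, here is a minimal sketch of training and calling a BigQuery ML model. The dataset, table, and column names (`mydata.sessions`, `purchased`, and so on) are hypothetical placeholders, not part of the announcement.

```python
# BigQuery ML lets you train a model with a CREATE MODEL statement and
# score new rows with ML.PREDICT, all in Standard SQL. The names below
# are illustrative placeholders.

create_model_sql = """
CREATE OR REPLACE MODEL `mydata.purchase_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
  purchased AS label,   -- BigQuery ML trains against a column named `label`
  pageviews,
  time_on_site
FROM `mydata.sessions`
"""

predict_sql = """
SELECT *
FROM ML.PREDICT(
  MODEL `mydata.purchase_model`,
  (SELECT pageviews, time_on_site FROM `mydata.new_sessions`))
"""

# With the google-cloud-bigquery client installed and credentials set,
# each statement would run via: bigquery.Client().query(sql).result()
```

Because both steps are plain SQL, the same statements can be pasted directly into the BigQuery console instead of being run from a client library.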
11. BigQuery: k-means clustering ML (beta)
K-means clustering helps you establish groupings of data points based on axes or attributes that you specify, and now you can establish groupings for your data via convergence, straight from Standard SQL in BigQuery.
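A sketch of what the k-means option looks like in Standard SQL; the table and column names are hypothetical, and `num_clusters=4` is just an example setting.

```python
# k-means in BigQuery ML: model_type='kmeans' groups rows by the
# attributes in the SELECT list. Names below are placeholders.

kmeans_sql = """
CREATE OR REPLACE MODEL `mydata.station_clusters`
OPTIONS(model_type='kmeans', num_clusters=4) AS
SELECT avg_trip_duration, trips_per_day
FROM `mydata.station_stats`
"""

# ML.PREDICT then assigns each row to a cluster (a centroid id):
assign_sql = """
SELECT *
FROM ML.PREDICT(
  MODEL `mydata.station_clusters`,
  (SELECT avg_trip_duration, trips_per_day FROM `mydata.station_stats`))
"""
```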
12. BigQuery: Import TensorFlow Models (alpha)
A much-requested feature: you can now import your TensorFlow models and call them straight from BigQuery to create classifier and predictive models right from BigQuery.
13. BigQuery: TensorFlow DNN classifier
Deep neural networks (DNNs) can help you classify your data on a large number of features or signals. You can train and deploy a DNN model of your choosing straight from BigQuery’s Standard SQL interface.
14. BigQuery: TensorFlow DNN regressor
If a regression fits your data better than a classifier, you can design a regression model in TensorFlow and then call it to analyze your data in BigQuery.
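The two DNN model types above follow the same CREATE MODEL pattern; a minimal sketch with hypothetical table, column, and layer settings:

```python
# DNN classifier and regressor in BigQuery ML. The hidden_units option
# sets the layer sizes; all names and values here are illustrative.

dnn_classifier_sql = """
CREATE OR REPLACE MODEL `mydata.churn_dnn`
OPTIONS(model_type='dnn_classifier',
        hidden_units=[64, 32]) AS
SELECT churned AS label, tenure_days, monthly_spend, support_tickets
FROM `mydata.customers`
"""

dnn_regressor_sql = """
CREATE OR REPLACE MODEL `mydata.spend_dnn`
OPTIONS(model_type='dnn_regressor',
        hidden_units=[64, 32]) AS
SELECT monthly_spend AS label, tenure_days, support_tickets
FROM `mydata.customers`
"""
```

The only structural difference between the two is the `model_type` value and whether the `label` column is categorical or continuous.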
Data science platform
15. AI Platform—notebooks, data labeling, SDKs, and console interface (beta)
AI Platform, available in beta, provides a single location from which to select models, or train your own models and set them up to serve in production, whether you’re a data scientist or a software engineer. This includes a development environment, Jupyter notebooks, pre-built algorithms, custom containers, custom user code support for prediction, and 4-core support for prediction.
16. AI Platform Notebooks (beta)
If you’re eager to test out models and hyperparameter configurations in an interactive, shared environment, you can deploy JupyterLab notebooks on a semi-managed service.
17. Cloud AI Data Labeling Service (beta)
Cloud AI now provides you with a paid service to request labelers to classify your uploaded business data for use in training models, or use automated tools that let you efficiently label your data at scale.
18. Hybrid SDK (alpha)
This is the underlying technology that helps users move their ML code from an on-premises cluster running Kubeflow to GCP with almost no code changes.
19. AI Platform Online Prediction: User Code Support (beta)
AI Platform’s online prediction functionality now supports user-supplied custom code, which helps you pre-process your data however you choose, both at training time and at serving time.
20. AI Hub (beta)
AI Hub, available in beta, provides a single location from which your team can test out and share APIs, Google-provided models, third-party models, learning content, and data science notebooks, as you experiment and iterate on your machine learning models.
21. Kubeflow 0.5
Kubeflow helps you orchestrate your machine learning training pipelines across on-prem and cloud-based resources. A cloud-native platform that integrates Kubernetes with TensorFlow, Kubeflow now lets you containerize your training and serving infrastructure.
Pre-trained machine learning API updates
22. Cloud Vision API—bundled enhancements (beta)
The Vision API can now operate on batches of images through batch prediction, and document text detection now supports online annotation of PDFs, as well as files that contain a mix of scanned (raster) and rendered text.
23. Cloud Natural Language API—bundled enhancements (beta)
Cloud Natural Language now includes support for Russian and Japanese, as well as built-in entity extraction for receipts and invoices.
24. Cloud Translation API V3 (beta)
Our third revision of the Translation API helps you maintain and control your brand by defining the vocabulary and terminology you want to override within translations. You can then easily integrate your brand-specific terms into your translation workflows.
25. Video Intelligence API—bundled enhancements (beta)
The Video Intelligence API lets content creators search for tagged aspects of their video footage. The API now supports optical character recognition (generally available), object tracking (also generally available), and new streaming video annotation capability (in beta).
26. Cloud TPU v3 (GA)
Our third-generation, liquid-cooled Tensor Processing Units provide some of the fastest training times when used at scale. These Compute Engine resources are now generally available to help you train your machine learning models faster.
27. NVIDIA Tesla T4 GPU for Compute Engine (GA)
NVIDIA’s Tensor Core-enabled GPU, the Tesla T4, is now generally available on Compute Engine. This GPU is primarily designed for runtime inference, but also enables lower-cost ML training and visualization with new ray-tracing acceleration. These GPUs are now available in eight regions.
Dialogflow for the enterprise
28. Sentiment Analysis (GA) for Dialogflow Enterprise Edition
Sentiment analysis is now seamlessly integrated and generally available in Dialogflow Enterprise Edition, which lets you model chat-oriented conversations and responses as you build interactive chatbots.
29. Text-to-Speech (GA) for Dialogflow Enterprise Edition
Text-to-Speech is now also integrated and generally available in Dialogflow Enterprise Edition, letting your chatbots trigger synthesized speech for more natural user interaction.
Wow, that was a lot! As you can see, we’re constantly making updates to our APIs to better support developers, and we’re also launching new solutions to meet an ever-growing breadth of business and industry needs. These changes can help you, especially if you’re looking to build on existing reference architectures rather than re-invent how you integrate AI into your business from the ground up. Please also check out our recorded sessions, in case there was anything at Google Next ‘19 that you missed.
Source: Google