Just a few months ago, IBM announced plans to assist in the rapid prototyping and commercialization of solutions based on cognitive computing and blockchain, convinced that within five years these disruptive technologies will drive dramatic shifts in every industry, from healthcare to financial services to tourism.

To help clients of all sizes in the Asia Pacific region lead in shaping the future of their industries, IBM opened The Watson Centre at Marina Bay in June of this year, an incubator designed to bring together organizations, business partners and IBM experts to co-create business solutions that leverage IBM's cognitive, blockchain and design capabilities. The Watson Centre at Marina Bay now acts as a center of expertise for almost 5,000 IBM cognitive solutions professionals in the Asia Pacific region, and plays host to clients looking to lead in a variety of markets and industries using Watson technologies that reason, improve through learning, and discover insights hidden in large amounts of complex data.

IBM Watson pioneers a new era of computing

Without doubt, Watson-powered cognitive services kick-start a new era in computing, where systems understand the world the way humans do: through senses, learning, and experience. Watson continuously learns from previous interactions, gaining in value and knowledge over time.

IBM has just unveiled new Watson-powered cognitive services for its Cloud Video technology, designed to transform how organizations unlock data-rich insights from video content and audiences. The new services can help deliver differentiated, personalized viewing experiences for consumers.

Digital video is a booming area for content, but it belongs to the more than 80% of the world's data that is unstructured and therefore difficult to process, so it remains largely untapped for insights. Applying cognitive technology is believed to be a critical next step for mining and analyzing the complex data in video so companies can better understand and deliver the content consumers want.

"Companies are creating video with vast amounts of valuable data, but they don't have a way to easily identify that information or audience reaction to it," said Braxton Jarratt, general manager, IBM Cloud Video. "Today's new services are a major step forward in using IBM's cognitive and cloud capabilities to help companies unlock meaningful information about their videos and viewers so they can create and curate more personalized content that matters to specific audiences."

Accessible through the IBM Cloud, these new services analyze video data that can otherwise be difficult and time-consuming to manually process. They include:

  • Live Event Analysis: Combines Watson APIs with IBM Cloud Video streaming video solutions to track audience reaction to live events in near real time by analyzing social media feeds.
  • Video Scene Detection: Automatically segments videos into meaningful scenes to make it more efficient to find and deliver targeted content.
  • Audience Insights: Integrates IBM Cloud Video solutions with the IBM Media Insights Platform, a cognitive solution that uses Watson APIs to help identify audience preferences, including what they are watching and saying, through social media.


These services are among the latest examples of IBM applying Watson to its Cloud Video platform since the formation of its Cloud Video unit in January 2016. The IBM Cloud Video unit brings together innovations from IBM's R&D labs with the cloud video platform capabilities of Clearleap and Ustream.

Watson has been applied to IBM Cloud Video for analysis of audience reaction to live events

With streaming video being used more and more often to broaden audiences for live events, IBM has combined the Watson Speech to Text and AlchemyLanguage APIs with its IBM Cloud Video technology for a new service that tracks consumer feedback while the event is happening. The new experimental technology is designed to process the natural language in the streaming video and simultaneously analyze social media feeds to provide word-by-word analysis of audience sentiment toward a live event.

This capability, now in the demonstration phase with clients, could be used by companies to gauge and adjust to audience reaction before a speaker has even left the stage. At a product unveiling, for example, viewer enthusiasm might rise or fall when specific features are mentioned, providing valuable insights on aspects of the product that are important to consumers and should be stressed in the future.
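To make the idea concrete, here is a minimal sketch of word-by-word sentiment tracking over a live feed. It uses a tiny made-up lexicon and a sliding window rather than IBM's Speech to Text or AlchemyLanguage APIs, so treat it as an illustration of the technique, not the actual service.

```python
# Illustrative sketch only: a lexicon-based running-sentiment tracker.
# The word lists and window size are assumptions for demonstration,
# not part of any IBM API.
from collections import deque

POSITIVE = {"great", "love", "amazing", "awesome", "excited"}
NEGATIVE = {"boring", "hate", "terrible", "disappointing", "slow"}

def sentiment_stream(words, window=5):
    """Yield a running sentiment score in [-1, 1] after each word."""
    recent = deque(maxlen=window)  # only the last `window` words count
    for word in words:
        token = word.lower().strip(".,!?")
        if token in POSITIVE:
            recent.append(1)
        elif token in NEGATIVE:
            recent.append(-1)
        else:
            recent.append(0)
        yield sum(recent) / len(recent)

# A toy "social feed" reacting to a product unveiling:
feed = "I love this demo but the battery talk was boring".split()
scores = list(sentiment_stream(feed))
print(scores[-1])  # most recent window is dominated by "boring"
```

A production system would replace the lexicon with a trained sentiment model and feed it live speech-to-text output, but the sliding-window shape of the analysis is the same: sentiment is scored continuously as words arrive, so presenters can see reaction shift mid-event.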

Cognitive capabilities to help understand and segment video into scenes

IBM also has piloted a new service that can provide a deeper understanding of the content in video. Today, technology exists in the market that can be used to segment videos based on simple visual cues, such as a change in camera shots. However, content providers continue to search for effective ways to distinguish more subtle shifts that require understanding conversations and context.

The new pilot project from IBM Research uses experimental cognitive capabilities, including technology designed to understand semantics and patterns in language and images, to identify higher-level concepts, such as when a show or movie changes topics. This can be used to automatically segment videos into meaningful chapters, instead of potentially arbitrary breaks in action. For example, the service could automatically create chapters of video clips based on different topics in a lecture, instructions for different cooking recipes or house-hunting scenes for individual neighborhoods. This level of detail would normally require a person to watch and manually categorize every piece of the video.
A leading content provider is already piloting this service as a potential way to improve categorization of videos, indexing of specific chapters and searches for relevant content. This is a first step toward richer metadata services that can be used to help create highly specific content pairings for viewers down to the segment, increasing engagement and time spent.
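One simple way to picture topic-based scene detection is to compare adjacent sentences of a transcript and split wherever their vocabulary overlap drops. The sketch below does exactly that with bag-of-words cosine similarity; the stopword list and threshold are assumptions, and IBM's actual pilot uses far richer semantic and visual signals.

```python
# Illustrative sketch only: segment a transcript into "scenes" at
# points where adjacent sentences stop sharing vocabulary.
import math
from collections import Counter

STOPWORDS = {"the", "and", "a", "in", "has", "this", "onto"}

def tokens(sentence):
    """Bag-of-words vector for a sentence, minus common stopwords."""
    return Counter(w for w in sentence.lower().split() if w not in STOPWORDS)

def cosine(a, b):
    """Cosine similarity between two Counter word vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def segment(sentences, threshold=0.1):
    """Group sentences into scenes, splitting where similarity drops."""
    scenes, current = [], [sentences[0]]
    for prev, nxt in zip(sentences, sentences[1:]):
        if cosine(tokens(prev), tokens(nxt)) < threshold:
            scenes.append(current)
            current = []
        current.append(nxt)
    scenes.append(current)
    return scenes

# A cooking segment followed by a house-hunting segment:
transcript = [
    "preheat the oven and mix the flour",
    "bake the flour mixture in the oven",
    "the kitchen has granite counters",
    "this kitchen opens onto the garden",
]
print(segment(transcript))  # splits between the two topics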

Watson Cognitive Technology combined with IBM Cloud Video Platform to deliver more relevant content to viewers

IBM also plans to integrate its cognitive technologies with the IBM Cloud Video platform to provide deeper insights on audience preferences and sentiment. IBM Media Insights Platform, an IBM Media and Entertainment solution, is being added to IBM Cloud Video's existing Catalog and Subscriber Manager and Logistics Manager products to give customers detailed insight into consumer viewing habits – such as other shows or networks watched, devices used for viewing, and other interests of specific audiences.

The new service, planned for release later this year, is designed to use the new Media Insights Platform to analyze viewing behaviors and social media streams to identify complex patterns that can be used to help improve content pairings and find new viewers interested in existing content. The Media Insights Platform uses several Watson APIs, including Speech to Text, AlchemyLanguage, Tone Analyzer and Personality Insights.
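At its simplest, finding "complex patterns" for content pairings means noticing which titles the same viewers watch together. The toy sketch below counts co-viewing pairs across viewer histories; the data and scoring are invented for illustration and are not the IBM Media Insights Platform.

```python
# Illustrative sketch only: mine co-viewing patterns from viewer
# histories to suggest content pairings. Data and method are
# assumptions for demonstration purposes.
from collections import Counter
from itertools import combinations

def pairings(histories):
    """Count how often each pair of shows is watched by the same viewer."""
    pairs = Counter()
    for shows in histories:
        # sorted() makes (a, b) and (b, a) count as the same pair
        for a, b in combinations(sorted(set(shows)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical viewing histories:
histories = [
    {"cooking", "travel"},
    {"cooking", "travel", "news"},
    {"news", "sports"},
]
top = pairings(histories).most_common(1)[0]
print(top)  # the strongest pairing and its co-view count
```

The real platform layers Watson APIs such as Tone Analyzer and Personality Insights on top of signals like these, combining what audiences watch with what they say on social media to drive recommendations.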

These new services can be used both by media and entertainment companies focused on content creation (Video Scene Detection) and by organizations across all industries that use video to connect with employees or customers (Live Event Analysis).

With the help of Watson, organizations can harness the power of cognitive computing to transform their respective industries and solve important challenges. Find out more about IBM Watson.