No Summer Breaks at Wazee Digital!
Announcing New Features and Enhanced AI

Core Insiders | June 2018

Ben Howell June 28, 2018

We’ve been hard at work this summer, adding new features to Core while migrating the remaining parts of our business to AWS (also affectionately known around here as THE Cloud).  

During the months of May and June, we completed our migration to AWS. Users didn’t notice anything different about Core or the way they access it. You can read about our partnership with AWS here and our release notes on the migration here.

New Features

We are happy to announce the following new Core features, which are available now.  

Comments on Bin items are now visible in the Screening Room, allowing users to view all information on one screen.

The Advanced Player has replaced the basic player in the Basic Data tab, the Advanced Data tab, and the Single Asset Workflow view.

Core users who publish content to YouTube and have activated the monetization feature can now add a Match Policy. The YouTube-configured Match Policies populate in a drop-down menu when the asset has Monetization and Usage Policy options associated with it.
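
For readers who like to see behavior spelled out, here is a minimal sketch of what a publish request might carry once a Match Policy is selected. The field names and the helper function are hypothetical, invented for illustration; they are not Core's actual API.

```python
# Hypothetical sketch only: the field names and this helper are not Core's
# actual API. It illustrates the rule that a Match Policy applies only when
# the asset has Monetization and Usage Policy options associated with it.

def build_publish_request(asset_id, usage_policy=None, match_policy=None):
    """Assemble an illustrative YouTube publish payload for a Core asset."""
    if match_policy and not usage_policy:
        # Mirrors the UI behavior: the Match Policy drop-down populates
        # only for assets that carry Monetization/Usage Policy options.
        raise ValueError("A Match Policy requires a Usage Policy on the asset.")
    return {
        "asset_id": asset_id,
        "destination": "youtube",
        "usage_policy": usage_policy,   # e.g. "Monetize in all countries"
        "match_policy": match_policy,   # one of the YouTube-configured Match Policies
    }

print(build_publish_request("asset-123",
                            usage_policy="Monetize in all countries",
                            match_policy="Block in all countries"))
```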

Core + Veritone Integration

The new Core + Veritone integration allows metadata extracted by Veritone’s AI engines to be brought into Core automatically and stored on a timeline, eliminating the need for manual metadata entry.

A user selects the content (long-form or short-form) in Core that needs Veritone-created metadata. One or more of Veritone’s 180+ AI engines then processes the content and adds the extracted metadata to the content’s timeline metadata fields. In addition to more traditional applications of AI, such as speech to text, this is particularly useful when content requires metadata for other reasons, such as regulatory compliance (the Children’s Television Act, language, etc.), analytics, and metadata identification around objects or logos. The integration can be run on content of any age.
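
For those who think in code, here is a minimal sketch of that round trip. The function names and payload shapes below are assumptions made for illustration; they are not the actual Core or Veritone APIs.

```python
# A minimal sketch of the Core + Veritone round trip. The function names
# and payload shapes are assumptions for illustration; they are not the
# actual Core or Veritone APIs.

def submit_to_veritone(asset_id, engines):
    """Pretend call: send a Core asset to the selected Veritone engines
    and return time-coded metadata fragments."""
    # In the real integration Veritone's engines process the media;
    # here we fabricate a single fragment so the sketch runs.
    return [{"engine": engines[0], "start": 12.0, "end": 15.5, "value": "Jane Host"}]

def write_timeline_metadata(asset_id, fragments):
    """Pretend call: store each fragment in the asset's timeline metadata fields."""
    for f in fragments:
        print(f"{asset_id}: {f['start']:.1f}-{f['end']:.1f}s  {f['value']} ({f['engine']})")

# 1. A user selects content and engines; 2. the engines extract metadata;
# 3. the results land on the content timeline, with no manual entry.
fragments = submit_to_veritone("asset-42", ["facial-recognition"])
write_timeline_metadata("asset-42", fragments)
```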

Veritone AI Engines Currently Supported by the Integration:

  • Facial Recognition – matches faces to names in a shared database or a customer-curated database
  • Facial Detection – detects faces to tag manually
  • Object Recognition – identifies objects and produces keyword tags
  • Speech to Text – transcribes audio into text

Coming Soon:

  • Logo Recognition – matches logos to database
  • Optical Character Recognition – transcribes words on screen

AI IRL (In Real Life)

The product team could have asked the development team to use AI to create a new personal assistant, but Alexa, Cortana, and Siri seem to have ordering toilet paper, playing Jeopardy, and adding songs to our playlists well in hand (if they had hands, that is).  

This foray into AI wasn’t the first time we had looked at it as a product team or as a company. While listening to each AI company explain why it was the biggest and best, we were impressed by the variety of AI engines. We discovered that most have found a niche in one area or another, be it media and entertainment, the public sector, or government. The issue we faced when deciding on a technology partner was that a product such as Core would benefit most from multiple AI engines. It quickly became clear that no single vendor or AI engine could provide all the types of metadata processing Wazee Digital needed. But selecting multiple vendors and offering a selection of metadata-processing engines would bring its own support issues: each vendor would need its own development time (API-driven or not), testing, and ongoing support. Within a short time, keeping up with several vendors and their respective AI engines would become difficult and cost-prohibitive, given the rapid pace of advancement in the field.

Then we met Veritone. What intrigued us about working with Veritone was the number and variety of AI engines we could offer our clients for metadata enhancement without having to support each one individually. That’s because Conductor, Veritone’s orchestration service layer, readily does it for us.  

We’re all waiting to see whether AI brings about humankind’s demise or infinitely extends minds by merging human and machine. In the meantime, Wazee Digital decided to start working on a couple of use cases that apply AI-generated metadata to video content for cataloging and regulatory compliance. In both use cases, the tasks are completed today by human eyes, brains, and fingers. Because we specialize in asset management and monetization, there aren’t many things we like more than making our clients’ lives better and providing ways to create a more efficient workflow for metadata creation and augmentation. It ranks right up there with breakfast burritos and cold refreshments (read: beer!).

Wazee Digital has been running facial recognition software, phonetic indexing, and associated metadata creation for nearly a decade. We accomplished those functions with a combination of partner and proprietary software that has been built and refined over the years. It was time for a refresh: we wanted to advance this feature and make it better for our customers. During a proof of concept with Veritone, we verified the hypothesis that AI engines have their own talents: some are great at identifying people in dim lighting; some are terrific at identifying talking heads close up; others excel at facial recognition in a group scene. Through the Veritone orchestration layer, it was easy to find a combination of engines to accommodate the various types of content we want to feed it: long-form movies, short-form programs, news/archival content, and sports. After working with our partners at Veritone to learn which engines and settings were appropriate for different types of content, we are happy to say that facial recognition and phonetic indexing are part of an integration we offer customers today!
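
As a rough illustration of the kind of routing that proof of concept settled on, the sketch below maps content types to engine combinations. The engine names and the static lookup are invented for this sketch; in the real integration, Veritone’s Conductor orchestration layer handles the routing for us.

```python
# Illustrative only: the engine names and this static lookup are invented
# for the sketch; in the real integration, Veritone's Conductor
# orchestration layer handles this routing.
ENGINES_BY_CONTENT_TYPE = {
    "long-form movie":    ["face-recognition-dim-light", "phonetic-indexing"],
    "short-form program": ["face-recognition-closeup", "phonetic-indexing"],
    "news/archival":      ["face-recognition-group", "speech-to-text"],
    "sports":             ["face-recognition-group", "logo-recognition"],
}

def pick_engines(content_type):
    # Fall back to a general-purpose combination for unknown content types.
    return ENGINES_BY_CONTENT_TYPE.get(
        content_type, ["face-recognition-group", "speech-to-text"])

print(pick_engines("sports"))  # ['face-recognition-group', 'logo-recognition']
```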

Everyone in television does it, but no one wants to talk about it — regulatory compliance. To comply with the Children’s Television Act of 1990, which is enforced by the Federal Communications Commission (FCC), one of our clients must identify the talent (hosts or program characters) appearing in the advertisements scheduled to run during children’s programs. The aim is to avoid showing ads that contain the same talent as the adjacent programs, even in the face of program changes and traffic scheduling/sales. The best way to identify a talent conflict is to match metadata between the programs and commercials. Almost all media companies do this job manually, which is time-consuming and carries the risk of error. The solution in this use case is to send programs and commercials to Veritone, which automatically creates metadata, sends it back to Core through our API, and places it into the appropriate content timelines. The content, along with its metadata, is then available for searching, clip creation, publishing to social media, and content/metadata extraction and delivery.
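
A toy version of that conflict check might look like the following. It assumes the Veritone-created metadata has already been reduced to a set of talent names per program and per commercial; every name and data structure here is hypothetical.

```python
# A toy version of the compliance check: assumes the AI-generated metadata
# has already been reduced to a set of talent names per program and per
# commercial. Every name and structure here is hypothetical.

def find_talent_conflicts(program_talent, scheduled_ads):
    """Return the IDs of ads whose talent overlaps the adjacent program's
    talent, i.e., the ads that would violate the adjacency rule."""
    return [ad_id for ad_id, talent in scheduled_ads.items()
            if program_talent & talent]

program = {"Captain Example", "Sidekick Sam"}   # hosts/characters in the program
ads = {
    "ad-001": {"Captain Example"},              # conflict: shared talent
    "ad-002": {"Narrator Nell"},                # no overlap
}
print(find_talent_conflicts(program, ads))      # ['ad-001']
```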

In addition to the above use cases, AI can be applied across archived content, which can include facial recognition, phonetic indexing, and metadata identification around objects or logos. (For example, an AI engine can identify all the content containing a lamp and a chair, and/or count how many times a sponsorship logo appeared.) It can also help determine ratings based on language, nudity, or other inappropriate content.
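
Here is a small sketch of what such a query over AI-generated timeline metadata could look like; the record shape and the labels are invented for illustration.

```python
# Sketch of querying AI-generated timeline metadata across an archive.
# The record shape and labels are invented for illustration.
from collections import Counter

timeline_metadata = [
    {"asset": "ep-01", "label": "lamp"},
    {"asset": "ep-01", "label": "chair"},
    {"asset": "ep-01", "label": "AcmeCola logo"},
    {"asset": "ep-02", "label": "AcmeCola logo"},
    {"asset": "ep-02", "label": "chair"},
]

# Which assets contain both a lamp and a chair?
labels_by_asset = {}
for rec in timeline_metadata:
    labels_by_asset.setdefault(rec["asset"], set()).add(rec["label"])
lamp_and_chair = [a for a, labels in labels_by_asset.items()
                  if {"lamp", "chair"} <= labels]

# How many times did the sponsorship logo appear?
logo_count = Counter(r["label"] for r in timeline_metadata)["AcmeCola logo"]

print(lamp_and_chair, logo_count)  # ['ep-01'] 2
```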

There will still be a time and place for human oversight and augmentation of metadata, but reviewing and augmenting machine-created metadata is more efficient than creating it manually. 

The summer will continue to heat up for us, literally and figuratively. To stay up to date on all of our Core happenings, be sure to stay connected on social (links below) and check in on our Knowledge Base, where we publish release notes.