As the IABM reports, networked media operations open up access to resources and processing power that were not available with fixed connections. This brings enormous opportunities to the broadcasting industry, which is already turning innovative technologies to its advantage. One of these opportunities is the introduction of Artificial Intelligence (AI) into day-to-day workflows to reduce production time and increase the quality of information. While working alongside intelligent machines may not appeal to everyone, delegating tasks to them can help free up time in hectic schedules. Our fifth techtacle is in charge of empowering the newsrooms of the future by providing integrations with AI.

“Robo-journalism” has frequently made headlines portraying the replacement of human attributes such as intuition and creativity in journalistic practice. Yet AI and machine learning can supplement rather than replace the work of journalists. The news production cycle contains repetitive and tedious tasks that consume a significant share of a creative’s time, and these are exactly the activities that can be handed off to software. Employing AI-powered tools for daily tasks thus makes it feasible to increase efficiency in newsrooms.

How can AI be used in the newsroom?

There are plenty of scenarios where AI-driven tools can assist the news-making process. Take transcripts, for instance. To find content useful for production, journalists rely on descriptions and tags assigned to videos according to the people, topics, or concepts they cover. Yet a video’s metadata is often limited, and these labels do not capture all the information stored in a clip. Production workflows therefore often rely on transcripts, which are traditionally produced by hand: journalists play back the video and type what they hear. Because journalists may need to scour long hours of video to find a certain phrase, transcripts make searching far easier, as they can be scanned with keywords. Transcribing by hand is time-intensive, and it is one of the scenarios where AI-driven tools can make a difference. Octopus integrates with AI providers to offer a speech-to-text feature that saves the time and resources otherwise spent transcribing interviews and other footage. This integration opens up journalists’ schedules for higher-value work, since the research phase is accelerated: they can search and visually audit tracks quickly and easily, even before watching them. Transcripts are also useful for generating written content faster, speeding up information gathering, and streamlining captioning and subtitling. Likewise, applying speech-to-text to breaking news makes it easier to capture the required information from footage arriving in real time.
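To illustrate why timestamped transcripts speed up research, here is a minimal sketch (not Octopus’ actual API; the transcript structure and helper functions are hypothetical) of how a keyword search over a machine-generated transcript can jump straight to the relevant timecode:

```python
# Sketch: searching a timestamped transcript for a phrase.
# The segment format below is an assumption; speech-to-text providers
# typically return similar (start, end, text) segments.

def find_phrase(transcript, phrase):
    """Return start times (seconds) of segments containing the phrase."""
    phrase = phrase.lower()
    return [seg["start"] for seg in transcript if phrase in seg["text"].lower()]

def to_timecode(seconds):
    """Format seconds as HH:MM:SS for a player or edit decision list."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

transcript = [
    {"start": 0.0, "end": 4.2, "text": "Welcome to the evening briefing."},
    {"start": 4.2, "end": 9.8, "text": "Delegates discussed climate finance today."},
    {"start": 9.8, "end": 15.1, "text": "More on climate finance after the break."},
]

hits = find_phrase(transcript, "climate finance")
print([to_timecode(t) for t in hits])  # two matching segments
```

Instead of replaying hours of footage, the journalist types a phrase and lands on the exact seconds where it was spoken.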

A similar situation occurs when browsing for people or objects inside clips. To build a story, reporters need to sift through dozens of hours of footage to find the person or object they are covering. This is even more true when a news network is covering a large event, take, for example, the recent COP26. For the event’s coverage, journalists may have been tasked with following the attendance and participation of certain delegates. This would entail watching many videos featuring the delegate until something newsworthy is picked out. Octopus’ integration with AI technology introduces facial recognition to help journalists identify the presence of a person in a video, even before they have had a chance to watch it. Using AI, a reporter can easily identify where and when the person appears in a clip and even use this information for on-screen graphics with the actual timecode of the appearance. The same applies to objects: the object recognition feature facilitates the search for videos and images containing particular elements, helping editors and producers track down clips that show an object, whether a public place, a building, or even a fire. Together, these features eliminate the tedious and expensive step of manually associating metadata with multimedia files (a job performed by metadata loggers), as AI-powered tools classify media quickly and make video content easy to find.
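The timecode idea above can be sketched in a few lines. The snippet below is a hypothetical illustration (the detection list and grouping logic are assumptions, not the actual integration): given per-frame recognition labels from an AI provider, it groups them into the appearance ranges a graphics operator could drop on screen:

```python
# Sketch: turning per-frame recognition labels into appearance ranges.
# The (timestamp, label) detections are assumed sample data; a real
# recognition service would supply such pairs for each clip.

def appearance_ranges(detections, label, gap=2.0):
    """Group timestamps where `label` was detected into (first, last)
    ranges, merging detections closer than `gap` seconds apart."""
    times = sorted(t for t, lbl in detections if lbl == label)
    ranges = []
    for t in times:
        if ranges and t - ranges[-1][1] <= gap:
            ranges[-1][1] = t          # extend the current range
        else:
            ranges.append([t, t])      # start a new range
    return [tuple(r) for r in ranges]

detections = [
    (12.0, "delegate_a"), (12.5, "podium"), (13.0, "delegate_a"),
    (45.0, "delegate_a"), (46.0, "delegate_a"), (80.0, "fire"),
]

print(appearance_ranges(detections, "delegate_a"))
# the delegate appears in two separate ranges of the clip
```

The `gap` parameter is a design choice: without it, every frame-level detection would become its own "appearance," which is useless for on-screen graphics.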

A 24/7 news cycle requires vast resources to ensure the delivery of the latest news. Clearly, artificial intelligence tools can be reliable collaborators for speedier content creation. Yet matching the pace of information takes more than that. Check out how our next techtacle offers integrations for playout and studio automation to step up to the challenge of breakneck reporting.

Read the next article!

Playout and Studio Automation
