
News consumption in the artificial intelligence era

The complexity and interconnectedness of the global economy mean that organisations today need to be far more attuned to developments around the world than they have been in the past. Risks and opportunities can emerge from the periphery, and while decision makers have never had so much information available at the swipe of a phone screen, identifying which events will affect their business is an increasingly noisy task.

Although the hype surrounding artificial intelligence has gained recent momentum from the latest wave of so-called "generative" AI applications such as ChatGPT, machine learning and techniques such as Natural Language Processing (NLP) have been used for some time across a range of sectors to tackle issues related to information overload. Such approaches aren't without their problems, however, and getting the best results can involve a degree of trial and error.

This post points to recent FT analysis to highlight some of the issues associated with running large content sets through AI, and how the FT can help organisations address them to gain greater clarity on their external environment.

Can we place our trust in ‘the machine’?

In his Big Read piece looking at the potential impacts of a new era of generative AI and machine learning, FT West Coast editor Richard Waters also flags the major challenges facing those looking to build or apply this sort of technology.

What Richard calls 'the reliability problem' refers to the way AI-generated answers can sound very believable yet can't be completely trusted. Because their answers are based on processing troves of data, the outputs that AI algorithms produce are only as good as the quality of their inputs.

Training AI systems can itself be something of a balancing act: a supervised approach, whereby humans train the AI directly, is often less effective at finding the best answers than letting it learn by itself, according to OpenAI, the creators of ChatGPT.

Other proponents of the technology question whether the reliability problem is really worth worrying about: humans have, after all, become used to internet search engines, which can surface both useful and misleading results. In a professional environment, however, the need for accuracy is heightened and there isn't always time to validate the integrity of the information source. This can lead to business decisions being taken without the AI model being questioned at all.

The challenge therefore remains: how best to leverage the power of AI to help organisations extract the insights that matter from the noise? One potential answer could be the careful curation of quality content sets.

The computers may come up with believable-sounding answers, but it’s impossible to completely trust anything they say. They make their best guess based on probabilistic assumptions informed by studying mountains of data, with no real understanding of what they produce.

Richard Waters, West Coast Editor, Financial Times
AI models require high quality data inputs in order to learn

Quality inputs yield better results

The Financial Times is renowned as one of the world’s leading news organisations and regularly tops lists of the most trusted business publications for senior executives. For companies looking to understand the macro environment they’re operating in, the FT is one of the core news sources they need to be across. However, even the most voracious FT reader would struggle to read every single article in order to be sure they weren’t missing anything of importance.

Increasingly, businesses are looking to technology to provide solutions to information overload and to ensure that relevant content is surfaced in a timely manner. Organisations that identify the Financial Times as an important intelligence source are applying machine learning and NLP techniques to extract key insights in a systematic way: asset managers are identifying trending topics, quants are building trading algorithms, and corporates and central banks are monitoring market sentiment, all because FT content is rich, accurate and widely consumed.

Through a datamining licence from FT Integrated Solutions, full-text FT articles and metadata are available in a machine readable format via APIs and an extensive 5+ year archive. Specific rights then enable organisations to generate sentiment scores and counts, as well as data visualisations and other forms of analysis.
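As a rough illustration of the sentiment scores and counts such rights permit, the sketch below scores a handful of articles with a simple word-list approach. Everything here is an illustrative assumption: the field names ("title", "body"), the lexicon and the scoring formula are hypothetical and do not reflect the actual structure of FT content feeds or any particular NLP library.

```python
# Hypothetical sketch: lexicon-based sentiment scoring over machine-readable
# articles. The lexicon and article schema are illustrative assumptions.

POSITIVE = {"growth", "profit", "gain", "surge", "record"}
NEGATIVE = {"loss", "risk", "decline", "crisis", "default"}

def sentiment_score(text: str) -> float:
    """Naive score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

# Illustrative article records, not real FT data
articles = [
    {"title": "Record profit growth", "body": "Earnings surge on record demand."},
    {"title": "Default risk rises", "body": "Analysts warn of decline and crisis."},
]

scores = {a["title"]: sentiment_score(a["title"] + " " + a["body"]) for a in articles}
```

In practice, organisations would replace the toy lexicon with a production sentiment model; the point is simply that full-text access is what makes document-level scoring and aggregation possible.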

FT content as a dataset

Organisations that are developing their own AI and NLP functions, rather than relying on third-party platforms, are doing so to ensure the outputs align with their own priorities and view of the world (e.g. how do we define what constitutes a macro risk to our business?).

The FT dataset contains hundreds of thousands of articles with metadata that has been validated by the FT's editorial newsroom, enabling greater confidence in the tagging than solely automated topic classification can provide. This human element enhances the accuracy of the metadata, enabling articles to be tagged with harder-to-define topics and storylines.

The example below demonstrates how models can be directed to listen to metadata fields for highly specific topics. A storyline such as 'Asia maritime tensions', which the FT's global network of reporters has been covering, can be combined with other tags such as 'scoops', which can suggest a higher propensity to move markets or signal that a certain event has yet to be reported in the mainstream.
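This kind of tag-based routing can be sketched in a few lines. The tag strings below mirror those mentioned in the text, but the record layout (an "id" and a "tags" list) is a hypothetical simplification, not the actual FT metadata schema.

```python
# Hypothetical sketch: flag articles whose editorial metadata matches both a
# watched storyline and a priority signal. Record layout is assumed.
from typing import Iterable

WATCHED_STORYLINES = {"Asia maritime tensions"}
PRIORITY_SIGNALS = {"scoops"}

def is_priority(article: dict) -> bool:
    """True when the article carries a watched storyline AND a priority signal."""
    tags = set(article.get("tags", []))
    return bool(tags & WATCHED_STORYLINES) and bool(tags & PRIORITY_SIGNALS)

def filter_priority(articles: Iterable[dict]) -> list[dict]:
    return [a for a in articles if is_priority(a)]

# Illustrative feed: only "a1" carries both the storyline and the signal
feed = [
    {"id": "a1", "tags": ["Asia maritime tensions", "scoops"]},
    {"id": "a2", "tags": ["Asia maritime tensions"]},
    {"id": "a3", "tags": ["scoops"]},
]
```

Because the tags are editorially validated, a filter like this can run upstream of any heavier NLP, narrowing the stream before more expensive models are applied.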

Direct AI models to listen for highly specific topics
Direct models to listen for highly specific topics and novel content

While interest surrounding the new 'search wars' looks set to fuel further experimentation and investment in AI technology, organisations that are mindful of the need for high-quality inputs will in all likelihood enjoy the best outcomes. FT content is not available in machine-readable form from any other third-party API provider, so get in touch with our team to explore which challenges a datamining licence can help you address, regardless of how advanced your existing AI and NLP capabilities are.

Datamining licences from FT Integrated Solutions enable organisations to integrate coverage into workflow for better discovery of critical information as well as quicker identification of risks and opportunities.

For more information about how you can leverage Financial Times journalism for smarter decision making, please get in touch with our FT Integrated Solutions team.
