
Natural Intelligence: The Power of Swarm Learning

By Tom Alford, Deputy Editor, TMI

Swarm Learning is a new decentralised AI technology co-invented by Hewlett Packard Labs. Eng Lim Goh, PhD, Senior Vice President, Data & AI, Hewlett Packard Enterprise (HPE), explains how it works.

Diverse industries, from healthcare, agriculture and retail to public transportation, aerospace and finance, create vast quantities of data every hour of every day. The full power and value of that data can be unleashed only through deep and timely analysis. But existing approaches to understanding data face challenges that have, until now, limited what is achievable with this vast resource.

Most deep learning techniques rely on aggregating huge pools of data and applying ML to detect patterns. Whole industries could benefit from that kind of global knowledge exchange. The issue – and it is a major stumbling block – is that the dissemination of data increasingly faces technical and socioeconomic challenges that make global sharing difficult or impossible.


Rules around data sovereignty, security and privacy – including GDPR – can create immovable barriers to transferring and aggregating the large volume of data required to train ML models. What’s more, the cost of establishing and maintaining a central infrastructure to host and process the aggregated data can be prohibitive, possibly to the point where no one wants to take it on. Factor in the likely carbon footprint of any serious data aggregator, and the whole idea becomes untenable.

But Goh believes that these issues can all be overcome using the nature-inspired notion of ‘swarm learning’. This is modelled on the kind of individual and independent behaviours seen, for example, in ant or bee colonies, where countless local exchanges combine to yield a ‘global’ or swarm intelligence.

Multiple use cases

Formally announced in April 2022, HPE says its new model is essentially a decentralised AI-based approach that unites the latest edge computing paradigm – where data is processed closer to where it is generated to enable greater processing speeds and volumes – with blockchain-based peer-to-peer networking and co-ordination. It’s a model that, says Goh, enables enterprises to harness the power of distributed data “while protecting data privacy and security”.

HPE has already tested its model with hospitals and healthcare researchers around the world. In practice, Goh says it has enabled participants to develop disease classifiers for leukaemia, Covid-19 and tuberculosis. In its peer-reviewed paper in Nature, the firm confirms that the classifiers “outperformed those developed at individual medical facilities”.

It’s clear, says Goh, that HPE’s Swarm Learning technology overcomes an issue that healthcare researchers around the world face. “Until now they have not been allowed to share patient data, even though by doing so they could share the learnings of different hospitals in identifying a range of diseases, because each sees different patients and discovers different health indicators.”

HPE’s Swarm Learning has since been extended to enable competing banks and credit card companies, which otherwise have no appetite for sharing customer data, to freely exchange the learnings from their data in the fight against fraud. By accelerating analytics and preserving privacy in this way, the prospect of real-time fraud detection becomes a reality, comments Goh.

HPE has also enabled manufacturing sites to benefit from what it calls predictive maintenance. This, Goh notes, allows technicians “to gain insight into equipment repairing needs and address them before they fail and cause unwanted downtime”. In this context, the swarm approach leverages the collective learnings from sensor data across multiple manufacturing sites.

Swarm basics

Where data cannot normally be shared, HPE’s Swarm Learning offers a practicable way to learn globally and unite research and understanding because, as Goh states, “there is no sharing of data at this level”.

Every participant – be it hospital, bank, credit card company, manufacturer, or member of any other group with a mutual interest – is provided with an ML model, which it applies to its own private data. From this analysis it will begin to detect localised patterns. Periodically, a central co-ordinator compiles these pattern learnings – not the underlying data – using APIs to automate the collection process.

The co-ordinator analyses the submitted local pattern-recognition data and creates a global average, which it returns to each participant for its own local consumption. Each participant can then augment this new set of learnings with fresh private data before the next round of returns is collected and processed by the central co-ordinator, and the global patterns are once again returned for local consumption. And so the cycle continues.
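In ML terms, the cycle Goh describes has the same shape as parameter averaging in decentralised learning: what travels between sites is model learnings, never raw records. The snippet below is a minimal, purely illustrative sketch of that loop in Python – the participant data, the simple training step and the learning rate are all invented for the example, and it does not use HPE’s actual Swarm Learning software.

```python
import numpy as np

# Illustrative sketch of a swarm-style training loop: each participant
# trains on its own private data, and only the resulting model weights
# (the "learnings") are averaged by the co-ordinator.

def local_training_step(weights, private_data):
    # Stand-in for a real local ML training pass on private data.
    gradient = np.mean(private_data, axis=0) - weights
    return weights + 0.1 * gradient

def coordinator_average(all_weights):
    # The co-ordinator only ever sees parameters, never the raw data.
    return np.mean(all_weights, axis=0)

rng = np.random.default_rng(0)
# Three participants, each holding data that never leaves its site.
private_datasets = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]
weights = [np.zeros(4) for _ in private_datasets]

for round_number in range(5):
    # 1. Local learning on private data.
    weights = [local_training_step(w, d)
               for w, d in zip(weights, private_datasets)]
    # 2. The co-ordinator averages the submitted learnings...
    global_weights = coordinator_average(weights)
    # 3. ...and returns the global average to every participant.
    weights = [global_weights.copy() for _ in weights]
```

Only the averaged parameters ever leave a site, which is the property that keeps the underlying data private.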

However, admits Goh, there was still one issue with this approach, “and that was who takes the role of the central co-ordinator”. While the responsibility of collecting private data is removed, the aggregation of the pattern-recognition learnings remains key to the success of the model. “The solution we found is to take the decentralisation capability one step further, replacing the central co-ordinator with a private permissioned blockchain.”

Now, with no central co-ordinator, Goh explains that each participant, in their different countries and companies, can still execute ML across its own data locally. Once the participants are ready to share their learnings, a private permissioned blockchain uses an embedded smart contract to assign the central co-ordinator role to a single participant in the group.

For that allocated cycle only, the appointed central co-ordinator is required to average every participant’s input and return that data to them. The participants continue to learn in their own private environments, and in the next round of sharing, the smart contract assigns another participant to the role of central co-ordinator and the cycle continues.

“For each swarm, there needs to be agreement from the outset on how the smart contract in the blockchain assigns the next aggregator of the learning, but blockchain technology is fully able to remove that final hurdle, doing so, for example, either randomly or by round-robin,” comments Goh. “With no one participant more senior than another, the process enables all to benefit from the equitable sharing of global learning.”
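By way of illustration only, the sketch below shows what such a rotation could look like once the selection rule has been agreed up front, as Goh describes. The participant names are invented, and the blockchain and smart-contract machinery that would actually enforce the choice is deliberately left out – only the two selection schemes mentioned, round-robin and random, are shown.

```python
import random

# Illustrative stand-in for the smart contract's selection logic:
# the aggregator role rotates among participants each sharing round.

participants = ["hospital_a", "hospital_b", "bank_c", "manufacturer_d"]

def coordinator_round_robin(round_number):
    # Each participant takes the aggregator role in turn.
    return participants[round_number % len(participants)]

def coordinator_random(round_number, shared_seed=42):
    # A seed agreed by all participants keeps the "random" choice
    # reproducible, so everyone can verify who was selected.
    return random.Random(shared_seed + round_number).choice(participants)

for round_number in range(4):
    print(round_number,
          coordinator_round_robin(round_number),
          coordinator_random(round_number))
```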

The big question now for TMI’s readership is how this technology could be best used to distil the understanding of global events and responses, and create a new source of intelligence that could optimise treasury practice without sacrificing competitive advantage.

For a detailed appraisal of this approach, read HPE’s technical whitepaper, Swarm Learning: turn your distributed data into competitive edge.