Predecessor: IBM Watson Health
Founded: June 30, 2022
Headquarters: Ann Arbor, Michigan
Merative L.P., formerly IBM Watson Health, is an American medical technology company that provides products and services to help clients facilitate medical research, clinical research, real-world evidence, and healthcare services through the use of artificial intelligence, data analytics, cloud computing, and other advanced information technology. Merative is owned by Francisco Partners, an American private equity firm headquartered in San Francisco, California. In 2022, IBM divested its Watson Health division, spinning it off as Merative. As of 2023, it remains a standalone company.
Thomson Healthcare was a division of Thomson Corporation until 2008, when, following Thomson's merger with Reuters, it became the healthcare unit of Thomson Reuters. On April 23, 2012, Thomson Reuters agreed to sell it to Veritas Capital for US$1.25 billion. On June 6, 2012, the sale was finalized and the new company, Truven Health Analytics, became an independent organization solely focused on healthcare.
IBM Corporation acquired Truven Health Analytics on February 18, 2016, and merged it with IBM's Watson Health unit. Truven Health Analytics provided healthcare data and analytics services. It provided information, analytic tools, benchmarks, research, and services to the healthcare industry, including hospitals, government agencies, employers, health plans, clinicians, pharmaceutical, biotech and medical device companies. Truven is a portmanteau of the words "trusted" and "proven".
In January 2022, IBM announced the sale of part of its Watson Health assets, including Truven, to Francisco Partners for a reported $1 billion. On June 30, 2022, Francisco Partners announced that it had completed the acquisition of Watson Health and launched a healthcare data company named Merative.
Watson's natural language, hypothesis generation, and evidence-based learning capabilities are being investigated to see how Watson may contribute to clinical decision support systems and to the growth of artificial intelligence in healthcare for use by medical professionals. To aid physicians in the treatment of their patients, once a physician has posed a query to the system describing symptoms and other related factors, Watson first parses the input to identify the most important pieces of information; then mines patient data to find facts relevant to the patient's medical and hereditary history; then examines available data sources to form and test hypotheses; and finally provides a list of individualized, confidence-scored recommendations. The sources of data that Watson uses for analysis can include treatment guidelines, electronic medical record data, notes from healthcare providers, research materials, clinical studies, journal articles, and patient information. Despite being developed and marketed as a "diagnosis and treatment advisor", Watson has never actually been involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed.
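The four-step flow described above (parse the query, mine for relevant facts, test hypotheses against evidence, return confidence-scored recommendations) can be sketched as a simple ranking pipeline. This is an illustrative toy, not IBM's implementation; the function names, the term-overlap scoring, and the knowledge base below are all assumptions made for the sketch.

```python
# Illustrative sketch of a confidence-scored recommendation pipeline.
# The structure mirrors the four steps described in the text; the scoring
# scheme (term overlap) is a stand-in for Watson's actual evidence analysis.

def parse_query(query: str) -> set:
    # Step 1: extract the most important terms from the physician's query.
    stopwords = {"the", "a", "with", "and", "of", "has"}
    return {w.strip(".,").lower() for w in query.split()} - stopwords

def score_hypothesis(terms: set, evidence: set) -> float:
    # Steps 2-3: test a candidate treatment against available evidence,
    # here reduced to simple term overlap.
    return len(terms & evidence) / max(len(terms), 1)

def recommend(query: str, knowledge: dict) -> list:
    # Step 4: return treatment options ranked by confidence score.
    terms = parse_query(query)
    ranked = [(tx, score_hypothesis(terms, ev)) for tx, ev in knowledge.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical knowledge base mapping treatments to supporting evidence terms.
knowledge = {
    "treatment A": {"cough", "fever", "fatigue"},
    "treatment B": {"rash", "fever"},
}
print(recommend("patient has fever and cough", knowledge))
```

In a real system each step would draw on the data sources listed above (guidelines, medical records, journal articles) rather than a hand-built dictionary.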
In February 2011, it was announced that IBM would be partnering with Nuance Communications on a research project to develop a commercial product over the next 18 to 24 months, designed to exploit Watson's clinical decision support capabilities. Physicians at Columbia University would help identify critical issues in the practice of medicine where the system's technology might contribute, and physicians at the University of Maryland would work to identify the best way for a technology like Watson to interact with medical practitioners to provide the maximum assistance.
In September 2011, IBM and WellPoint (now Anthem) announced a partnership to utilize Watson's data crunching capability to help suggest treatment options to physicians. Then, in February 2013, IBM and WellPoint gave Watson its first commercial application, for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center.
IBM announced a partnership with Cleveland Clinic in October 2012. The company has sent Watson to the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, where it will increase its health expertise and assist medical professionals in treating patients. The medical facility will utilize Watson's ability to store and process large quantities of information to help speed up and increase the accuracy of the treatment process. "Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine", said C. Martin Harris, MD, chief information officer of Cleveland Clinic.
In 2013, IBM and MD Anderson Cancer Center began a pilot program to further the center's "mission to eradicate cancer". However, after spending $62 million, the project did not meet its goals and was stopped.
On February 8, 2013, IBM announced that oncologists at the Maine Center for Cancer Medicine and Westmed Medical Group in New York had started to test the Watson supercomputer system in an effort to recommend treatment for lung cancer.
On July 29, 2016, IBM and Manipal Hospitals (a leading hospital chain in India) announced the launch of IBM Watson for Oncology, for cancer patients. This product provides information and insights to physicians and cancer patients to help them identify personalized, evidence-based cancer care options. Manipal Hospitals is the second hospital in the world to adopt this technology and first in the world to offer it to patients online as an expert second opinion through their website. Manipal discontinued this contract in December 2018.
On January 7, 2017, IBM and Fukoku Mutual Life Insurance entered into a contract under which IBM Watson Explorer AI would analyze compensation payouts. The deal resulted in the loss of 34 jobs; the company said it would speed up compensation payout analysis by analyzing claims and medical records, increase productivity by 30%, and save ¥140m in running costs.
IBM Watson is said to carry the knowledge base of 1,000 cancer specialists, which is expected to bring a revolution to the field of healthcare. IBM is regarded as a disruptive innovation, although the application of AI in oncology is still in its nascent stage.
Several startups in the healthcare space have been effectively using seven business model archetypes to take IBM Watson-based solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and on the value-capturing mechanisms (e.g. providing information or connecting stakeholders).
In 2019, Eliza Strickland called "the Watson Health story [...] a cautionary tale of hubris and hype" and provided a "representative sample of projects" with their status. A 2021 post from the Association for Computing Machinery (ACM), titled "What Happened To Watson Health?", described the portfolio management challenges of Watson Health, given the number of acquisitions involved in the division's creation in 2015, and its near-total emphasis on the "Blue Washing" process over the needs of the acquired customer bases.
On January 21, 2022, IBM announced that it would sell Watson Health to the private equity firm of Francisco Partners.
Mergers between large health companies make greater volumes of health data accessible, and greater health data availability may in turn enable wider implementation of AI algorithms.
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems. As the amount of data increases, AI decision support systems become more efficient. Numerous companies are exploring the incorporation of big data in the healthcare industry.
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to be explored using AI technology to enhance healthcare.
Some other large companies that have contributed to AI algorithms for use in healthcare include:
Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
Google's DeepMind platform is being used by the UK National Health Service (NHS) to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.
Intel's venture capital arm (Intel Capital) recently invested in startup Lumiata, which uses AI to identify at-risk patients and develop care options.
Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data. Specifically, AI is the ability for computer algorithms to approximate conclusions without direct human input.
What distinguishes AI technology from traditional technologies in health care is the ability to gain information, process it, and give a well-defined output to the end-user. AI does this through machine learning algorithms, which can recognize patterns in behavior and create their own logic. To reduce the margin of error, AI algorithms need to be tested repeatedly. AI algorithms behave differently from humans in two ways: (1) algorithms are literal: given a goal, an algorithm cannot adjust itself and understands only what it has been told explicitly; (2) algorithms are black boxes: they can make extremely precise predictions, but cannot explain the cause or the reasoning behind them.
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center, and the National Health Service have developed AI algorithms for their departments. Large technology companies such as IBM and Google, and startups such as Welltok and Ayasdi, have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI solutions to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. Companies are developing predictive analytics solutions that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels.
The following medical fields are of interest in artificial intelligence research:
The ability to interpret imaging results with radiology may aid clinicians in detecting a minute change in an image that a clinician might accidentally miss. A Stanford study created an algorithm that could detect pneumonia, in the specific setting and patients involved, with a better average F1 score (a statistical metric combining precision and recall) than the radiologists involved in that trial. The Radiological Society of North America has included presentations on AI in imaging at its annual meeting. The emergence of AI technology in radiology is perceived as a threat by some specialists, as the technology can outperform specialists on certain statistical metrics in isolated cases.
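The F1 score mentioned above is the harmonic mean of precision and recall, and can be computed directly from a model's error counts. The case counts in the example are invented for illustration, not taken from the Stanford study.

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 80 pneumonia cases found, 20 false alarms, 10 missed.
# Precision = 80/100 = 0.80, recall = 80/90 ≈ 0.89, so F1 ≈ 0.842.
print(round(f1_score(80, 20, 10), 3))
```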
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology reported that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN.
AI has been used in many ways to diagnose diseases efficiently and accurately. Two of the most prominent are diabetes and cardiovascular disease (CVD), both among the top ten causes of death worldwide, and both the basis for much of the research and testing aimed at achieving accurate diagnoses. Because of the high mortality rates associated with these diseases, there have been efforts to integrate various methods to improve diagnostic accuracy.
An article by Jiang et al. (2017) demonstrated that multiple types of AI techniques have been used for a variety of diseases, including support vector machines, neural networks, and decision trees. Each of these techniques is described as having a “training goal” so that “classifications agree with the outcomes as much as possible…”.
Two techniques commonly used in the classification of these diseases are artificial neural networks (ANN) and Bayesian networks (BN). A review of multiple papers published between 2008 and 2017 compared the two and concluded that “the early classification of these diseases can be achieved by developing machine learning models such as Artificial Neural Network and Bayesian Network.” Alic et al. (2017) further concluded that ANN was the better of the two, classifying diabetes and CVD more accurately with mean accuracies in “both cases (87.29 for diabetes and 89.38 for CVD)”.
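A minimal illustration of the “training goal” idea: a single-neuron classifier is adjusted until its classifications agree with the labeled outcomes. The features and data below are invented toys for the sketch; they are not the models or datasets from the cited studies.

```python
# Toy single-neuron (perceptron) classifier: weights are nudged whenever a
# prediction disagrees with the outcome, driving classification error to zero.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = y - pred  # training goal: make classifications agree with outcomes
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def classify(x, weights, bias):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Hypothetical features: (normalized glucose, normalized BMI); label 1 = at risk.
samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
print([classify(x, w, b) for x in samples])
```

The ANNs in the studies above stack many such units into layers, but the principle of fitting parameters to agree with known outcomes is the same.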
The rise of telemedicine has brought with it new possible AI applications. Monitoring patients using AI may allow information about possible disease activity to be communicated to physicians. A wearable device may allow constant monitoring of a patient, along with the ability to notice changes that may be less distinguishable by humans.
Electronic health records are crucial to the digitalization and information spread of the healthcare industry. However, logging all of this data comes with its own problems, such as cognitive overload and burnout for users. EHR developers are now automating much of the process and even starting to use natural language processing (NLP) tools to improve it. One study conducted by the Centerstone Research Institute found that predictive modeling of EHR data achieved 70–72% accuracy in predicting individualized treatment response at baseline, suggesting that an AI tool scanning EHR data could predict a patient's treatment response with reasonable accuracy.
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken. To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were. Researchers continue to use this corpus to standardize the measure of the effectiveness of their algorithms.
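As a toy illustration of the extraction task, the sketch below pulls an interacting drug pair and the interaction type out of a sentence using a single hand-written pattern. Real systems of the kind evaluated in the DDIExtraction Challenge learn such patterns from annotated corpora with machine learning; this regex, and the example sentence, are only illustrative stand-ins showing the task's input and output.

```python
import re

# Toy extractor for sentences of the form
# "<drug> increases/decreases/inhibits/potentiates the <effect> of <drug>".
PATTERN = re.compile(
    r"(?P<drug1>\w+)\s+(?P<relation>increases|decreases|inhibits|potentiates)"
    r"\s+the\s+(?P<effect>[\w\s]+?)\s+of\s+(?P<drug2>\w+)",
    re.IGNORECASE,
)

def extract_interaction(sentence: str):
    # Returns (drug1, drug2, relation) if the sentence matches, else None.
    m = PATTERN.search(sentence)
    if not m:
        return None
    return (m.group("drug1").lower(), m.group("drug2").lower(), m.group("relation").lower())

print(extract_interaction("Fluconazole inhibits the metabolism of warfarin."))
```

A learned model generalizes where a fixed pattern like this one fails, which is exactly what the challenge corpus is designed to measure.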
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and adverse event reports. Reporting systems such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization’s (WHO) VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.