‘Care bots’ on the rise to replace human caregivers | Alexandra Mateescu and Virginia Eubanks

If you Google “care bots,” you’ll see an army of robot butlers and nurses, taking vital signs in hospitals, handing out red roses to patients, serving juice to the elderly. For the most part, these are just science fiction fantasies. The care bots that already exist come in a different form.

These care bots look less like robots and more like invisible pieces of code, webcams and algorithms. They can determine who gets screened in the doctor’s office or how many hours of care a person receives through Medicaid. And they are everywhere. Increasingly, human caregivers work through and alongside automated systems that make recommendations, manage and monitor their work, and allocate resources.

Care bots are emerging because the United States has chronically underinvested in care infrastructure, relying heavily on informal family support and an industry built on low-paid workers – largely immigrants and women of color. These workers earn a median annual salary of $25,000, and nearly a quarter of the workforce lives below the federal poverty line. Yet demand for their work is skyrocketing. In the United States, more than 50 million people are over 65, and that number is expected to nearly double by 2060. The question arises: who will take care of them?

There is a growing belief that technology can fill this gap by delivering care quickly and at scale, using artificial intelligence and remote monitoring. Exhausted and understaffed nursing home workers might have sensors and webcams to help them keep tabs on the health and well-being of residents. The growing “AgeTech” industry could help older people age in the comfort of their own homes.

As the Guardian reports today, for example, a company called CarePredict has produced a watch-like device that alerts caregivers if the repetitive motions of eating are not detected when expected, and one of its patents states that the device can infer whether someone is “using the toilet”. Another company has created technology that registers when someone has fallen asleep and whether they have bathed.


What is AI?


Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system that is indistinguishable from a human mind, there are several specialized, but limited, forms of AI that are already part of our daily lives. AI can be used with cameras to identify a person based on their face, to power virtual companions, and to determine if a patient is at high risk for disease.

AI should not be confused with other types of algorithms. The simplest definition of an algorithm is a series of instructions for accomplishing a task. For example, the thermostat in your home has a sensor to read the temperature and instructions to turn the heating on or off in response. That is an algorithm, but it is not artificial intelligence.
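The thermostat makes a useful contrast precisely because its entire behavior can be written down as a few fixed rules. A minimal sketch of such a rule-based algorithm (the function name, temperatures and dead band are invented for illustration; nothing here learns from data):

```python
# A thermostat is an algorithm, not AI: a fixed series of instructions.
# There is no training and no data; its behavior never changes.

def thermostat(current_temp_c, target_temp_c, heater_on, hysteresis=0.5):
    """Return whether the heater should be on, given the current reading."""
    if current_temp_c < target_temp_c - hysteresis:
        return True   # too cold: switch the heater on
    if current_temp_c > target_temp_c + hysteresis:
        return False  # warm enough: switch it off
    return heater_on  # inside the dead band: keep the current state

print(thermostat(18.0, 21.0, heater_on=False))  # → True
```

An AI system, by contrast, would derive its decision rules from data rather than having them written out in advance.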

The deployment of AI today has been made possible by decades of research into topics such as computer vision, which enables computers to perceive and interpret the visual world; natural language processing, allowing them to interpret the language; and machine learning, a way for computers to improve when they encounter new data.

AI allows us to automate tasks, gather information from huge datasets, and complement human expertise. But a rich body of research has also begun to document its pitfalls. For example, automated systems are often trained on vast troves of historical digital data. As numerous high-profile cases show, these datasets often reflect past racial disparities, which AI systems learn and reproduce.

Additionally, some of these systems are difficult for outsiders to interpret due to an intentional lack of transparency or the use of genuinely complex methods.


Some of these healthcare technologies are genuinely valuable. But these tools can also hide human costs.

Automated decision-making and AI can undermine the autonomy of the very people these systems are meant to help. Home cameras, facial recognition systems, wearable motion trackers, and risk prediction models can make elderly and disabled people feel pressured to turn their homes into something resembling a nursing home ward. This undermines the dignity and self-determination at the heart of independent living and community care.

Automated decision-making systems can also reinforce policies that treat the poor, the elderly, the disabled, the immunocompromised and communities of color as disposable. In healthcare, technology is increasingly used to screen patients, direct the attention of nurses and support clinical judgments. But these systems often reproduce – and even exacerbate – biases, because the data they rely on reflects inequalities already entrenched in health care. For example, as Ziad Obermeyer and colleagues reported in Science in 2019, a system used to allocate health care to 200 million people a year in hospitals across America significantly underestimated the medical needs of Black patients.
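The failure that study documented was, roughly, a proxy problem: the algorithm predicted future healthcare costs as a stand-in for medical need, and because less has historically been spent on Black patients at the same level of illness, ranking by predicted cost underrated how sick they were. A toy sketch of that mechanism, with all numbers invented purely for illustration:

```python
# Toy illustration of proxy bias (all numbers invented): ranking
# patients by healthcare cost underrates a group on whom less is
# historically spent at the same level of illness.

# (group, true_need, observed_cost) -- group "b" incurs lower costs
# than group "a" at identical need.
patients = [
    ("a", 10, 10_000),
    ("a", 5, 5_000),
    ("b", 10, 4_800),   # as sick as the first patient, far lower spend
    ("b", 5, 2_400),
]

# Allocate care by the proxy: highest predicted cost first.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# The sickest "b" patient now ranks below a far healthier "a" patient.
print([f"{group}:{need}" for group, need, _ in by_cost])
# → ['a:10', 'a:5', 'b:10', 'b:5']
```

Ranked by true need, the two need-10 patients would come first; ranked by the cost proxy, the disparity in historical spending silently reorders the queue.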

In some states, governments have adopted automated decision-making tools to assess eligibility for Medicaid services, often without much public debate and with little transparency about how decisions are made. For example, an algorithm in Arkansas aimed to distribute more evenly the hours of care allocated to people receiving home and community services. But it drew legal challenges after it dramatically reduced the hours of people who depend on personal care assistants for basic activities of daily living such as washing, eating and using the toilet.

Surveillance in the name of care raises sensitive questions about the privacy and autonomy of those in need of support. Technologies like Electronic Visit Verification (EVV) were introduced to monitor care delivery inside homes using features like GPS location tracking, but they have left disabled and elderly service recipients and their workers feeling as if they are chained to an ankle monitor.

Many efforts to create care bots are driven by a genuine desire to mend cracks in a strained and fragmented system. The devastation wrought by the Covid pandemic has made our need for better care evident, not only in hospitals and clinics, but in our homes, schools and streets. As the director of the National Domestic Workers Alliance, Ai-jen Poo, urged us to recognize, the care industry was a “house of cards on the verge of collapse” long before the pandemic.

The pandemic and decades of grassroots organizing have encouraged the Biden administration to focus on investing in care jobs, sparking a new public conversation about care as critical public infrastructure. The Biden plan proposes to invest $400 billion to provide health and personal care services to seniors in their homes. While the plan usefully places large public investments at the heart of a revitalized care system, it does not grapple with the thornier issues – surveillance, erosion of autonomy and bias – that accompany the government’s inevitable use of care management technologies.

The care bots are already here. But their arrival need not lead to techno-dystopia. Our visions of a caring society must be grounded in justice and fairness, dignity and autonomy, not just efficiency and scale. The most essential aspects of caring – presence, compassion, connection – are not always easy, or even possible, to measure. The rise of care bots risks creating a system in which we only value the parts of care that can be turned into data.

