The problem with AI | naked capitalism

Yves here. Richard Murphy provides a good, comprehensive treatment of some of the inherent limitations of AI, particularly in practice (he focuses on accounting and taxation, but similar arguments apply to medicine and law). The main point, which I made many years ago as data mining greatly reduced the number of entry-level jobs, is that junior scut work like legal research trained new professionals in the nuts and bolts of their job. Eliminating that work meant they would be trained badly. I saw that in the early days of my own career. I was in the last group of Wall Street freshmen who prepared spreadsheets by hand and got data from hard copies of SEC filings and annual reports. I found that my juniors, who downloaded the sometimes flawed but never adjusted data from Compustat, had a very poor understanding of how a company’s finances worked.

By Richard Murphy, part-time Professor of Accounting Practice at Sheffield University Management School, director of the Corporate Accountability Network, member of Finance for the Future LLP, and director of Tax Research LLP. Originally published on Fund the Future

Summary

I believe that although AI is powerful, it cannot replace human judgment and skills in many professions, including teaching, medicine, and accounting.

AI may perform certain tasks well on its own, but it lacks the ability to interpret unspoken cues and to understand complex real-world problems.

Experts need experience and training to provide human solutions, and AI’s limitations make it an unsuitable substitute for deep human interaction and expertise.

Gaby Hinsliff of The Guardian said in a column published yesterday:

The idea of using technology as a kind of magic bullet that allows the country to do more with less is increasingly central to Labour’s plans to revamp Britain’s public services in what Rachel Reeves suggests will be a painfully tight budget. In a series of interventions on schools this week, Keir Starmer promised to “move forward to harness the full potential of AI”, while the science secretary, Peter Kyle, argued that handing over some mundane tasks, such as marking, could free up valuable time for teachers to teach.

She is right: this is the direction in which Labour is heading. The push appears to be coming from the Tony Blair Institute, whose founder has a notorious and long track record of overstating the potential of technology, little of which he seems to understand.

A specific issue she referred to was the use of AI in education. AI enthusiasts think it offers the chance to create a programme of study tailored to each child. As Gaby Hinsliff points out, so far the idea is failing to deliver.

Yes, I am aware that many innovations must fail before they succeed: that is, in essence, how these things work. It would be foolish, as a result, to say that because AI has not solved this problem yet, it never will. But, even as someone who actively embraces AI in my own workflow, I see huge problems with much of what Labour and others are proposing.

The labour market’s immediate reaction to AI appears to be to cut back on recruiting the trainees who would otherwise be in demand now, because employers think AI will reduce the demand for their skills in the future. And yes, you read that right: the assumption is that specialist skills will be replaced by AI in more and more areas. Graduates are being hit hard by this attitude right now.

In accountancy, for example, this is because it is assumed that much less tax expertise will be needed, as AI will be able to answer the difficult questions. Similarly, it is assumed that AI will take over the production of complex accounts, such as the consolidated accounts of corporate groups.

Those who make such assumptions are incredibly stupid. Even if AI can do some parts of these processes, there will be major problems created as a result, the biggest of which is that no one will have the skills left to know if what AI has done is correct.

The way a person learns tax is by doing more of it; by writing about it (usually in the form of advice to a client); and by having their work corrected when someone senior to them says they have not got it right. There is a deeply iterative process to human learning.

Employers now seem to think they can do away with most of this. They can do so because those making these decisions went through that training themselves and, as a result, acquired the skills to understand their subject. They know, in other words, what AI needs to do. But when the few people who are now being recruited reach the same level of seniority, they will not know what AI is doing. They will have to assume it is right, because they will lack the skills to know whether or not that is true.

The logic of the AI proponents is, in that case, the same as that used by people like Wes Streeting when they advocate the use of physician associates, who are, in effect, partly trained doctors now working in the NHS, undertaking tasks without anything like the depth of knowledge required to do them. They are trained to answer the questions they are given. The problem is that they may be faced with the wrong question, in which case they will go astray and harm can result.

The same is true of AI. It answers the question it is given. The problem is that the real question is often not the one that gets asked – and rarely does a client ask the right question when it comes to tax. The real professional skill lies in, first, working out what the client really wants; second, deciding whether what they want is wise; and third, reframing the question into one that can actually address their needs.

The difficulty with doing that is that it is a matter of human interaction, but it also requires that all the technical aspects of the issues under consideration (which often involve a lot of tax, as well as accounting and, quite often, law) are understood well enough that the necessary reframing can be done, all of which requires considerable judgement.

Do I think AI is anywhere near being able to do that job yet? No, I do not.

Am I sure AI will ever be able to do that job? I doubt that too, just as I doubt its ability to deal with many medical and other technical problems.

Why is that? Because answering such questions requires the ability to read the client – including all their non-verbal cues, and more. The technical aspects are only a small part of the work, but without knowing them, the expert – in any field, and I include skills of all kinds in that category – has no chance of framing the question correctly, or of knowing whether the answer they give is right or not.

In other words, if a young professional is denied the opportunity to make every mistake in the book, as will happen if AI replaces them, then the chance that they will ever be able to solve the real-world problems posed by real-world people is very low, not least because almost no one who seeks help from a professional is actually looking for a technical answer to a question.

They want the lights to work.

They want the pain to end.

They want to pay the right amount of tax without the risk of error.

They want a divorce with the least stress possible.

The professional’s job is not simply to tell them how to do these things. It is to bring human solutions to human problems. And they cannot do that if they do not understand both the person in front of them and the technical problem. Use AI to do the technical part, and what is left is a warm but empty and meaningless smile that offers no comfort to anyone.

I’m not saying we shouldn’t use AI. I know we will. But anyone who thinks it can replace large parts of human interaction is sorely mistaken: I don’t believe it can, simply because people ask completely nonsensical questions that require someone to figure out what they mean.

And that is why I think Gaby Hinsliff is right to say that AI can only have a limited role in the classroom, when she concludes:

It is true that AI, managed well, has great potential for good. But as Starmer himself goes on to say, there are no easy answers in politics – even, it turns out, if you ask ChatGPT.
