Secondary Evaluation of Unintended Effects

In my previous post, I talked about how we should think about the unintended consequences of our actions, and how we should expect those consequences to play out. If you haven’t read that post, you may want to read it first for background. With that said, let’s set the scene for the issue at hand.

Let’s say I find myself in the presence of someone who has a medical problem, but with whom I cannot communicate clearly (perhaps he is delirious from illness, or he speaks a different language). I can see evidence of various symptoms – the patient is obviously distressed and in pain, sweating, running a high fever, and showing many other indications of trouble. As it happens, a mad philosopher has locked the two of us together in a room that happens to be the largest medical warehouse in the world: every medicine and medical device imaginable is available to me. So here’s the question – should I use the resources at my disposal to try to treat this person?

The case in favor: clearly something is wrong. This person is sick, injured, and suffering. If I can help them, I should – it would be wrong of me to simply ignore the problem when I could do something to help.

The case against: despite the vast resources at my disposal, I’m not a doctor. I don’t have the knowledge to intervene properly. I can see what the symptoms are – the fever and vomiting are evident, the patient’s heart is racing, etc. – but I have no reliable way to determine what is causing them, and no way of knowing which, if any, of the medications available to me would help. Nor do I have any insight into this person’s medical history. Maybe they are already on some medication that would interact badly with something I might give them. I simply have no way of knowing what the results of my efforts would be.

Now, one might suggest at this point that since I have no way of knowing what the results of my intervention will be, I also have no way of knowing whether the result will be better or worse. Technically, that’s true – I can’t know that. But in this case, do I have good reason to think that my efforts are equally likely to do harm or good?

It seems pretty obvious that in this case I would do more harm than good if I intervened. Michael Huemer explores the same thought, observing that for most of human history, doctors did more harm than good, because for most of human history we knew almost nothing about how the body works. Huemer recounts how George Washington was given ineffective treatments by the doctors of his day – treatments meant to help him that probably hastened his death. As he puts it, “Washington’s doctors were respected experts, and they used standard medical procedures. Why did they fail to help him? Simply put, they couldn’t help because they didn’t know what they were doing. The human body is a very complex system. Fixing it often requires a detailed and precise understanding of that system and of the nature of the disorder afflicting it – information that no one had at the time. Without such understanding, almost any significant intervention in the body will be dangerous.” That is, when you make a medical intervention from a state of ignorance, it is technically possible that the unknown results of your intervention will be positive, but it is far more likely that the result will be bad.

This is because there are more ways to harm the human body than to heal it. In the same way, and for the same reasons, there are many more ways to increase disorder in a complex system than to increase order, and more ways to disrupt the balance of an ecosystem than to stabilize it. This is why most new ideas are bad. When you intervene in a complex dynamic system that you don’t understand, the expected valence of the unintended consequences is negative rather than neutral or positive.

But, you might say, not everyone shares my ignorance of medicine. What about a trained medical professional with many years of experience? Wouldn’t medical intervention be a good idea if they were the one to intervene?

That changes things. Obviously the intervention of such a person would be justified. Of course, this does not require that the doctor have complete knowledge or that their efforts be guaranteed to succeed – that would be an absurdly high standard. Doctors can still make mistakes, and sometimes complications occur that they could not reasonably have anticipated. The standard here is not perfection. What makes the difference is that the doctor has good reason to believe that their intervention is more likely than not to help the patient recover. They won’t get it right in every case, but they will get it right more often than not.

However, at the risk of testing the reader’s patience, there is another layer I can add to this thought experiment. Although I am not a medical professional, I do know a few things about basic first aid. Nothing fancy, but things I can use if needed. I can, for example, bandage a wound to stop bleeding, or clear a blocked airway – simple things like that. Those are interventions I can reasonably undertake – but it wouldn’t follow that I could go beyond them and inject the patient with a massive dose of warfarin, melting their skin off, on the grounds that hey, since I don’t know whether the result of using this drug will be bad or good, everything is permitted, so there’s no reason not to try!

The important question here is whether experts, politicians, and policy makers are like competent medical professionals treating a patient whose condition and medical history they understand well, or whether they are in a situation more like mine, locked in a warehouse with my imaginary patient – or like that of George Washington’s doctors.

Michael Huemer argues that policy makers are “in the position of medieval doctors. They have simplistic, empirically naive theories about how society functions and about the causes of social problems, from which they derive various remedies – almost all of which prove ineffective or harmful. Society is a complex system whose repair, if possible at all, would require a precise and detailed understanding of a kind that no one today possesses.” If anything, I think this is too generous. I would say that policy makers are more like me in a warehouse with the patient than like medieval doctors. That is, there are really only a few basic things that are understood well enough to be implemented – things at the level of common law, like the protection of property rights, a stable legal system, the prevention of violent crime, and so on.

These kinds of basic, general rules are the equivalent of my ability to provide basic first aid. But technology policy advocates see themselves as more like skilled medical professionals with a detailed understanding of their patient, able to perform complex interventions in a complex system in a way that reliably produces beneficial results.

That conceit isn’t new, of course – that level of overconfidence has always been with us. And that very thought is part of what terrified Edmund Burke about the ideas that fueled the French Revolution. Burke, too, used the analogy of a sick and wounded person, and thought that our way of dealing with social problems should reflect how we would approach “the wounds of a father, with pious awe and trembling solicitude.” And he saw those emboldened by the hubris of their supposed knowledge as being like me rushing at the patient with a syringe full of warfarin, describing such people as “the children of their country, who are prompt rashly to hack that aged parent in pieces, and put him into the kettle of magicians, in hopes that by their poisonous weeds and wild incantations they may regenerate the paternal constitution, and renovate their father’s life.”
