AI and Automation

We are quickly entering a world where automation and AI are becoming larger parts of our lives. Automation is essentially using a machine to replace the work of a human. Automation already exists in manufacturing and many other production processes, where it increases output, but if we automate too many jobs, there could be severe job loss. Self-driving cars are another example of automation, and as we can see, they're not perfect yet.

Technology is also reaching into biology and medicine. Many people would like to make sure their babies are born without genetic diseases, to choose their eye color, to improve their intelligence, or even to do something as far out as adding genes for wings. On a more serious note, surgeries can already be performed by machines controlled by a surgeon nearby; the machine makes smaller incisions and lets the patient recover faster. Some experts also argue that super soldiers and AI assistants for medics would make us feel safer and give our country a stronger military. At the same time, police could use AI to read facial expressions and judge whether someone is hostile, potentially avoiding unnecessary shootouts.

As great as these benefits sound, they come with real risks, especially if we rush into this technology. Some of the things automation makes possible should not be undertaken lightly. Unlike most of the automating humans have done before, changing genes in humans, or in mosquitos to get rid of malaria, sounds nice, but if we mess up, the mistake is stuck in the gene pool. If mosquitos were changed in a way that affects the animals that eat them, that could damage the whole ecosystem. If we changed a baby's genes intending to make the baby more intelligent and got it wrong, we couldn't take it back; that child would pass the alteration on to their own children, and it would stay in the gene pool. Super soldiers sound like something out of a superhero movie, but if we have them, won't everyone else? That would change the way war is fought, and the weight of war that comes with people dying on your own battlefield would be lost. Some of the top experts are also concerned that we could lose our grip on this technology as it advances further and further. Even though automation has a lot of positives, and eventually we could work more of it out, we should slow down and take this change carefully. Once we decide to introduce it, there is almost no way of going back.

There are a lot of benefits to automating labor, but there are also big concerns about job losses. An MIT article describes just how many jobs could be lost to automation: "for every robot added per 1,000 workers in the U.S., wages decline by 0.42% and the employment-to-population ratio goes down by 0.2 percentage points — to date, this means the loss of about 400,000 jobs." That is a lot of jobs, and it suggests that overall unemployment would rise, which is a problem in itself. It also suggests that wages would fall even for workers whose jobs are not taken by robots. The jobs affected are remarkably diverse too, ranging from truck drivers to doctors, which makes the breadth of the impact all the more concerning. On the other hand, an AP News article describes a benefit that automating home construction could bring: "Printing houses rather than nailing them together could save huge quantities of scrap wood, metal and other discarded construction materials that are dumped into landfills every year." This could make housing cheaper and more available to people who are homeless, in addition to being better for the environment by producing less waste per house. The same example, however, would eliminate many jobs: the people who make building materials, the people who construct houses, and the drivers who bring materials to construction sites. So yes, there are some really great benefits to automating labor, but the scale of the job losses is large enough that it would likely be a net negative.
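To get a feel for the scale of those MIT numbers, here is a rough back-of-the-envelope sketch in Python. The population base and the number of robots added per 1,000 workers below are assumptions chosen purely for illustration, not figures from the article; only the 0.42% and 0.2-percentage-point per-robot effects come from the quote.

```python
# Rough illustration of the MIT figures. The first two numbers are
# assumptions for illustration; only the per-robot effects are quoted.
WORKING_AGE_POP = 160_000_000  # assumed working-age population
ROBOTS_PER_1000 = 1.25         # assumed robots added per 1,000 workers
EMP_DROP_PER_ROBOT = 0.002     # 0.2 percentage points (from the article)
WAGE_DROP_PER_ROBOT = 0.0042   # 0.42% (from the article)

jobs_lost = WORKING_AGE_POP * ROBOTS_PER_1000 * EMP_DROP_PER_ROBOT
wage_decline = ROBOTS_PER_1000 * WAGE_DROP_PER_ROBOT

print(f"Implied jobs lost:    {jobs_lost:,.0f}")    # 400,000 under these assumptions
print(f"Implied wage decline: {wage_decline:.2%}")  # roughly 0.5%
```

Under these assumed inputs the arithmetic lands near the article's 400,000 figure, which shows how a seemingly tiny per-robot effect compounds into a large national total.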

Additionally, many world leaders believe we should ban autonomous weapons. A main concern is that the further they advance, the more likely we are to lose control of them. In an IEEE article, Mark Gubrud describes a 'red line' at weapon autonomy: if nations resolve not to cross it, an arms race could be blocked. He writes, "I also knew that unless we resolved not to cross that line, we would soon enter an era in which, once the fighting had started, the complexity and speed of automated combat, and the delegation of lethal autonomy as a military necessity, would put the war machines effectively beyond human control." In other words, the speed at which machines make decisions and the complexity of their combat would push them beyond the point where humans could stay in control. We could also become reliant on machines for defense. An NBC article raises a further issue: autonomous weapons may not be able to handle situations that differ from the ones they were built to deal with. It says, "This is what researchers mean when they call such models “brittle”: They tend to crack when faced with a scenario slightly different from the conditions introduced during the model’s construction. This is doubly true when talking about warfare, since battlefield conditions are rarely predictable, and the enemy will always try to find a way to exploit this weakness." This is a serious concern because it is probably impossible to prepare a machine for every situation, and the battlefield is about as unpredictable as it gets; it would not be hard for an enemy to create a scenario the weapon is unprepared for. Brittleness and loss of control are the two big concerns with automating weapons.
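To see what "brittle" means concretely, here is a minimal Python sketch with entirely made-up toy data (nothing here models a real weapons system): a simple classifier that works almost perfectly under its training conditions degrades sharply when those conditions shift slightly.

```python
import random

random.seed(0)

def make_samples(n, friend_center, foe_center, spread):
    """Generate labeled 1-D 'sensor readings' around two class centers."""
    data = []
    for _ in range(n):
        data.append((random.gauss(friend_center, spread), "friend"))
        data.append((random.gauss(foe_center, spread), "foe"))
    return data

# "Training" conditions: friend readings near 0, foe readings near 10.
train = make_samples(200, friend_center=0.0, foe_center=10.0, spread=1.0)
threshold = 5.0  # decision boundary learned from the training conditions

def classify(x):
    return "friend" if x < threshold else "foe"

def accuracy(samples):
    return sum(classify(x) == label for x, label in samples) / len(samples)

# Same conditions as training: the model looks reliable (accuracy near 1.0).
print("In-distribution accuracy: ", accuracy(train))

# A modest shift in conditions (different sensors, weather, enemy tactics)
# pushes many friend readings past the fixed threshold; accuracy drops hard.
shifted = make_samples(200, friend_center=6.0, foe_center=10.0, spread=1.0)
print("Shifted-distribution accuracy:", accuracy(shifted))
```

The fixed threshold is the "conditions introduced during the model's construction"; the shifted data stands in for a battlefield scenario the model never saw, exactly the weakness an adversary would hunt for.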

On the contrary, there are benefits to automating medicine and biology, including editing genes to fix diseases and reducing the need for physical drug testing. A Bill of Health article describes AI being used to predict outcomes and problems without testing a drug on a person or animal: "AI is also being used to reduce the need for physical testing of candidate drug compounds by enabling high-fidelity molecular simulations that can be run entirely on computers (i.e., in silico) without incurring the prohibitive costs of traditional chemistry methods." This could catch problems with new medicines faster, letting new and better drugs reach people sooner, and the drugs might even turn out better than if humans alone had decided how to refine them. It could also spare people and animals from being harmed in experimental drug trials. A National Library of Medicine article describes how CRISPR-Cas9 is being used to repair genes responsible for hereditary diseases, specifically congenital nervous system malformations: "CRISPR-Cas9 is mainly used to repair single gene in CNSM…CRISPR-Cas9 corrected one of the pathogenic genes, finally relieved GM2 ganglioside, and even finally reversed the brain development damage." So not only did it repair the gene, it also reversed some of the developmental damage to the patient's nervous system. If this technique is developed further, and if someone's cancer is genetic, it could hypothetically become a way to cure some cancers. On its own this is a very beneficial development, and its potential uses cover a wide range. These are some genuinely useful benefits of automating medicine and biology.
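The quoted idea, ruling out most candidate compounds by computation so only the most promising ever reach a lab, can be sketched in miniature. The scoring function below is a made-up stand-in for a real molecular simulation, and the compound names are invented; the point is only the shape of the workflow.

```python
import random

random.seed(1)

def simulated_binding_score(compound_id):
    """Stand-in for an expensive molecular simulation (hypothetical)."""
    return random.uniform(0.0, 1.0)

# Score thousands of candidate compounds entirely in software.
candidates = [f"compound-{i:04d}" for i in range(10_000)]
scores = {c: simulated_binding_score(c) for c in candidates}

# Only the top 10 hits go on to physical testing; the other 9,990
# candidates are ruled out without any wet-lab chemistry at all.
top_hits = sorted(scores, key=scores.get, reverse=True)[:10]
print(top_hits)
```

Even in this toy form, the economics are visible: the cost of one simulation is trivial next to one physical trial, so screening in silico first multiplies how many ideas can be explored.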

On the other hand, there are concerns about using CRISPR to genetically engineer animals and humans, because there could be problems if we make changes we don't fully understand. If that happens, it's unlikely we could reverse it: once a changed gene is in the gene pool, you can't take it out. Another concern is 'super soldiers,' and how ethically murky, and how dangerous, they are. A Vox article explains gene drives, in which a changed gene is engineered to be passed on at a higher-than-normal rate, so it spreads through a population and can no longer be removed. It describes problems we might not be able to prevent: "We might wipe out an entire species only to learn later that it was vital in some unforeseen way. We might modify a pest only to find out that it emerges stronger than ever. Once a gene drive starts spreading throughout a species, it's hard to stop." These are big concerns considering the scale at which we could mess up: whole species could be affected, and whole ecosystems could be changed without anyone intending it. An NBC article reports that China is trying to create super soldiers and that the U.S. is worried about staying militarily superior. It says, "There are no ethical boundaries to Beijing's pursuit of power." This is concerning because the U.S. would have to keep up with such super soldiers to stay on top militarily, and since there is no perceivable ethical boundary to what Beijing is willing to do for 'power,' it may be willing to go beyond super soldiers, which raises the question of how far we would go to remain superior. Those are some real concerns about using CRISPR to genetically engineer animals and humans.
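Why is a gene drive "hard to stop"? A simple textbook-style model (a sketch, not taken from the Vox article) shows the mechanism: a drive allele converts carriers so it is inherited by more than the usual 50% of offspring, letting it sweep through a population from a tiny starting frequency.

```python
# Simplified gene-drive model: with conversion efficiency c, a carrier
# passes the drive allele on with probability (1 + c) / 2 instead of 1/2,
# which gives the standard recurrence p' = p + c * p * (1 - p).

def next_generation(p, conversion=0.9):
    """Drive allele frequency after one generation of random mating.

    p          -- current frequency of the drive allele
    conversion -- chance a carrier's normal allele is converted
    """
    return p + conversion * p * (1.0 - p)

p = 0.01  # start with the drive in just 1% of alleles
for generation in range(1, 13):
    p = next_generation(p)
    print(f"generation {generation:2d}: drive frequency {p:.3f}")

# Within roughly a dozen generations the drive approaches 100% of the
# population -- which is why, once released, it is so hard to stop.
```

Running it shows the frequency climbing from 1% past 99% in about twelve generations; unlike a normal gene, there is no point where natural inheritance dilutes it back out.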

Also, despite the good sides of automating surveillance and policing, there are bad sides as well. One negative is the accuracy of the data used to train the predictive algorithms. An NPR article explains that the historic pattern of overpolicing neighborhoods where people of color live produces more recorded crime there, and that data can make an AI conclude the area simply has a higher crime rate. It says, "historical crime data is not an objective record of all crimes committed in a city. It is a record of crimes that the police know about. And given the sort of historic pattern of overpolicing minority communities, overpolicing poor communities, these programs run the risk of essentially calcifying past racial biases into current practices." In other words, the AI ends up predicting which crimes police will notice, based on where they have historically patrolled the most, which makes its predictions less accurate. On the other hand, technology is being developed to read facial expressions and judge whether someone will act aggressively or cooperate. An FBI Law Enforcement Bulletin article describes teams studying which facial expressions signal that someone may act violently, with the goal of building technology that reads them automatically: "U.S. Navy SEAL teams participate in studies to identify what changes in the face these warriors focus on to make friend-or-foe decisions. This data will help scientists “teach” AI components how to quickly discern threatening gestures and expressions." This could keep police from being harmed by aggressive individuals and could prevent shootouts and other dangerous situations; it might even stop police from shooting citizens unnecessarily in tense moments, like when someone is having a mental health crisis. But if the underlying data the AI runs on isn't accurate, the AI won't be effective. So even though the potential benefits are great, the accuracy of the data we feed these systems needs to improve before we start using them.
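The "calcifying" effect NPR describes can be shown with a tiny simulation (all numbers hypothetical): two neighborhoods with identical true crime, where the one that starts out more heavily patrolled generates more recorded crime, which the model then uses to justify the same patrol imbalance year after year.

```python
# A tiny feedback-loop simulation (all numbers hypothetical).
# Neighborhoods A and B have IDENTICAL true crime, but A starts out
# far more heavily patrolled.
TRUE_CRIMES = {"A": 100, "B": 100}  # identical underlying crime
patrols = {"A": 0.8, "B": 0.2}      # initial share of patrol hours

for year in range(1, 6):
    # Police mostly record crime where they patrol:
    # recorded crime = true crime * patrol share.
    recorded = {n: TRUE_CRIMES[n] * patrols[n] for n in TRUE_CRIMES}
    # The "predictive" model allocates next year's patrols in
    # proportion to recorded crime.
    total = sum(recorded.values())
    patrols = {n: recorded[n] / total for n in recorded}
    print(f"year {year}: recorded={recorded}, patrols={patrols}")

# The 80/20 split never corrects itself: biased records justify biased
# patrols, which generate the same biased records. The initial bias is
# "calcified" into the system even though true crime is equal.
```

The loop never converges toward a fair 50/50 split; whatever bias the historical data starts with, the model faithfully preserves, which is exactly the risk the article names.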

There are a lot of positives to automation, and with enough work we could reach a point where easing into some of them is safer, but we should slow down, take this change cautiously, and make sure we don't rush in without fully knowing what we are doing. Once we decide to introduce it, a mistake is almost impossible to take back. Some of the risks are existential, meaning they could threaten our very existence. Humans are already known for disrupting ecosystems, but this could happen on a far larger scale and be irreversible. Super soldiers may advance to the point of being out of reach, and losing control of them could lead to a massive death toll or even extinction. War is already complex, and if we introduce super soldiers, conflicts would likely happen more often, and if something went wrong it might be out of human hands. Change as a whole is dangerous when we don't understand the gravity of our decisions, especially when it's a change we can't come back from.
