As artificial intelligence continues its rise, so does the impact of the technology. A wide variety of industries are affected by its devices and algorithms, making it crucial that we consider the effects of AI across all these sectors of the economy.
According to the vice president of Intel's AI division, artificial intelligence will soon transform the agriculture industry, with AI tools helping farmers earn the most profit from every acre of their farms. The population is growing rapidly while natural resources degrade every day; in the future, we must produce more with fewer available resources. Beyond increasing quantity, AI has also improved the quality of farm products. AI has already brought major benefits to agriculture: from robots that spray precise amounts of herbicide and pesticide on weeds, to harvesting robots that can do the work of roughly 30 farm workers in a single day. Applications for supervising and improving the health of crops and soils continue to be upgraded, machine-learning models are being developed to predict the impact of environmental changes on crop yields, and specialized drones are in the works to help farmers improve crop quality. With the help of the FAIA, AI within the agriculture industry will be used only for good: companies will not be able to profit from practices, such as the use of harmful chemicals, that endanger human health. Undoubtedly, the opportunities for AI in agriculture are massive, and in the future the world can expect extensive annual growth in yields.
The biases from AI in the criminal justice system are astounding. The system currently in place is the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Its main goal is to analyze convicts and determine whether they are likely to reoffend, and if so, whether they will be nonviolent or violent offenders. On the surface, this seems like a valuable system. However, its accuracy is low, especially for convicts of color: COMPAS is only 61% accurate in predicting nonviolent recidivism and just 20% accurate in predicting violent recidivism. With the FAIA, we hope to fix this ineffectiveness in the following ways. First, we will make sure each program can be corrected for bias; some amount of bias will always exist because humans, the creators, carry their own biases. Second, we plan to use human reviewers as a second check to keep bias very limited. The FAIA will not deploy any program until it is as unbiased as possible. This will ensure that the biases found in the criminal justice system are kept to a minimum.
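To make the idea of "bias testing" concrete, here is a minimal sketch of the kind of audit a regulator could run on a risk-assessment tool: comparing false-positive rates (people flagged as high-risk who did not in fact reoffend) across demographic groups. The function names and the toy data are purely illustrative and are not drawn from COMPAS itself.

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["flagged_high_risk"])
    return flagged / len(non_reoffenders)

def audit_by_group(records, group_key="group"):
    """Return the false-positive rate for each demographic group."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: false_positive_rate(rs) for g, rs in groups.items()}

# Toy data: two groups, where group A is flagged far more often.
sample = [
    {"group": "A", "reoffended": False, "flagged_high_risk": True},
    {"group": "A", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
]
rates = audit_by_group(sample)
print(rates)  # group A is flagged at a higher rate than group B
```

A large gap between groups in a test like this is exactly the kind of red flag that would stop a program from being deployed until it is corrected.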
AI has taken charge of the customer service and marketing industries. Natural language processing algorithms have found their way into customer-facing helplines through chatbots, which collect information about a customer's issues and increase efficiency. Customized search results and personalized ads, which curate content for each individual user, are another example. Bias testing and governance are especially important in this field because of the sheer volume of interaction between people and these algorithms. Through bias testing we can ensure that every customer is treated equally and addressed with appropriate tone and diction, while governance will put regulations in place to protect customers' welfare.
Artificial intelligence algorithms are appearing throughout the education and learning world, especially with the rise of remote learning over the past year. They are changing the ways we teach and learn through increased customization and adaptation, and are on the rise for learning, self-correction, and reasoning. This AI can be classified as assisted intelligence, augmented intelligence, autonomous intelligence, and/or automation. While AI does allow for personalized education, the production of smart content, task automation, and increased accessibility, it is important to ensure these systems act conscientiously, since they interact with people of all ages. Biases exist with regard to intelligence levels and different backgrounds, and we do not want algorithms to associate particular races, genders, or ages with particular capabilities; these systems should not make such inaccurate and harmful assumptions. Without proper regulation, the introduction of AI into education could also widen the gap between rich and poor.
During recent years, AI systems have been rapidly introduced into the healthcare sector, taking on astounding tasks that range from routine chores in medical practice to managing patients and medical resources. While these feats are impressive, developers must consider the risks associated with the mass introduction of these products, and proper government regulation of such AI systems can help mitigate them. AI systems learn from the data on which they are trained, and that data can be biased; a lack of accurate, representative data is a problem that continues to persist in the healthcare industry. For instance, if the data available for AI are principally gathered in academic medical centers, the resulting AI systems will know less about, and therefore treat less effectively, patients from populations that do not typically frequent academic medical centers. Those from other backgrounds are unlikely to receive the same quality of care as their urban counterparts, since the quality and abundance of data varies starkly from region to region, often disenfranchising the less fortunate. To counteract these issues, our government agency can use bias testing to ensure they do not affect patients across the country, and AI governance will push for regulation of infrastructural resources within healthcare systems. This means quality oversight of AI products before they enter the market.
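One simple way a regulator could surface the representativeness problem described above is to break a diagnostic model's accuracy down by the site or population its test data came from. The sketch below is purely illustrative (the site names and results are invented, not from any real clinical tool), but it shows the shape of such a check.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

def accuracy_by_site(results):
    """results: list of (site, prediction, true_label) tuples.

    Returns per-site accuracy so gaps between populations are visible.
    """
    by_site = {}
    for site, pred, label in results:
        preds, labels = by_site.setdefault(site, ([], []))
        preds.append(pred)
        labels.append(label)
    return {s: accuracy(p, l) for s, (p, l) in by_site.items()}

# Toy results: the model does well on academic-center patients but
# poorly on community-clinic patients it saw little training data for.
results = [
    ("academic", 1, 1), ("academic", 0, 0), ("academic", 1, 1),
    ("community", 0, 1), ("community", 1, 0), ("community", 1, 1),
]
print(accuracy_by_site(results))
```

A model that scores well overall can still fail badly for one population; a per-site breakdown like this is the kind of evidence an oversight agency would want before approving market entry.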
Artificial intelligence algorithms are central to remote-controlled, self-driving, and otherwise autonomous cars. Self-driving cars have already made their way into mainstream companies such as Tesla and even Uber, and AI-based driving has applications in deliveries, goods transportation, and public transportation thanks to its cost efficiency. Soon, this technology will advance enough for humans to take the position of a supervisor, required only for monitoring. Driving with machine learning has its benefits, but the safety of everyone on the road must come first. We need regulation to ensure that transportation systems are rigorously tested and that new checks and balances are put in place. These vehicles also need to be continuously inspected and updated when bugs are found or new situations arise. Companies must be held responsible for the expensive equipment they put on the road, to ensure that everyone around the vehicle remains safe.