Fundamentals of Machine Learning

Thomas P. Trappenberg

Print publication date: 2019

Print ISBN-13: 9780198828044

Published to Oxford Scholarship Online: January 2020

DOI: 10.1093/oso/9780198828044.001.0001


11 Artificial intelligence, the brain, and our society

Oxford University Press

Abstract and Keywords

The concluding chapter is a brief venture into a more general discussion of machine learning, how it relates to artificial intelligence (AI), and the recent impact of this technology on society. It starts by discussing how machine learning models relate to the brain and human intelligence. The discussion then moves to the relation between machine learning and AI. While the two are now often equated, it is useful to highlight some possible sources of misconceptions. The chapter closes with some brief thoughts on the impact of machine learning technology on our society.

Keywords: artificial intelligence (AI), brain and mind, society, concerns, opportunities, safety and reliability

Machine learning is now basically equated with artificial intelligence (AI), and AI is having an increasing impact on our society. This brief final chapter outlines the relation between machine learning and AI, the brain, and our society. We want to clarify some possible misconceptions, and to highlight some legitimate concerns as well as opportunities and unavoidable shifts in our society.

I hope that this book has given a sense of the amazing achievements in the machine learning community that will facilitate a new chapter in automation. Machine learning has already gained a considerable foothold in several industries, and it seems we have only just scratched the surface. Current applications are based primarily on the ability to learn to detect complex patterns in high-dimensional data. While this can help with many applications, such as computer vision and speech recognition, it also runs the risk of compromising privacy or misinterpreting data in data mining. In addition, automation on a large scale will have considerable influence on our economy and how wealth is created. It is therefore important to evaluate the impact of new technologies on our society.

The popular notion of equating machine learning with AI is clearly based on progress with applications that have been difficult to tackle with traditional computer systems in the past and which seem to mimic human abilities. A typical example is a computer vision system based on deep learning that can outperform humans, or machine assistants based on recurrent networks that respond to natural language. While it is good that these technologies have come to wider attention in our society, the labeling itself carries the risk of misinterpretation of what this technology can or cannot do.

In this context, it is interesting and appropriate to study the relations of machine learning with human abilities. We can even start to compare machine learning methods directly with possible analogies in brain processing. We will start this brief exploration by outlining the relation of machine learning and the brain before discussing the relations of machine learning and AI in more general terms. We will then close with some thoughts on the impact of machine learning on our society.

11.1 Different levels of modeling and the brain

What stands out from the studies of machine learning is a new way of approaching modeling. In the introductory chapter, we outlined that the meaning of modeling is to describe a system in a simplified way which allows us to make predictions and thus encapsulates in some sense the essence of our knowledge. In this sense, machine learning goes beyond simply providing convenient algorithms to solve some automation problems. Learning to represent or extract meaningful entities might not be equivalent to meaning itself, but it is certainly related.

To illuminate further what we mean here, let us discuss different ways of describing a system by looking at an example of modeling a natural phenomenon, that of a leaf falling from a tree. In order to understand and describe such a situation we might think back to our physics lessons and approach the situation with a description embedded in physical laws. If we begin with an apple for convenience, we can start with gravity and understand that the apple falls straight down. Treating the apple as a point mass, we can even quantitatively predict the timing of the trajectory with high precision. Going back to the leaf, it gets a bit more complicated. We now need to take the airflow into account, which turns out to be a much more complex endeavor, requiring the study of flow dynamics. Analytically solving this problem is extremely complicated and perhaps impossible in some practical applications. However, going back to the first principles of physics is still an excellent way to go about describing the situation of a falling leaf when using approximations to make numerical predictions with a computer.

Let us now bring a human perspective into the picture. A human is observing the scene and wants to catch the falling leaf. What must the human do to achieve this? In essence, we need to decode sensory information, mostly from the eyes in this situation, to get information about the dimensions of the object and combine this with prior knowledge about typical falling patterns of leaves. Also, it might be important to take other information into account, such as the amount of wind sensed by tactile sensors in the face. In other words, we have a situation, as discussed in this book, where we want to make predictions from high-dimensional sensory data based on previous observations from which we learn. Thus, the point here is that there is a role for different types of modeling: modeling from physical principles, stochastic modeling that considers uncertainty as in Bayesian networks, or building predictive models with deep neural networks. Physical modeling will provide us with the best accuracy of predictions if we get everything right, although the solution is highly specific to this particular situation. Humans are able to catch a leaf even though we do not use physical modeling every time we do so. There is some evidence that humans are able to reason near-optimally in the Bayesian sense by taking appropriate factors into account, such as priors over the probabilities of common outcomes. However, the fact that Bayesian models have been successful in describing some human behavior in cognitive science does not necessarily mean that Bayesian mathematics is implemented in our brains verbatim. Given that neural networks are general function approximators and have the ability to approximate a lot of theoretical models, it is interesting to ask how such Bayesian functions can be approximated and implemented in neural tissues.
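The Bayesian combination of a prior with a noisy observation can be made concrete with a toy calculation. The sketch below fuses a prior belief about where a leaf tends to land with a noisy visual estimate; for a Gaussian prior and Gaussian observation noise the posterior has a closed form. All the numbers are illustrative assumptions, not measured values.

```python
# Toy Bayesian cue combination for predicting where a leaf will land.
# All numbers here are illustrative assumptions, not measured values.

# Prior from experience: leaves tend to land near the trunk (position 0 m),
# with considerable spread (variance in m^2).
mu_prior, var_prior = 0.0, 4.0

# A noisy visual estimate of the current trajectory suggests a landing
# point at 2.0 m, with observation-noise variance 1.0.
x_obs, var_obs = 2.0, 1.0

# For a Gaussian prior and likelihood, the posterior is Gaussian; its mean
# is a precision-weighted average of the prior mean and the observation.
var_post = 1.0 / (1.0 / var_prior + 1.0 / var_obs)
mu_post = var_post * (mu_prior / var_prior + x_obs / var_obs)

print(mu_post, var_post)  # posterior mean lies between prior and observation
```

Because the observation is more precise than the prior here, the posterior mean sits closer to the observation; a stronger prior would pull it back toward past experience.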

What does seem clear is that the brain is set up perfectly for the type of modeling that is captured by deep learning. The brain is a deep neural network in the sense that it is a structure of processing elements with several stages of processing that form a layered hierarchical structure. In addition, the brain has the ability to change network parameters such as synaptic efficacies. Also, it has been shown recently that representational learning seems to capture some of the representational organization found in the brain. A good example was mentioned in Chapter 2, that of the existence of receptive fields that can be approximated by Gabor functions in the early visual (p.235) system. It has by now been shown many times that such filters emerge in early layers of neural networks when trained on natural images, at least when taking some additional constrains into account such as encouraging a sparse representation. There is now even more evidence that deep CNNs can capture a lot of the statistics in functional brain imaging data deeper in the visual system. Even without direct experimental evidence, it is clear that there is the need, in human information processing, to transform high-dimensional sensory spaces into higher-level descriptors and possibly a semantic latent space that can be used for advanced reasoning. The ability to implement such capabilities with networks of simple elements and the ability of the brain to learn such parameters alone offers sound evidence that studying machine learning in neuroscience is a good idea.
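The Gabor functions mentioned above have a simple closed form: an oriented sinusoidal carrier under a Gaussian envelope. The sketch below evaluates one such filter on a grid, as one might do to visualize it or to compare it with filters learned by the first layer of a network. The parameter values (orientation, scale, wavelength) are illustrative assumptions, not fitted to any data.

```python
import numpy as np

# A Gabor function, the standard model of simple-cell receptive fields in
# early visual cortex: a Gaussian envelope times an oriented cosine grating.
def gabor(x, y, theta=0.0, sigma=1.0, wavelength=2.0, phase=0.0, gamma=0.5):
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates into
    yr = -x * np.sin(theta) + y * np.cos(theta)   # the filter's orientation
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# Evaluate the receptive field on a small grid.
xs, ys = np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
rf = gabor(xs, ys, theta=np.pi / 4)
```

Plotting `rf` shows the familiar oriented on/off bands; varying `theta` and `wavelength` spans the family of filters that tend to emerge in sparsely trained first layers.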

In addition, neural networks in the brain are not only feed-forward. It is well established that there are many recurrencies in the brain. This starts in the peripheral nervous system, for example in the retina of our eyes. The retina not only comprises sensory cells such as rods and cones but also has several neural processing layers that include collateral connections. Most sensory signals then pass through a midbrain structure called the thalamus, which includes some regions with inhibitory collateral connections. It is also known that information goes back and forth between the thalamus and the neocortex. There are many other examples of system-wide recurrencies in the brain. We outlined in Chapter 9 how recurrent neural networks can be used for advanced temporal processing. Thus, there is plenty of evidence that the brain exhibits key features of deep learning architectures.


Fig. 11.1 Outline of the human brain showing some of its structures. The neocortex, which is often identified as the brain, wraps around many nuclei in the center of the brain and the upper brain stem.

However, it is also useful to recognize that there are many elements and structures in the brain that go beyond the descriptions of neural networks as machine learning algorithms. To start with, the brain has advanced structures that seem to go beyond the architectures typically discussed in machine learning. Let us just point to some of the structures in the brain as illustrated in Fig. 11.1. A prominent part of the brain is the neocortex, the wrinkly layer of tissue that covers most of the brain. Even in this fairly homogeneous-looking tissue there is a lot of structure, such as anatomical differences in the ratio of different neuron types. The neocortex itself is made up of layers and sublayers, and the thickness of these layers varies across different parts of the neocortex.

There are also a number of other distinct parts of the brain. For example, the cerebellum is a very different structure compared to the neocortex and actually contains the largest number of neurons in the brain. The midbrain is surrounded by the neocortex. This brain area has many distinguishable clusters called nuclei, a collection of which forms the basal ganglia. We mentioned these structures in conjunction with evidence of temporal difference learning in Chapter 10. Clearly, there seems to be some functional organization in the brain that has not thus far been paralleled by deep networks.

Moreover, it is unclear if the electrical activation of neurons is the only information-processing machinery. Many other aspects of potential information processing in the brain have been identified. For example, neurons are not the only cells in the brain; there are others, such as glia, that can form networks in which information can be processed. Also, there are extensive chemical networks within neurons. Some of these networks can be identified via their role in forming memories, but there are other potential consequences. Research on the role of information transmission in dendrites is evolving beyond the traditional view of them acting only as passive conductors. There are indications that more intricate subthreshold computations, backpropagating action potentials, and calcium waves could have important roles in human information processing. The point here is simply that there is a large source of complexity in the brain that we do not, at this point, see replicated in machine learning systems.

In summary, deep learning helps us to understand the potential of information processing in neuronal networks. However, we cannot claim that we understand all the building blocks of minds; “real intelligence” cannot, at this point, simply be reduced to deep learning.

11.2 Machine learning and artificial intelligence

AI is now discussed frequently in the media. Many of these reports seem to focus largely on concerns about and potential dangers of this technology, or of imagined technologies. It is important for any scientific discipline to discuss the relationship new technologies and sciences may have with our society. New technologies have always forced us to reflect on this, and it is not limited to AI. Inventions such as explosives, genetic interventions, and computers have profound consequences for our society and environment. Discussion of the impact of technology should cover as broad a social span as possible.

It might be timely now to point out that this new discussion of AI seems mainly to be fueled by the advancements in machine learning. It is this aspect of AI that I want to discuss here. While AI is certainly the more recognized term in the broader public, it might lead to some overestimation of what we have achieved with machine learning. There are now many concerns that AI might lead to the development of machines that evolve to harm humans deliberately. I would like to argue that there is already a substantial danger in machines and technologies used by humans now, but that the concerns are largely overstated when it comes to machine learning.


Fig. 11.2 The thinker robot. Human shape and pose can be deceiving.

AI is a diverse field of study. Most of it is about strategies and technologies to enable applications that require advanced control. AI is sometimes divided into two principal approaches, that of symbolic AI and that of sub-symbolic AI. Symbolic approaches are concerned with reasoning systems based on pre-defined knowledge representations that are encapsulated in symbols, hence symbolic AI. Such symbolic systems can use some form of an explicit logic method for inference to derive conclusions. This type of AI has dominated much of the AI field at least since the 1970s, and is now often called “Good Old-Fashioned AI,” or GOFAI for short.

In contrast to GOFAI, machine learning is a main area of sub-symbolic AI and underlies most of the recent advancements that have brought AI to public attention. More specifically, machine learning focuses on methods that use data to build models that can then classify or forecast data that have not been seen before. This is a form of anticipation. These forecasts are based on the generalizations which the models learned from the training data and some form of regularization, including the assumptions built into the structure of the model. We have argued that building models in this way has advanced considerably, to the point where it is thought that we can even learn meaning or semantic knowledge from data, which would help to build the symbolic knowledge that underlies symbolic AI. Hence, there is the possibility that the traditionally distinct areas of AI will become linked to each other through machine learning, although this part of AI is still in its infancy.
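The data-to-model pipeline just described can be illustrated with a minimal sketch: fit a model to noisy training data under a regularization assumption, then forecast for inputs the model has never seen. The setup below (a ridge-regularized polynomial fit to a hypothetical linear relation, with hand-picked regularization strength) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: noisy observations of an underlying relation.
X = rng.uniform(-1, 1, size=50)
y = 2.0 * X + 0.1 * rng.normal(size=50)

# A flexible polynomial model could overfit the noise; the ridge penalty
# (a regularization assumption) keeps the learned weights small.
def features(x, degree=9):
    return np.vander(x, degree + 1, increasing=True)

Phi = features(X)
lam = 1e-2  # regularization strength, chosen by hand here
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

# Forecast for inputs that were never part of the training set.
x_new = np.array([0.5, -0.25])
y_pred = features(x_new) @ w
```

The forecasts on unseen inputs track the underlying relation because of generalization from the training data plus the regularization built into the fit, which is exactly the anticipation described above.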

An important implication of intelligence is that there is some form of reasoning involved. There might be some ways of interrogating processed data that can be viewed as a form of reasoning, but for the most part, the machinery used in today’s machine learning lacks reasoning capabilities. There is often an attempt to make sense of the learned representation of a specific instance of machine learning or to extract some rules that summarize the learned functions in human-readable forms. Such descriptions might sound like human reasoning, though one needs to be careful to distinguish human post-hoc analysis from the ability of machines to use reasoning capabilities to form new solutions. Reasoning has until now been practically absent from machine learning, at least in the large areas of applications that are commonly discussed in machine learning. There are now examples of machines that produce visual art and music, an area that is now termed creative machine learning. Recurrent networks and stochastic sampling from latent spaces are behind much of these achievements. Exploring the consequences of machine learning in such creative ways is very inspiring and could lead to new developments. However, such creations should not be mistaken for representing logical reasoning systems. At least, this is not part of mainstream machine learning.

An important discussion when it comes to defining advanced human abilities is the question of consciousness. Consciousness is certainly an important factor that underlies many of our deepest questions. It has been discussed at length in some philosophical and scientific circles. From these discussions, it is clear that a difficult question is the “hard problem,” that of understanding how consciousness feels to others. However, on the more mechanistic side it seems that some form of self-awareness is an integral part, or even a prerequisite, of consciousness. It has been argued that some form of recurrency or “re-entry” can facilitate self-awareness, so that we might already have some machinery for this within recurrent neural networks. However, at this point it seems that most deep learners are reflective systems that basically learn to represent density functions of world states but have little machinery for reasoning based on their own reflection.

In summary, while a discussion of AI and the potential of learning machines is important, there seems to be some misconception of the abilities of machine learning systems when it comes to using reasoning for their own advancement. We do not know at this point how such systems could work, and while this alone can raise some concern, the more outlandish depictions of the threat posed to society by AI are clearly unfounded.

11.3 The impact of machine learning technology on society

While these thoughts delve deep into philosophical questions about the machineries of the mind, it might be good to end this chapter with a brief discussion about the more direct impact that machine learning currently has on our society and what it could have in the near future.

There can be no doubt that machine learning already has a strong influence on our society, reflected by a wave of new start-ups. Speech recognition and natural language processing have advanced to a level where we now have electronic personal assistants. While such electronic personal assistants might merely be fancy toys or minor conveniences for some people, they can dramatically improve the quality of life for others. Of course, there are many potential problems included in this technology, such as the potential for providing unintentional access to personal information by sending speech through the Internet and processing it remotely. Thus, while the machine learning components are important enablers of such technology, it is worth recognizing that some of the worries attributed to AI are instead related to web technology. Indeed, machine learning might be part of the solution here, as the speech recognition and natural language processing could be run locally instead of using a backend that requires that information is sent into the cloud.

Even the discussion regarding which part of the technology is to blame seems to be missing the point. What is needed is a more frank discussion and evaluation of what role technologies play within our society. To this end, we need to recognize that technology in general already exerts a strong influence on our society. Modern humans would barely be able to survive without the aid of technology for staying warm and getting food. Furthermore, some technologies, such as social media, now have a much deeper impact on our society and our personal interactions. There is an increasing realization that our innate sensitivities in communication can be negatively affected by communication through social media. Also, while technology is often developed to help with mundane tasks, this automation has not led to a decrease in working hours as was originally thought. Instead, it has largely shifted the balance in the workforce; safer working environments fall on the positive side of this shift, falling employment in many parts of the workforce on the negative. New technologies can have drastic and immediate consequences. There is an increase in fatal car accidents caused by texting while driving, and even the risk of being exposed to new man-made pollutants can in part be attributed to new technologies. Thus, there is a real need to consider the impacts of technologies on our society.

With regard to machine learning specifically, there are new capabilities that can be used to solve problems as well as potential applications that create new concerns. A real problem is that machine learning methods can be used to aggregate information in a way that can be compromising for individuals. Traditionally, data are anonymized by simply removing personal identifiers such as names or social insurance numbers. However, machine learning has the capability to link records that, in isolation, are not informative enough to be traced to individuals. For example, data collected from what seem to be simple daily tasks, such as shopping, can now be used to target individuals via advertising. While such individualization of services can be helpful for some, it generally brings the danger of reinforcing prejudice in the categorization of common targets.
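The linkage risk can be illustrated with a small sketch, before any learning is even involved. The records and field names below are entirely invented; the point is only that joining two tables on quasi-identifiers (here zip code, birthdate, and sex) can re-attach a name to a record that was “anonymized” by removing it.

```python
# Hypothetical "anonymized" records (names removed) and a hypothetical
# public directory that still carries names. All data here are invented.
medical = [
    {"zip": "02138", "birthdate": "1945-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "birthdate": "1962-03-12", "sex": "M", "diagnosis": "asthma"},
]
directory = [
    {"name": "A. Smith", "zip": "02138", "birthdate": "1945-07-31", "sex": "F"},
]

def link(records, directory, keys=("zip", "birthdate", "sex")):
    """Join anonymized records to the directory on quasi-identifier fields."""
    index = {tuple(d[k] for k in keys): d["name"] for d in directory}
    matches = []
    for r in records:
        key = tuple(r[k] for k in keys)
        if key in index:  # the quasi-identifiers alone re-identify the person
            matches.append({"name": index[key], "diagnosis": r["diagnosis"]})
    return matches

print(link(medical, directory))
```

A learned model only amplifies this: it can match records on fuzzy or partial quasi-identifiers where an exact join would fail, which is why removing names alone is a weak form of anonymization.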

Another area of concern is that machine learning methods often have problems with reproducibility and with a clear understanding of their generalization abilities in previously unexplored areas of their state space. These difficulties are a direct consequence of building high-dimensional non-linear models. It is now well recognized that reproducing the results of machine learners can be difficult. While such systems are robust in many ways, it seems difficult to develop a full understanding of the impact of all hyperparameters in a model and all the consequences of a specific training set. Furthermore, it is difficult at this time to understand what the networks have learned and how to evaluate their performance, say, in different domains that have not been covered by the training data. Understanding the robustness of machine learners is now an emerging research topic in machine learning.

While there are certainly many areas of concern, we should not forget that machine learning can help to solve many problems. Progress in autonomous systems can help the operation of robots in danger zones like deep oceans or disaster zones. Or, while surveillance cameras can be useful in reducing crime, there is a legitimate concern about privacy when human operators watch the footage. Machine learning models can be trained to recognize behaviours with safety implications, often even more reliably than human operators. Such systems remove the need for human operators in these safety systems, and this alone may be enough to alleviate some privacy concerns.

Another interesting discussion is with regard to self-driving cars. A lot of the progress in autonomous systems and robotics has been made due to advancements in machine learning, with increased abilities in computer vision and localization techniques. Cars can be built with many cameras covering all directions and additional sensors that outperform human senses. Machine learning gives us the capability to integrate such sensor information into advanced recognition systems that will ultimately increase the safety on our streets. Of course, the evaluation of the robustness of these systems, and how their operation fits in with the current legal system regarding culpability if things go wrong, are important factors that will need more deliberation. Thus, the problem does not lie in the technology per se but rather in how we as a society decide to use it. For example, we could build redundant systems that surpass human abilities in pedestrian recognition, though the added cost of building them might prevent their implementation. The popularity of AI and machine learning in recent years has now opened this discussion, an important one to have.

While we have focused here on safety concerns, it is important to consider the impact that new technology will bring to our economy and therefore to our society as a whole. Automation has been an essential part of the development of our economy, and hence of society, at least since the Industrial Revolution, although we could argue that technologies such as farming equipment had significant impacts much further back in the past. Automation has contributed to globalization through economies of scale; large factories with cheap labor could produce goods that would otherwise be cost-prohibitive. Unfortunately, such globalization and scales of production also have a huge impact on our environment. Building sustainable and resilient communities is now increasingly viewed as important for the future of humankind. This is where the automation and individualization capabilities enabled through machine learning and other technological advancements bring new opportunities for a more refined economy that caters to local needs with local production.

For example, 3-dimensional printers are able to produce locally some parts that were once produced only by specialized machines and often shipped halfway around the world. This availability of specific, local solutions to local economies, such as sustainable farming in different climates, is to be greatly welcomed. Agriculture has often led the way in the adoption of new technology. While the trend has been to use larger machines and heavy applications of chemicals to foster high yields from small areas, precision agriculture now seeks to optimize operations. For example, recognizing weeds and enabling spot spraying reduces the amount of herbicide used. Ideally, with physical weeding, we could eliminate the need for chemical solutions. Moreover, such forms of automation allow the farming of areas that have been too costly to farm in the past, enabling low-density farming or mixed-crop operations.


Fig. 11.3 The author with a prototype of a weeding robot developed by Nexus Robotics.

Machine learning is advancing rapidly. There are areas that use learned models to extrapolate to domains that have not been covered by the training examples. Such uses of machine learning can be viewed as providing “artificial creativity.” The results of such applications of machine learning can inspire, as demonstrated by some interesting applications of machine learning to art.

Preventing the misguided use of machine learning and AI, as with any other technology, is a strong responsibility placed on our society. It is clear and widely accepted that we must devote more time as a society to reflect on the path we wish to take. In order to do so, we need education in this area, and possibly legislation and technology that can prevent some misuse. However, I believe that the greatest challenge comes from the changes in our economies and society brought forward by increased automation. The labor that dominated wealth creation in the pre-industrial and industrial ages will increasingly be replaced by automation, in which human labor is substituted by energy. The challenge we face as a society is how to distribute the created wealth within society. Such shifts in our society are unavoidable, and it is up to us and our chosen societal structure as to how to use technology for the common good. These are issues that go far beyond machine learning itself.