Published by De Gruyter, November 6, 2019

Artificial Intelligence in Basic and Clinical Neuroscience: Opportunities and Ethical Challenges

  • Philipp Kellmeyer


From the journal Neuroforum

Abstract

The analysis of large amounts of personal data with artificial neural networks for deep learning is the driving technology behind new artificial intelligence (AI) systems for all areas of science and technology. These AI methods have evolved from applications in computer vision, the automated analysis of images, and now include frameworks and methods for analyzing multimodal datasets that combine data from many different sources, including biomedical devices, smartphones and common user behavior in cyberspace.

For neuroscience, these widening streams of personal data and machine learning methods provide many opportunities for basic data-driven research as well as for developing new tools for diagnostic, predictive and therapeutic applications for disorders of the nervous system. The increasing automation and autonomy of AI systems, however, also creates substantial ethical challenges for basic research and medical applications. Here, scientific and medical opportunities as well as ethical challenges are summarized and discussed.

Zusammenfassung

The analysis of large amounts of data (big data) with artificial neural networks for deep learning is the driving technology behind new artificial intelligence (AI) systems for all areas of science and technology. These AI methods evolved from applications in automated image recognition (computer vision) and today include methods for analyzing multimodal datasets that combine data from many different sources, including biomedical devices, smartphones and general user behavior in apps and on the web. For neuroscience, these growing streams of personal data and deep learning offer many opportunities for basic research as well as for the development of new diagnostic, predictive and therapeutic applications for diseases of the brain. The increasing automation and autonomy of AI systems, however, also creates substantial ethical, legal and societal challenges. This article summarizes and discusses the neuroscientific and medical opportunities as well as the ethical challenges.

Introduction

Artificial intelligence (AI) seems to be everywhere now: in navigational tools, digital assistants, and self-driving vehicles; in social robots and autonomous weapons; in analytic and predictive tools in science; and in decision-support systems in medicine, among many other domains and applications.

This development is in large part the result of a particular technological convergence in recent years: the concomitant rise of big data, advanced methods of machine learning (e. g. deep learning) and increasing computing power and efficiency. This perfect technological storm drives a large-scale techno-social transformation across all sectors of society: work, health, research and technology, and the social domain. This transformation is often indiscriminately referred to as digitalization.

But what is AI exactly and why does it capture the imagination so vividly and often disquietingly? What is the current and future impact of AI for neuroscience and the clinical fields occupied with treating brain diseases and mental health disorders? What are the ethical, legal, social and political tensions and challenges that emerge from this techno-social constellation?

Here, I will first provide short and succinct background information on the technological aspects of the current wave of AI methods and contextualize these developments in terms of their putative current and future applications in neuroscience. This will provide the basis to then discuss important ethical, legal and social challenges. The focus in that regard will be on the question of how societies can benefit from the many promising applications of AI in neuroscience and neuromedicine while ensuring the responsible design, development and use of this transformative technology.

Background: Artificial intelligence, big data, machine learning and neurotechnology

According to the latest analysis of the innovation dynamics of emerging technologies from 2018—the Gartner®[1] Hype Cycle for Emerging Technologies—artificial neural networks (ANNs) for deep learning are currently located at the very “peak of inflated expectations”. This represents a snapshot of the cacophonous media buzz and hype surrounding the putatively transformative power of AI for all sectors of society. As a basis for our discussion here, we need to recognize that the main driving force of what is usually referred to as AI today is the convergence of several technological innovations and components[2]:

  • Ubiquitous data-collecting technology: in the environment (e. g. public closed-circuit television), in machines (e. g. cars), in personal devices (e. g. smartphones for collecting personal data on user behavior, movement, geolocation and many other parameters), as well as in the traditional arenas of biomedicine, such as medical centers and research institutions.

  • The, mostly cloud-based, server infrastructure to store and process large amounts of these personal data (big data);

  • High-performance analyses on these data with graphics processing units (GPUs), particularly with

  • Machine learning (ML) methods, particularly artificial neural networks for deep learning,

  • Dynamic user interfaces to facilitate human-AI interaction

These infrastructural and technical components provide the basis for many applications of AI in research, technology development and clinical medicine. One illustrative and highly dynamic translational research area is the field of neurotechnology. Figure 1 illustrates how many of the components mentioned above can be fully integrated to build an AI-based brain-computer interface that could provide a paralyzed individual with the means to operate a computer-based communication system. Neurotechnology, however, is not confined to the assistive treatment of relatively rare neurological disorders such as severe paralysis / locked-in syndrome; it has recently also entered the consumer market with various devices for neurofeedback-based relaxation or well-being applications (Ienca et al., 2018; Kellmeyer, 2018).
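
To make the integration shown in Figure 1 more tangible, the following minimal sketch in Python traces the same loop with entirely simulated signals: acquire an iEEG window, decode it, update a communication interface. The function names and the linear "decoder" are hypothetical stand-ins for real device drivers and a trained deep network, not an existing API.

import numpy as np

N_CHANNELS, WINDOW_SAMPLES = 64, 250       # e. g. 1 s of iEEG at 250 Hz
COMMANDS = ["select", "next", "back", "rest"]

rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(len(COMMANDS), N_CHANNELS * WINDOW_SAMPLES))

def acquire_window():
    """Placeholder for streaming one window of multichannel iEEG (steps 1-2)."""
    return rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES))

def decode(window):
    """Placeholder for the GPU-based deep-learning decoder (steps 3-4):
    here just a linear readout followed by a softmax."""
    scores = decoder_weights @ window.ravel()
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return COMMANDS[int(np.argmax(probs))], float(probs.max())

def update_interface(command, confidence):
    """Placeholder for the dynamic communication interface (step 5)."""
    print(f"decoded '{command}' (confidence {confidence:.2f})")

for _ in range(5):                          # in practice: an open-ended loop
    command, confidence = decode(acquire_window())
    update_interface(command, confidence)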

Current and future applications of AI for basic and clinical neuroscience

In neuroscience, as in most other research areas, AI systems based on artificial neural networks have a wide spectrum of applications. As discussed above, machine learning with ANNs has proven particularly successful in computer vision tasks. Therefore, a primary domain of application in neuroscience is the processing and classification of large amounts of images. Examples are the classification of histopathological images (Litjens et al., 2016), the segmentation of tumors in brain MRI images (Pereira et al., 2016) and many other processing applications in neuroimaging (Akkus et al., 2017; Milletari et al., 2017; Kleesiek et al., 2016). In addition to such computer vision tasks, however, AI methods based on ANNs are also successfully used in the analysis of bioelectric and hemodynamic brain signals, particularly electroencephalography (EEG) (Schirrmeister et al., 2017a; Schirrmeister et al., 2017b). In that research area, EEG signal analysis with deep learning has been used, inter alia, to operate an autonomous robot via a brain-computer interface (Burget et al., 2017) and to classify EEG recordings as normal or pathological (Schirrmeister et al., 2018). Another emerging machine learning method, generative adversarial networks (GANs), has recently been applied in neuroscience to generate naturalistic EEG signals for data augmentation purposes (Hartmann et al., 2018), among other applications (Wang et al., 2019).
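
For illustration only, a compact convolutional classifier for single EEG windows, in the spirit of (but not identical to) the deep ConvNets used in the cited studies, could be sketched in PyTorch as follows; the channel count, filter sizes and synthetic data are arbitrary assumptions.

import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Illustrative convolutional classifier for EEG windows
    shaped (batch, 1, n_channels, n_samples)."""
    def __init__(self, n_channels=21, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25)),          # temporal filters
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1)),  # spatial filters across electrodes
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
        )
        self.classifier = nn.LazyLinear(n_classes)           # infers the flattened size

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# One training step on synthetic data, for illustration only.
model = TinyEEGNet()
x = torch.randn(8, 1, 21, 500)               # 8 trials, 21 channels, 500 samples
y = torch.randint(0, 2, (8,))                # fake normal/pathological labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()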

Figure 1: Example of an AI-based brain-computer interface that integrates ❶ intracranial electroencephalography (iEEG) to sense bioelectric brain activity and ❷ transmit large amounts of brain data to a ❸ computer-based processing unit with a ❹ high-end GPU that uses deep learning to analyze the brain data, which in turn is used to operate a ❺ dynamic user interface, e. g. for communication.

Apart from these applications in data analytics in neuroscience, a comprehensive and high-impact review (Hassabis et al., 2017) of how neuroscientific knowledge and methods can inspire AI methods, and vice versa (Marblestone et al., 2016), has shown that ANNs for deep learning have contributed substantially to the understanding of complex cognitive functions, such as attention, memory and learning, at the level of regional and inter-regional brain networks.

In clinical neuroscience, understood here as the overlapping domains of clinical research and clinical provision in neurology and psychiatry, these AI methods also provide fertile ground for new applications in diagnosing, predicting and treating brain diseases and mental health disorders.

To highlight a few developments here: (a) in the area of diagnostics, AI-based image processing methods could obviously be used for various groundbreaking applications, e. g. the differentiation between healthy and pathological brain images, the segmentation of tumor tissue from brain MRI images, or the diagnosis and sub-classification of neurodegenerative movement disorders from tracer-based imaging. (b) In the area of prediction, the same methods could be used to predict the onset of dementia, the likelihood / risk of epileptic seizures from implanted cortical electrodes, or the fluctuations of disabling movement symptoms in Parkinson’s disease from deep brain electrodes, among many other applications. (c) In the area of therapy, deep learning with ANNs could be used to develop new targeted drugs (Popova et al., 2018; Gawehn et al., 2016), e. g. based on antibodies and fusion proteins (“biologicals”) for treating neuroimmunological diseases such as multiple sclerosis, or for closed-loop control of impending epileptic seizures via a real-time cortical monitoring and electrostimulation system (Berényi et al., 2012).
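
As a hedged sketch of the closed-loop idea mentioned under (c), the toy loop below triggers a (merely simulated) counter-stimulation whenever an assumed seizure-probability model crosses a threshold; the probability model, the channel count and the 0.8 threshold are illustrative assumptions, not parameters from the cited work.

import numpy as np

rng = np.random.default_rng(1)
SEIZURE_THRESHOLD = 0.8          # assumed decision threshold

def seizure_probability(window):
    """Stand-in for a trained deep-learning model estimating the probability
    of an impending seizure; the window is ignored in this simulation."""
    return float(rng.beta(1, 6))  # simulated: mostly low probabilities

def stimulate():
    """Stand-in for delivering responsive electrical stimulation."""
    print("stimulation pulse delivered")

for t in range(20):              # in practice a continuous monitoring loop
    window = rng.normal(size=(32, 256))        # simulated 32-channel snippet
    if seizure_probability(window) >= SEIZURE_THRESHOLD:
        stimulate()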

The breadth of the actual and potential applications of AI methods can only be sketched here; the reader’s imagination is trusted, however, to visualize the full extent and importance of this development for neuroscience and medicine in general, which has also been treated comprehensively by other authors, see e. g. (Topol, 2019). Such a profound and cross-cutting socio-technological change, one might say paradigm shift, of course creates substantial ethical, legal and social challenges, some of which shall be highlighted here.

Ethical challenges of human-AI interaction in basic and clinical neuroscience

In this section, I highlight some of the most widely discussed current ethical concerns and tensions in neuroethics, neurolaw and related disciplines that engage with these issues. As a disclaimer, given the limited scope here, I aim neither to provide a complete overview nor to offer anything other than my own subjective view on these issues; for a selection of further recent contributions and views please also see (Ienca et al., 2018; Amadio et al., 2018; Illes, 2017; Yuste et al., 2017; Ienca et al., 2017; Mittelstadt et al., 2016).

Shared agency and autonomy in human-AI interaction

In the context of very close human-AI interaction, for example in a closed-loop brain-computer interface in an epilepsy patient, the degree to which the underlying AI system is granted decision-making capacity and—conversely—how much the human subject is kept in the loop in these interactions may lead to new hybrid forms of human-machine or human-AI actions.

Imagine, for example, Maria, a 45-year-old woman with severe motor paralysis of the upper and lower limbs, who has been implanted with a closed-loop electrode system that allows her to use her brain activity to operate a service robot that can reach, grasp and bring her objects.

Now, any particular action sequence by Maria, say for example fetching a cup and drinking tea, is only realizable by decoding her brain activity and having the robot perform the required tasks. Suppose further that the robot itself also has some degree of autonomy in terms of how it realizes this goal, for example it may have the capacity to freely roam the room and grasp the cup any way that is optimal for realizing the set goal. In such a scenario, it would be reasonable to consider the human-robot interaction necessary for realizing Maria’s goals as requiring a form of shared agency (and autonomy) between Maria and the service robot.

This may seem perfectly fine for all instances in which the interaction works as intended by Maria and her goals are fully realized without significant deviations by the robot. But what happens in cases of unintended yet substantial failures—what if, for example, the robot spills the hot tea and injures a third person or Maria herself? Such interactions gone awry lead to questions of responsibility and accountability in human-AI interaction that are difficult to navigate both ethically and legally.

Accountability, responsibility and the question of trust

Without having the space to provide a detailed and philosophically grounded conceptual analysis here, I urge the reader to consider the difference—ethically and legally—between the concepts of accountability and responsibility. Both denote the ascription to an individual (or the active claim by an individual) of some kind of causal agency in a particular action or sequence of actions; for example: “Margret was responsible for writing the letter to the president.” or “The policeman is accountable for explaining his use of his service weapon.”

In cases of very close, shared or even hybrid actions that are performed in concert by a human and an AI system, however, one might encounter a gap in our ability to unequivocally ascribe responsibility and/or accountability to particular actions. This “accountability gap” (Kellmeyer et al., 2016) may arise in many situations in which decision-making capacity is relegated to an AI system—e. g. a deep-learning-based brain implant or a self-driving car—whose internal learning dynamics and decision-making processes we cannot sufficiently infer: the so-called “black box” aspect of AI (Castelvecchi, 2016). In the ethical and legal domain, we do not yet have effective and resilient norms to ascribe (let alone adjudicate) responsibility in cases of system failures for such black box systems.

Therefore, the topic of → interpretability of machine learning algorithms, particularly ANNs for deep learning, is not only of great interest to computer scientists and engineers, but also an indispensable prerequisite for providing a reasonable ethical understanding and precise legal instruments to adjudicate future cases of liability arising from human-AI interaction.
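
One simple, model-agnostic entry point to interpretability, shown here purely as an illustration on synthetic data, is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops; features whose permutation hurts performance most are the ones the model relies on.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# Only features 0 and 2 actually drive the (synthetic) label.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Permutation importance: accuracy drop when one feature is shuffled.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: importance {baseline - model.score(X_perm, y):.3f}")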

Intrusive AI and the protection of brain data, mental privacy and personal identity

Today, our methods for observing brain activity, mainly EEG, functional MRI and related methods, have inherent limits in the ways in which they can measure the temporal, spatial and frequency-related characteristics of brain signals. This limits the amount and quality of information that we can extract from these signals with our current analytical methods, yet we already see how emerging machine learning methods, specifically deep learning, improve our information extraction capabilities substantially (Akkus et al., 2017; Milletari et al., 2017; Schirrmeister et al., 2017a).

If this progress in data analysis is complemented by substantial improvements in our measurement methods, for example with intracortical microelectrode grids that measure EEG directly from the cortical surface or as yet unproven methods such as “neural dust” (a system of intracortical nanoparticles and ultrasound) (Neely et al., 2018), we can expect substantial further progress in the types and amounts of information that can be extracted from neurotechnological measurements.

The practical limits on the amount and specificity of information that can be extracted from brain signals at the individual level—now and in the near future—mean that scenarios involving “reading” the “mind” or “thoughts” will remain elusive for the time being. This will not deter scientists in public (or private-public) research institutions, nor researchers in technology companies (which have invested substantially in their own neuroscience and neurotechnology research in recent years (Kellmeyer, 2018; Strickland, 2017; Regalado, 2017; Clark, 2017)), however, from using brain data as an interesting class of personal data in multimodal deep learning analysis frameworks. In this scenario, we do not yet know whether: a) the combination of many different classes of data (e. g. user behavior, geolocation data, data from devices, brain data etc.) allows for hitherto unprecedented inferences on an individual’s first-person subjective (i. e. “mental”) experience and/or her personal identity (Kreitmair et al., 2017); or b) the aggregation of such multimodal data from an unprecedented number of individuals (e. g. in a large-scale “experiment” on an internet platform in which a company outfits thousands of users with a consumer neurotechnology device, e. g. a dry-cap EEG, for measuring and uploading brain data to the company’s servers) would allow particular groups of individuals to be identified on the basis of social, biological or other markers—which would raise concerns regarding the protection of group privacy (Ienca et al., 2018; Taylor et al., 2017).

In the neuroethics community and other research fields, these concerns have precipitated a discussion around whether brain data should be treated as a special class of data that needs extra protection in data protection guidelines and regulations (as is already the case for genetic data, for example) (Kellmeyer, 2018), perhaps even protected by “neurorights” that refer to basic human rights (Ienca and Andorno, 2017); or whether, in fact, the question of what actually constitutes biomedical or health-related data is becoming increasingly meaningless, as AI methods can make health-related inferences from many different types of data (and their combination and aggregation) that would previously not have been considered health-related at all (e. g. your movement patterns from your mobile phone, or your user behavior on the web).

Bias in human and artificial intelligence in interaction

The propensity to take “mental shortcuts” (also known as a → heuristic) in judgement is an inherent feature of human cognition and serves important purposes in everyday decision-making. If these heuristics, however, produce systematic skews in our decision-making, they are called biases, which, if accumulated over time, can produce substantial distortions of knowledge and behavior at both the individual and the societal level. These individual and societal biases are also an important driver in creating and maintaining social injustices, e. g. those rooted in prejudice, stereotyping, discrimination and other negative social attributions.

The data streams that power current large-scale AI systems, e. g. in translation engines, navigation systems or computer vision (e. g. face recognition technology), are based on human-derived knowledge structures (ontologies), and most artificial neural networks for deep learning are trained with data that require the input of human experts (e. g. for selecting and labeling the data). Therefore, any bias that is ingrained at the level of data selection, structuring, labeling and so forth may be reproduced, inflated and disseminated by an AI system that is trained on these biased data. Many examples in recent years show how this can lead to a perpetuation of social injustices and discrimination based on human biases, e. g. with respect to ethnicity, gender and other social markers (Knight, 2017; Baeza-Yates, 2016).
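
A toy example of this mechanism, on synthetic data and with an arbitrary decision rule, shows how a group-dependent skew in the training labels reappears in the predictions of a model trained on them; the groups, thresholds and sample sizes are purely illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)            # two hypothetical demographic groups
ability = rng.normal(size=n)             # the quantity we actually care about

# Biased historical labels: group 1 needed a higher ability to be labeled "positive".
label = (ability > np.where(group == 1, 0.5, 0.0)).astype(int)

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate {pred[group == g].mean():.2f}")
# The trained model reproduces the lower positive rate for group 1,
# even though 'ability' is identically distributed in both groups.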

Now, there is no easy fix for this deeply entrenched and interlocked problem of human biases and their spillover effects into AI bias. For one, human cognitive biases are almost impossible to contain effectively at the individual level, i. e. most behavioral training methods for so-called de-biasing have failed to show substantial, let alone sustainable, effects in reducing cognitive biases in humans (Smith and Slack, 2015; Croskerry et al., 2013a; Croskerry et al., 2013b). Furthermore, there is also no straightforward computational way to effectively de-bias AI systems, neither in terms of reducing the technical aspect of bias in algorithms (see → bias) nor in terms of the human-derived biased data structures and ontologies (Krywko, 2017; Geman et al., 1992). On the bright side, many computer scientists and data scientists have now recognized the problem and are actively working on potential ways to mitigate it (Courtland, 2018).

Meanwhile, however, it is important for researchers in neuroscience and clinicians to be aware (and to raise critical awareness) that automated AI systems may contain biases in their decision-making procedures.

The problem of “perpetual ethics”, governance of AI systems and fair access

In the legal, political and regulatory sphere, these developments raise questions as to whether existing regulatory and legal procedures suffice to ensure responsible research on, and effective governance of, AI across all sectors of society, while preserving the innovation dynamics of the beneficial applications of this emerging technology (Voeneky and Neuman, 2018; Kaebnick et al., 2016). Raising awareness and promoting a participatory societal discourse on the ethical issues around AI is a commendable and necessary first step for achieving a more inclusive process of deliberation and technology governance.

At the same time, however, the inherent complexity of human-AI interaction and the many stakeholders in AI could also produce a potential problem of “perpetual ethics”—an infinite loop of inter- and transdisciplinary debate without a mechanism and route for democratically legitimized and evidence-based sociopolitical adaptations in the form of laws, rules and regulations.

We can already see how the big five technology companies are all too eager to participate in (some might say usurp) the ethical discourse around AI and neurotechnology (Murgia and Shrikanth, 2019; Hoffmann, 2017). A democratically grounded process of multistakeholder deliberation on the ethics of AI and neurotechnology, however, requires equal and fair access to the debate in the public sphere, rather than the oligopolization of ethical discourse by academia, experts and big companies. Importantly, researchers of all career levels involved in AI-related disciplines (whether from a developmental, computer science perspective or in applied areas such as medicine) can actively participate in exerting counterpressure to this domination of the ethical discourse by private companies by engaging in science communication and public outreach at their institutions.

Furthermore, apart from this bottom-up process of a participatory discourse in societies on equal terms, the transnational nature of technology governance also requires the involvement of supranational bodies (such as the EU) and international organizations (such as UNESCO) in developing effective and adaptive instruments of governance (i. e. laws and regulations) that preserve the right and freedom to science while making sure that AI is used to nurture human well-being and flourishing rather than merely feeding the revenue streams of big technology companies.

Conclusions and outlook

The comprehensive technological change associated with big data, deep learning and the expansion of the digital infrastructure offers many reasons to hope for groundbreaking progress in basic and clinical neuroscience.

At the same time, neuroscience and neurotechnology, as academic and professional fields, should actively work towards embedding and integrating research and conceptual analysis on the ethical tensions in human-AI interaction into their activities. Basic ethics curricula, at all levels of secondary education and in all professions engaged in neuroscience and neurotechnology research and development, should become the norm rather than the exception. We need the coming generations of neuroscientists, programmers, engineers and other specialists to add ethical thinking and analysis to their methodological toolbox and professional capabilities.

To this end, the comparatively young academic fields of neuroethics (Kellmeyer et al., 2019) and neurolaw (Meynen, 2014) are emerging as particularly dynamic (and partly overlapping) research and teaching environments for addressing the manifold ethical, legal and social challenges from human-AI interaction in the arena of neurotechnology and neuroscience.

Ultimately, from a professional perspective, the engagement with the profound ethical challenges that are created by large-scale techno-social transformations such as AI (or gene editing) not only adds value to our identity as researchers and/or clinicians in neuroscience but may also, collectively, mitigate negative consequences of this rapid change for society.

About the author

Dr. Philipp Kellmeyer

Dr. Philipp Kellmeyer is a board-certified neurologist at the University Medical Center Freiburg, Germany. He studied medicine at the Universities of Heidelberg and Zurich and received a Master of Philosophy from the University of Cambridge (UK) on a full scholarship. He currently works as a clinical neuroscientist on a brain-computer interface to restore communication in severely paralyzed neurological patients. He is also a scientific member of the Cluster of Excellence (“Exzellenzcluster”) BrainLinks-BrainTools, an interdisciplinary research consortium on neurotechnological research at the University of Freiburg. In his neuroethical work he is particularly interested in the ethical challenges of emerging neurotechnologies, applications of big data and machine learning in clinical neuroscience, as well as ethics issues in disorders of consciousness and neurodegenerative diseases. He is also an affiliated researcher at the Institute for Biomedical Ethics and History of Medicine at the University of Zurich where he also teaches biomedical ethics. He is the leader of the Emerging Issues Task Force of the International Neuroethics Society and member of the Advisory Committee of the Neuroethics Network. In 2017, he received the “Förderpreis Bioethik” (Bioethics Prize) of the MTZ-Foundation for his neuroethics work. Since 2018, he is a member of the research focus “Responsible Artificial Intelligence” and Internal Senior Fellow at the Freiburg Institute for Advanced Studies (FRIAS) at the University of Freiburg.

Funding

This work was (partly) supported by the German Ministry of Education and Research (BMBF) (grant number 13GW0053D) to the Medical Center – University of Freiburg and the German Research Foundation (DFG), grant number EXC1086, to the University of Freiburg, Germany.

Glossary

Algorithm

Very generally, an algorithm is a procedure (e. g. a computation) for solving a particular problem by following a set of instructions step by step. In order to function properly, an algorithm must satisfy several conditions: the set of instructions must be definite and without contradictions; each step must be realizable; the description must be finite; the final step should produce a result; and the procedure should be determinate, in the sense that, when repeated under the exact same circumstances, it yields the same result and that at any given step there is only one option to proceed.
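
A classic example that satisfies all of these criteria is Euclid's algorithm for the greatest common divisor, sketched here in Python: the description is finite, every step is realizable and uniquely determined, and the final step returns the same result whenever the inputs are the same.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, deterministic step-by-step procedure."""
    while b != 0:            # each step is realizable and uniquely determined
        a, b = b, a % b      # replace (a, b) by (b, a mod b)
    return a                 # the final step produces the result

print(gcd(252, 198))         # always prints 18 for the same inputs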

Artificial Intelligence (AI)

Artificial intelligence is an umbrella term that has many different definitions that to some degree also depend on the goals that an AI system is designed to achieve. Most commonly, it refers to a subfield of computer science that aims at creating computer programs that can perform tasks that under usual circumstances would require human intelligence; e. g. speech perception, facial recognition, navigation, or other tasks.

Artificial Neural Networks (ANN)

In the field of → machine learning, an artificial neural network is a computing program architecture that is inspired by the structure of neural networks in animal brains. An ANN in its most basic form consists of different layers of interconnected units (called nodes or artificial neurons)—e. g. an input layer, an intermediate layer and an output layer. Each artificial neuron receives inputs (real numbers), combines them according to the “weights” of its incoming connections and a non-linear function, and passes the result on; it is these weights that are adjusted in various forms of learning. The performance of an ANN depends, among other factors, on the quality of the input data, the number of intermediate layers, the degree of connectedness between the nodes and the type of learning scenario (e. g. reinforcement learning).
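
For illustration, a minimal forward pass through such a network (input, one intermediate, one output layer) can be written in a few lines of NumPy; the layer sizes and random weights are arbitrary, and a real training procedure would iteratively adjust the weights.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)                    # a common non-linear activation

# Input layer (4 units) -> intermediate layer (8 units) -> output layer (2 units).
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # connection weights and biases
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)                           # one input example
hidden = relu(W1 @ x + b1)                       # each node: weighted sum + non-linearity
output = W2 @ hidden + b2
print(output)                                    # during learning, W1 and W2 would be adjusted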

Bias

In everyday language, bias refers to systematically skewed decision-making that is often associated with discrimination and other forms of unfairness. More systematically, e. g. in the field of psychology, a cognitive bias refers to a systematic tendency in human decision-making that skews decisions in a particular way (Tversky and Kahneman, 1975). One example, from the groundbreaking work on cognitive biases by the Israeli psychologists Tversky and Kahneman, would be the “availability bias”, i. e. the tendency to use information that is readily at hand for judgement (rather than including information that needs some sourcing). Cognitive biases can be a useful and adaptive → heuristic under circumstances that require rapid action but may equally be maladaptive or irrational in situations that require deeper deliberation or reflection.

In → machine learning and statistics, in contrast, bias refers to the difference between the expected value of an estimate of a parameter and that parameter’s true value.
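
This statistical sense of bias can be demonstrated numerically, for example with the sample-variance estimator that divides by n (biased) versus by n − 1 (unbiased); the distribution and sample size below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0                                   # variance of the sampled distribution

biased, unbiased = [], []
for _ in range(20000):
    sample = rng.normal(0.0, np.sqrt(true_var), size=5)
    biased.append(sample.var(ddof=0))            # divides by n
    unbiased.append(sample.var(ddof=1))          # divides by n - 1

print(np.mean(biased) - true_var)                # clearly below 0: systematic underestimate
print(np.mean(unbiased) - true_var)              # close to 0: (nearly) unbiased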

Big Data

There is no universally accepted definition of which parameters qualify a particular data set to be considered “big data” (Mauro et al., 2015). An early definition, still in use today in some form or another, by the technology consultancy Gartner® emphasized the aspects of “high-volume, high-velocity and/or high-variety information assets” (Gartner, 2003) as characteristic of big data sets.

Black Box (aspect of AI / Deep Learning)

The black box aspect of AI is a concern which is often invoked in discussions around questions of opaqueness, transparency and → interpretability of → deep learning (and some other AI methods). Usually it refers to the inability to retro-infer the information content and processes that have occurred in a trained deep neural network. One reason is that, unlike in your computer’s random access memory (RAM), information in deep neural networks is diffused throughout the layers and nodes, which makes it next to impossible to extract. In analogy to the brain, information storage in ANNs is reflected in the strength of the connections between units rather than in any particular set of nodes or layers. Many computer scientists are now working on opening this black box, but no general solution to the problem has been developed yet (Castelvecchi, 2016).

Brain Data

Data on the structure or function of the brain and its various components (networks, cells etc.); examples are MRI images, EEG recordings and other data types.

Convolutional Neural Network

A particular class of → artificial neural network based on deep learning (also: “deep neural network”) that is inspired by the connectivity patterns in the visual cortex. In a convolutional network, each node (a. k. a. “neuron”) in one layer is connected only to a local region (its receptive field) of the previous layer, and the same filter weights are shared across positions, rather than every node being fully connected to all nodes of the neighboring layer.
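
The contrast with full connectivity can be seen in a minimal one-dimensional convolution, in which a single small set of shared weights is slid across the input so that each output value depends only on a local neighbourhood; the signal and kernel below are arbitrary toy values.

import numpy as np

signal = np.array([0., 1., 2., 3., 4., 5.])
kernel = np.array([1., 0., -1.])                 # one shared filter (3 weights)

# Each output node sees only a local receptive field of 3 neighbouring inputs,
# and the same kernel weights are reused at every position (weight sharing).
output = np.array([
    np.dot(kernel, signal[i:i + 3]) for i in range(len(signal) - 2)
])
print(output)                                    # [-2. -2. -2. -2.]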

Deep Learning

A → machine learning method in which an → artificial neural network (in that case also referred to as a “deep neural network”) with many dozens to hundreds of layers is used for data analysis.

Emerging Technologies

General term that refers to technologies that have been demonstrated to function in a particular way, but are not yet fully developed and/or realized and are typically not available in the marketplace on a large scale. Current examples would be self-driving cars or brain-computer interfaces.

Generative Adversarial Networks (GANs)

A → machine learning method in which two → artificial neural networks compete with each other. One network, the generative network, produces data structures, for example human faces, and the other network, the discriminative network, evaluates the output with regard to certain set specifications (e. g. whether the faces resemble the faces of famous people on which the discriminator network has been trained with large amounts of data). The generative network produces data (faces) until the discriminator network is unable to distinguish between real faces (that it has been trained on) and generated faces. This method is a powerful tool for increasing the amount of data available for training neural networks (data augmentation) but can also be used to produce fake content such as images or videos (“deep fakes”).
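
A minimal, purely illustrative GAN on a one-dimensional toy problem (learning to produce samples from a normal distribution rather than faces or EEG signals) can be sketched in PyTorch as follows; the network sizes, learning rates and number of steps are arbitrary assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: the generator should learn to produce samples from N(3, 1).
real_sampler = lambda n: torch.randn(n, 1) + 3.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real from generated samples.
    real = real_sampler(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift towards ~3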

Governance

The process of governing (by supranational bodies, regional or local authorities) a particular social organization unit or social system, e. g. a state, territory or community, via laws, regulation and power.

Graphics Processing Unit (GPU)

Parallelized circuits that are specialized for graphics and image processing and are generally more efficient than conventional central processing units (CPUs) for such highly parallel computations.

Heuristic

In cognitive psychology, a heuristic describes a method for problem-solving, e. g. by an individual, that relies on immediate and highly automated patterns and/or actions, for example the application of guesses, “rules of thumb” or other types of intuitive judgments.

Interpretability (of → Machine Learning)

The ability to correctly interpret the results of a machine learning analysis, for example in terms of the distinctive classes or features that the machine learning program has produced when analyzing the data.

Machine Learning (ML)

The most widely used description of machine learning is “learning without being explicitly programmed”, which points to the fact that machine learning methods, at the most general level of description, enable a software / algorithm to discriminate patterns in data and/or make predictions by learning distinctive features from these data that are not part of the original set of programming instructions. There is now an ever-growing variety of machine learning methods, of which → artificial neural networks for → deep learning have been the most popular and successful in recent years.

Persuasive Technologies

A concept from human-technology interaction studies in which technologies by virtue of their particular design features and functions may make the interaction very persuasive for humans. Persuasiveness can have a positive connotation, in the sense that a device’s design enables a compelling and intuitive user experience, but can also be perceived as negative, in the sense of being overly manipulative or even deceptive (Fogg, 2003).

User Experience Design

An interdisciplinary research field at the intersection of industrial design, psychology and cognitive science that studies human-technology interaction from a user-centered perspective.

User Interface

A graphics display or other type of output device that lets a user interact with a computer system. It can have different features such as being static, dynamic, touch sensitive, or adaptive.

Technological Solutionism

The societal tendency to turn to technology first, rather than to sociopolitical action, for solving complex problems in the social realm (Morozov, 2014). Examples would be responding to shortages in human caregivers by implementing a large-scale program for care robots, or combatting social isolation and loneliness in elderly people with a program of free virtual reality headsets (with an accompanying virtual platform for online interaction).

References

Akkus, Z., Galimzianova, A., Hoogi, A., et al. (2017). Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging 30, 449–459. https://doi.org/10.1007/s10278-017-9983-4

Amadio, J., Bi, G.-Q., Boshears, P.F., et al. (2018). Neuroethics questions to guide ethical research in the international brain initiatives. Neuron 100, 19–36. https://doi.org/10.1016/j.neuron.2018.09.021

Baeza-Yates, R. (2016). Data and Algorithmic Bias in the Web. In: Proceedings of the 8th ACM Conference on Web Science. ACM, New York, NY, USA, p. 1. https://doi.org/10.1145/2908131.2908135

Berényi, A., Belluscio, M., Mao, D., Buzsáki, G. (2012). Closed-Loop Control of Epilepsy by Transcranial Electrical Stimulation. Science 337, 735–737. https://doi.org/10.1126/science.1223154

Burget, F., Fiederer, L.D.J., Kuhner, D., et al. (2017). Acting thoughts: Towards a mobile robotic service assistant for users with limited communication skills. IEEE, pp. 1–6. https://doi.org/10.1109/ECMR.2017.8098658

Castelvecchi, D. (2016). Can we open the black box of AI? Nature News 538, 20. https://doi.org/10.1038/538020a

Clark, L. (2017). Elon Musk reveals more about his plan to merge man and machine with Neuralink. Wired UK.

Courtland, R. (2018). Bias detectives: the researchers striving to make algorithms fair. Nature. http://www.nature.com/articles/d41586-018-05469-3. Accessed 29 June 2018. https://doi.org/10.1038/d41586-018-05469-3

Croskerry, P., Singhal, G., Mamede, S. (2013a). Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf 22, ii58–ii64. https://doi.org/10.1136/bmjqs-2012-001712

Croskerry, P., Singhal, G., Mamede, S. (2013b). Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf. https://doi.org/10.1136/bmjqs-2012-001713

Fogg, B.J. (2003). Persuasive Technology: Using Computers to Change what We Think and Do. Morgan Kaufmann.

Gartner (2003). What Is Big Data? – Gartner IT Glossary – Big Data. https://www.gartner.com/it-glossary/big-data/. Accessed 27 May 2019.

Gawehn, E., Hiss, J.A., Schneider, G. (2016). Deep Learning in Drug Discovery. Molecular Informatics 35, 3–14. https://doi.org/10.1002/minf.201501008

Geman, S., Bienenstock, E., Doursat, R. (1992). Neural Networks and the Bias/Variance Dilemma. Neural Computation 4, 1–58. https://doi.org/10.1162/neco.1992.4.1.1

Hartmann, K.G., Schirrmeister, R.T., Ball, T. (2018). EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals. arXiv:1806.01875 [cs, eess, q-bio, stat].

Hassabis, D., Kumaran, D., Summerfield, C., Botvinick, M. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron 95, 245–258. https://doi.org/10.1016/j.neuron.2017.06.011

Hoffmann, A.L. (2017). A Chief Ethics Officer Won’t Fix Facebook’s Problems. Slate Magazine. https://slate.com/technology/2017/01/a-chief-ethics-officer-wont-fix-facebooks-problems.html. Accessed 28 May 2019.

Ienca, M., Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy 13, 5. https://doi.org/10.1186/s40504-017-0050-1

Ienca, M., Kressig, R.W., Jotterand, F., Elger, B. (2017). Proactive Ethical Design for Neuroengineering, Assistive and Rehabilitation Technologies: the Cybathlon Lesson. Journal of NeuroEngineering and Rehabilitation 14, 115. https://doi.org/10.1186/s12984-017-0325-z

Ienca, M., Haselager, P., Emanuel, E.J. (2018). Brain leaks and consumer neurotechnology. Nature Biotechnology 36, 805–810. https://doi.org/10.1038/nbt.4240

Illes, J. (2017). Neuroethics: Anticipating the future. Oxford University Press. https://doi.org/10.1093/oso/9780198786832.001.0001

Kaebnick, G.E., Heitman, E., Collins, J.P., et al. (2016). Precaution and governance of emerging technologies. Science 354, 710–711. https://doi.org/10.1126/science.aah5125

Kellmeyer, P., Cochrane, T., Müller, O., et al. (2016). The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems. Camb Q Healthc Ethics 25, 623–633. https://doi.org/10.1017/S0963180116000359

Kellmeyer, P. (2018). Big Brain Data: On the Responsible Use of Brain Data from Clinical and Consumer-Directed Neurotechnological Devices. Neuroethics. https://doi.org/10.1007/s12152-018-9371-x

Kellmeyer, P., Chandler, J., Cabrera, L.Y., et al. (2019). Neuroethics at 15: The Current and Future Environment for Neuroethics. AJOB Neuroscience. https://doi.org/10.1080/21507740.2019.1632958

Kleesiek, J., Urban, G., Hubert, A., et al. (2016). Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. NeuroImage 129, 460–469. https://doi.org/10.1016/j.neuroimage.2016.01.024

Knight, W. (2017). Biased algorithms are everywhere, and no one seems to care. MIT Technology Review.

Kreitmair, K.V., Cho, M.K., Magnus, D.C. (2017). Consent and engagement, security, and authentic living using wearable and mobile health technology. Nature Biotechnology 35, 617–620. https://doi.org/10.1038/nbt.3887

Krywko, J. (2017). To fix algorithmic bias, we first need to fix ourselves. Quartz. https://qz.com/1055145/ai-in-the-prison-system-to-fix-algorithmic-bias-we-first-need-to-fix-ourselves/. Accessed 17 Aug 2017.

Litjens, G., Sánchez, C.I., Timofeeva, N., et al. (2016). Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific Reports 6, 26286. https://doi.org/10.1038/srep26286

Marblestone, A.H., Wayne, G., Kording, K.P. (2016). Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 10, 94. https://doi.org/10.3389/fncom.2016.00094

Mauro, A.D., Greco, M., Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings 1644, 97. https://doi.org/10.1063/1.4907823

Meynen, G. (2014). Neurolaw: Neuroscience, Ethics, and Law. Review Essay. Ethic Theory Moral Prac 17, 819–829. https://doi.org/10.1007/s10677-014-9501-4

Milletari, F., Ahmadi, S.-A., Kroll, C., et al. (2017). Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Computer Vision and Image Understanding 164, 92–102. https://doi.org/10.1016/j.cviu.2017.04.002

Mittelstadt, B.D., Allo, P., Taddeo, M., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society 3. https://doi.org/10.1177/2053951716679679

Morozov, E. (2014). To save everything, click here: the folly of technological solutionism. PublicAffairs, New York.

Murgia, M., Shrikanth, S. (2019). How Big Tech is struggling with the ethics of AI. Financial Times. https://www.ft.com/content/a3328ce4-60ef-11e9-b285-3acd5d43599e. Accessed 28 May 2019.

Neely, R.M., Piech, D.K., Santacruz, S.R., et al. (2018). Recent advances in neural dust: towards a neural interface platform. Current Opinion in Neurobiology 50, 64–71. https://doi.org/10.1016/j.conb.2017.12.010

Pereira, S., Pinto, A., Alves, V., Silva, C.A. (2016). Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Transactions on Medical Imaging 35, 1240–1251. https://doi.org/10.1109/TMI.2016.2538465

Popova, M., Isayev, O., Tropsha, A. (2018). Deep reinforcement learning for de novo drug design. Science Advances 4, eaap7885. https://doi.org/10.1126/sciadv.aap7885

Regalado, A. (2017). Google’s health-care mega-project will track 10,000 Americans. MIT Technology Review.

Schirrmeister, R.T., Springenberg, J.T., Fiederer, L.D.J., et al. (2017a). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping 38, 5391–5420.

Schirrmeister, R.T., Springenberg, J.T., Fiederer, L.D.J., et al. (2017b). A novel deep learning approach for classification of EEG motor imagery signals. J Neural Eng 14, 016003. https://doi.org/10.1088/1741-2560/14/1/016003

Schirrmeister, R.T., Gemein, L., Eggensberger, K., et al. (2018). P64. Deep learning for EEG diagnostics. Clinical Neurophysiology 129, e94. https://doi.org/10.1016/j.clinph.2018.04.699

Smith, B.W., Slack, M.B. (2015). The effect of cognitive debiasing training among family medicine residents. Diagnosis 2, 117–121. https://doi.org/10.1515/dx-2015-0007

Strickland, E. (2017). Facebook Announces “Typing-by-Brain” Project. IEEE Spectrum. https://spectrum.ieee.org/the-human-os/biomedical/bionics/facebook-announces-typing-by-brain-project. Accessed 22 Sep 2017.

Taylor, L., Floridi, L., Van der Sloot, B. (2017). Group privacy: new challenges of data technologies. Springer, Cham. https://doi.org/10.1007/978-3-319-46608-8

Topol, E.J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 25, 44–56. https://doi.org/10.1038/s41591-018-0300-7

Tversky, A., Kahneman, D. (1975). Judgment under Uncertainty: Heuristics and Biases. In: Wendt, D., Vlek, C. (eds.). Utility, Probability, and Human Decision Making. Springer Netherlands, pp. 141–162.

Voeneky, S., Neuman, G.L. (2018). Human Rights, Democracy, and Legitimacy in a World of Disorder. Cambridge University Press. https://doi.org/10.1017/9781108355704

Wang, Z., She, Q., Smeaton, A.F., et al. (2019). Neuroscore: A Brain-inspired Evaluation Metric for Generative Adversarial Networks. arXiv:1905.04243 [cs, eess].

Yuste, R., Goering, S., Arcas, B.A. y, et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature News 551, 159. https://doi.org/10.1038/551159a

Published Online: 2019-11-06
Published in Print: 2019-11-26

© 2019 Walter de Gruyter GmbH, Berlin/Boston
