Keynote Speakers and Plenary Sessions

KEYNOTE SPEAKERS

Prof. Katja Kwastek

Katja Kwastek is Professor of Modern and Contemporary Art History at the Vrije Universiteit Amsterdam. Prior to this, she taught at the Ludwig-Maximilians-University Munich. Her research focuses on processual, digital and post-digital art, media history, theory and aesthetics, and the digital and environmental humanities. In 2004, she curated the first international exhibition and conference project on “Art and Wireless Communication”. She has lectured internationally and published numerous books and essays, most recently “Aesthetics of Interaction in Digital Art” (MIT Press, 2013).

The title of her talk is "E-CO ART? When Electronic and Ecological Arts Meet".

Abstract: At first sight, the conjunction of ecological and electronic arts might evoke, at maximum, ‘the chance meeting on a dissecting-table of a sewing-machine and an umbrella!'. However, if one digs deeper, the interrelations are striking. This lecture will explore these (contemporary, but also historic) interrelations along three lines of thought: the shared fascination of both ecological and electronic arts with processuality, the increasing impact of digital technologies on our concepts of the environment, and the biased attitude of both electronic and ecological arts towards questions of applicability versus artistic autonomy.

Prof. Dominic McIver Lopes

The philosopher Dominic McIver Lopes writes on the nature and significance of art and the aesthetic. He has traced the aesthetic and epistemic value of images to how they extend the powers of human perception. In pioneering research on interactive computer art, he reveals how technology supports new kinds of aesthetic action. Urging caution about approaches to the aesthetic that centre on art, he is developing a theory of aesthetic values as guiding agents who are engaged in a huge range of aesthetic projects. Lopes, a Fellow of the Royal Society of Canada, teaches at the University of British Columbia. He has held a Leverhulme Trust Visiting Research Professorship, a Guggenheim Fellowship, and a Canada Council Killam Research Fellowship.

The title of his talk is "Aesthetic Value in the Network Era".

Abstract: Traditional understandings of aesthetic value are inadequate: they fail to model how aesthetic values are embedded in social practices. In consequence, they also misunderstand the role of communication about aesthetic value. This talk argues that new information technologies open up new modes of communication that have a profound effect on our aesthetic practices.

Dr. Ayanna Howard

Ayanna Howard is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing in the College of Computing at the Georgia Institute of Technology. She also holds a faculty appointment in the School of Electrical and Computer Engineering. Dr. Howard’s career focus is the development of intelligent technologies that can function within a human-centered world. Her work encompasses advances in artificial intelligence, assistive technologies, and robotics, ranging from the development of healthcare robots for the home to AI-powered STEM apps for children with diverse learning needs. Dr. Howard received her B.S. in Engineering from Brown University, and her M.S. and Ph.D. in Electrical Engineering from the University of Southern California.

To date, her accomplishments have been recognized through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named one of the 23 most powerful women engineers in the world by Business Insider. In 2013, she founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. Prior to Georgia Tech, Dr. Howard was a senior robotics researcher at NASA's Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and the Associate Chair for Faculty Development in the School of Electrical and Computer Engineering at Georgia Tech.

The title of her talk is "The Human-Centered Design of Robotics for Social Impact".

Abstract: The Robots are coming! The Robots are coming! The Robots are already… here. In recent months, there has been an upsurge in the attention given to robots and artificial intelligence (AI) and their inevitable destruction of the human race if we are not watchful. Whether your opinion sits on one side or the other, the fact remains: robots have already become a part of our society and, in some cases, an integral part. No longer is a robot chauffeur, i.e. an autonomous robot car that can drive an individual to work, the whimsical thought of a science-fiction movie director. No longer is a robot suit, i.e. a robot exoskeleton that can assist a paraplegic to walk, the fantasy story of a writer. This is not to argue against being vigilant (ethical considerations concerning the inclusion of new technology in society should always be part of the discussion), but alongside the doom-and-gloom messages about robots and AI, intelligent robots are also being seen as beneficial, life-saving machines that assist us in our everyday lives. Telepresence robots are transforming health care delivery in the hospital setting and are being used in medical applications ranging from newborn care to stroke treatment. Wearable robotic exoskeletons are helping paralyzed patients stand up and walk in the home environment. And a host of startup companies are working on the next generation of therapy robots for children. In this talk, Dr. Howard will discuss the domain of robots for real-world applications, with a focus on their human-centered design. She will give an overview of how these technologies can address real-life needs and improve our quality of life now and in the future.
 

Prof. Ken Rinaldo

Ken Rinaldo is the Director of the Art and Technology Program in the Department of Art at Ohio State University. He is internationally recognized as a pioneer in interactive bio art and robotic installations that blur the boundaries between the organic and the inorganic. His work interrogates these fuzzy boundaries and posits that, as new machine and algorithmic species arise, we need to better understand the complex intertwined ecologies that these machinic, semi-living species create.

Rinaldo's works have been shown and commissioned internationally by museums, festivals and galleries such as the Hermitage Museum Russia, Nuit Blanche Canada, World Ocean Museum Russia, Ars Electronica Austria, CAFA Museum China, Lille International Arts Festival France, la Maison d’Ailleurs Switzerland, Vancouver Olympics Canada, Platform 21 Holland, Transmediale Berlin, AV Festival England, Caldas Museum of Art Colombia, Arco Arts Festival Spain, Te Papa Museum, Wellington New Zealand, The National Museum of China, Centro Andaluz de Arte Contemporaneo in Seville Spain, Kiasma Museum Finland, Museum of Contemporary Art Chicago, Pan Palazzo Delle Arti Italy, V2 DEAF Holland, Siggraph Los Angeles, Exploratorium San Francisco, Itau Museum Brazil, Biennial for Electronic Art Australia and the National Center for Contemporary Arts Russia.

Rinaldo received an Award of Distinction at Ars Electronica Austria in 2004 for Augmented Fish Reality, and first prize in Vida 3.0, an artificial life competition in Madrid, for his work Autopoiesis, which also won an honorable mention at Ars Electronica in 2001. Augmented Fish Reality is a trans-species artwork in which Siamese fighting fish move their tanks under their own control. In 2008, Rinaldo and Youngs were awarded a Green Leaf Award from the United Nations Environment Fund for their Farm Fountain, an aquaponics project in which fish and bacteria feed plants, which humans then consume. Rinaldo is the recipient of three Battelle Endowment grants and was a cultural Olympian for the Vancouver Olympics in 2009, which commissioned three Paparazzi Robots that autonomously photographed attendees.

Rinaldo is a member of the Senior Academic Board of Antennae Magazine and the author of Interactive Electronics for Artists and Inventors. His work has been featured on radio and TV internationally, including CNET, BBC, ORF, CNN, CBC and the Discovery Channel. Select publications: Art and Electronic Media by Edward Shanken; Evolution Haute Couture: Art and Science in the Post-Biological Age, edited by Dmitry Bulatov; Art and Science by Steve Wilson; Inside Art e Sciencia, edited by Leonel Moura; Politics of the Impure (V2 Publishing); Digital Art by Christiane Paul; the NY Times; Information Arts; Contemporary Italy; NY Arts Magazine; Art Press Paris; Tema Celeste Italy; and Wired Magazine.

At The Ohio State University, Rinaldo is an artist and professor teaching contemporary art practices and technology within the College of Arts and Sciences, specializing in robotics, 3D modeling, rapid prototyping/fabrication, and 3D animation.

The title of his talk is "Trans-species symbiogenesis".

Abstract: The junctures where machine, animal, plant, bacteria, and humans meet are where our futures exist. Three decades of creating interactive robotic art have taught Prof. Rinaldo that living systems provide the ultimate models of what technology can become. Communication is at the heart of his work, with a desire to break down and reveal the behavior, processes, and patterns inherent in natural and, now, semi-living species. His work exposes the underlying beauty inherent in this intercommunication of all species (organic and machinic) at all scales. Just as anaerobic bacteria receded into our stomachs 2.5 billion years ago and are now symbiotically intertwined with our survival, so we too are receding into a comfortable embryonic sac, enveloped by our technologies. A new species, neither human nor machine, is emerging; we are becoming, and have become, symbionts. Still, technology presents social and environmental challenges and evolves more quickly than biologically intertwined natural living systems can coevolve. This talk offers observations, and solutions, on where we are heading with technologies that at times seem more parasitic than symbiotic.

 

PLENARISTS

Mr. Gene Kogan

Gene Kogan is an internationally renowned American artist and programmer interested in generative systems, computer science, and software for creativity and self-expression. He is a collaborator on numerous open-source software projects, and gives workshops and lectures on topics at the intersection of code and art. Gene initiated ML4A, a free book about machine learning for artists, activists, and citizen scientists, and regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the subject. He has recently offered courses on machine learning for artists at the Interactive Telecommunications Program, Tisch School of the Arts, NYU, and gave a machine learning workshop at the School of Creative Media, City University of Hong Kong in May 2018.

The title of his talk is "The Neural Aesthetic".

Abstract: The talk explores the use of artificial intelligence for new media art. Recent advances in machine learning have made it possible to generate realistic images, sounds, and texts from models built on top of real-world data, inspiring a surge of creative works.

Mr. Kogan will review the state of the art in the field, present a selection of art projects and interactive installations from the past year, and speculate on future directions as the science and the art rapidly converge.

Finally, a selection of educational resources will be presented for those curious to experiment with the technology themselves.

Mr. Memo Akten

Memo Akten is an artist working with computation as a medium, exploring the collisions between nature, science, technology, ethics, ritual, tradition and religion. Combining critical and conceptual approaches with investigations into form, movement and sound, he creates data dramatizations of natural and anthropogenic processes. Alongside his practice, he is currently working towards a PhD at Goldsmiths, University of London, in artificial intelligence and expressive human-machine interaction. His work has been shown and performed internationally and featured in books and academic papers; in 2013 Akten received the Prix Ars Electronica Golden Nica for ‘Forms’, his collaboration with Quayola.

The title of his talk is "Intelligent Machines that Learn: What Do They Know? Do They Know Things?? Let's Find Out!".

Abstract: As machines become 'smarter', more autonomous and ubiquitous, how does this impact human creativity, and the role of the artist? In this talk, Mr. Akten will briefly cover some of his own meanderings in this area, particularly within the context of recent developments in machine learning. This includes explorations in i) real-time, interactive computational systems to augment artistic, creative expression, ii) semi-autonomous systems for human-machine collaborative co-creation, and iii) reflections on how we make sense of the world, projecting meaning onto noise.

Dr. Philippe Pasquier

Dr. Philippe Pasquier is an Associate Professor in the School of Interactive Arts and Technology and an Adjunct Professor in Cognitive Science at Simon Fraser University. He is a scientist specializing in artificial intelligence and generative systems, a multidisciplinary artist, an educator, and a community builder. His contributions range from theoretical research in multi-agent systems, computational creativity, and machine learning to applied artistic research and practice in digital art, computer music, and generative art.

The title of his talk is "Advances in Creative AI and computer-assisted creativity".

Abstract: Computational Creativity, also known as Creative AI, brings together scientists and artists to design generative systems that partially or completely automate creative tasks. Dr. Pasquier will introduce and motivate these new developments of artificial intelligence and machine learning towards generative systems and computer-assisted creativity. He will illustrate the discussion with examples of systems designed and developed at the Metacreation Laboratory that compose music, automate sound design for film and video games, generate presets for sound synthesizers, and generate animations of 3D characters. He will discuss how these systems are evaluated and deployed, either in artworks or in industry.


Ms. Jennifer Gradecki

Jennifer Gradecki is an artist and theorist who aims to facilitate a practice-based understanding of socio-technical systems that typically evade public scrutiny. Using methods from institutional critique, tactical media, and information activism, she investigates information as a source of power and resistance. Her work has focused on Institutional Review Boards, social science techniques, financial instruments and, most recently, intelligence agencies and technologies of mass surveillance. She has published in Leonardo and Big Data & Society and has participated in numerous international exhibitions and conferences. Her work has been commissioned by Science Gallery Dublin and funded by the Puffin Foundation. She is currently an Assistant Professor at Northeastern University.

The title of her talk is "Automating the Mosaic: Machine Learning in Dataveillance Practices".

Abstract: The mosaic metaphor of intelligence analysis—the notion that seemingly insignificant pieces of information, when combined, can produce a revealing picture—has contributed to the current collect-it-all approach of intelligence agencies. The desire to construct a ‘complete’ picture drives the mass collection of data and produces information overload, which leads agencies to automate analysis. While machine learning algorithms can automate the process of intelligence analysis, mistakes in the data used to train the corpus will replicate erroneous judgments. This talk will discuss the techniques and technologies of automation in intelligence analysis, as well as the assumptions, metaphors, and modes of representation that underpin dataveillance practices. These topics will be discussed via the artistic research project, the Crowd-Sourced Intelligence Agency, a partial replication of an Open Source Intelligence processing system. In order for the public to question the use of statistical pattern recognition algorithms in place of human judgment, they need the technical literacy to understand how these systems work and the data they produce, as well as access to the data that intelligence agencies use to train their algorithms.

Dr. Derek Curry

Derek Curry is an Assistant Professor in the College of Arts, Media and Design at Northeastern University. His interdisciplinary practice combines artistic production with research techniques from the humanities, science and technology studies, natural language processing, artificial intelligence, and machine learning. He uses a practice-based research approach to create artworks and games that provide an experiential understanding of topics where information may be limited, such as automated decision-making systems used by algorithmic stock trading systems and Open Source Intelligence (OSINT) gathering practices. He has been published in Big Data & Society and his artwork has been widely exhibited at venues and festivals.

The title of his talk is "Hacktivism in the age of Automated Decision-Making". 

Abstract: The increase in automation and networked capabilities that has resulted in a pervasive surveillance by machines has also opened new spaces for creative disruption and intervention. For example, in 2013, a tweet made from the @AP twitter account after it was hacked by the Syrian Electronic Army caused a flash crash that momentarily wiped out $130 billion from the markets. In 2018, Google’s search algorithms were manipulated by British protestors to make images of Donald Trump the top results from a search for ‘American Idiot’. This talk will position the disruption of algorithms within the history of tactical media and hacktivism, as well as explore how artists can use the same tactics for creative dissent within this new paradigm.

 


Prof. Ernest Edmonds

Ernest Edmonds is a pioneering computer artist and HCI innovator for whom combining creative arts practice with creative technologies has been a life-long pursuit. In 2017 he won both the ACM SIGCHI Lifetime Achievement Award for Practice in Human-Computer Interaction and the ACM SIGGRAPH Distinguished Artist Award for Lifetime Achievement in Digital Art. He is Professor of Computational Art at De Montfort University, Leicester UK, and Chairman of the Board of ISEA International. His most recent book is “The Art of Interaction: What HCI Can Learn from Interactive Art” (Morgan & Claypool, 2018). He is an Honorary Editor of Leonardo and Editor-in-Chief of Springer’s Cultural Computing book series. His work was recently described in the book by Francesca Franco, “Generative Systems Art: The Work of Ernest Edmonds” (Routledge, 2017).

The title of his talk is "Art and Influence: Learning in Augmented Worlds".

Abstract: AI is important in interactive art. The art reaches beyond the computer game paradigm to explore lifelong evolution and the building of relationships. In a distributed, connected world, a new art of evolving and connected systems is emerging. The worlds in which these new art forms exist extend to virtual and augmented realities and the physical environment. Prof. Edmonds begins by describing his “Shaping Form” series of dynamic works. Images are generated using rules that determine the colours, the patterns and the timing. A camera captures movement, which changes the generative rules. The future behaviour of each “Shaping Form” thus evolves as a result of its interaction with the world. But what do we really mean here by interaction? Given the evolving nature of these works, the words influence, stimulus or interchange are more appropriate than interaction. Prof. Edmonds uses machine learning methods to implement his art. He shows how these methods have been extended to make distributed sets of interactive nodes form a networked art system. The community made up of the work’s distributed audience collectively influences the progress and development of the art system. Finally, the talk describes how the concept is being extended again, this time into a dynamic, distributed augmented reality world.
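For readers unfamiliar with camera-influenced generative rules, the sketch below gives a rough sense of the general idea: detected motion from a webcam nudges the parameters governing colour and timing. It is only an illustration of the pattern, not Edmonds' system or code; the webcam index, window size and rule parameters are all assumptions.

```python
# Minimal sketch: a rule-driven colour field whose generative parameters
# ("rules") drift when camera movement is detected. Illustrative only --
# not the "Shaping Form" implementation.
import cv2          # assumes opencv-python is installed
import numpy as np

cap = cv2.VideoCapture(0)            # assumed webcam at index 0
hue, speed = 0.0, 0.5                # the "rules": base colour and timing
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        motion = cv2.absdiff(gray, prev).mean()                # crude movement measure
        hue = (hue + motion * 0.1) % 180                       # movement nudges the colour rule
        speed = float(np.clip(speed + (motion - 5.0) * 0.01, 0.1, 2.0))  # and the timing rule
    prev = gray

    # Render the evolving form: a flat colour whose hue follows the current rules.
    canvas = np.full((400, 400, 3), (int(hue), 200, 200), dtype=np.uint8)
    cv2.imshow("shaping form (sketch)", cv2.cvtColor(canvas, cv2.COLOR_HSV2BGR))
    if cv2.waitKey(int(30 / speed)) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Because the rule state persists across frames, the sketch's future behaviour depends on its whole interaction history, which is the sense in which "influence" fits better than "interaction".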

Ms. Anna Ridler

Anna Ridler is an artist and researcher whose practice brings together technology, literature and drawing to create both art and critical writing. She works with abstract collections of information or data, particularly self-generated data sets, to create new and unusual narratives in a variety of mediums, exploring how new technologies, such as machine learning, can be used to translate them clearly to an audience and to speak about other things: memory, love, decay. She has degrees from the Royal College of Art, Oxford University and the University of the Arts London, and has shown at a variety of cultural institutions and galleries including Ars Electronica, Sheffield Documentary Festival, Leverhulme Centre for Future Intelligence, Tate Modern, Centre Pompidou and the V&A.

The title of her talk is "The Artistic Potential of Computer Vision". 

Abstract: Research has looked at whether artificial intelligence, and more particularly machine learning, can create art. However, the focus of this work has been to consider and judge the result as “art” through the impact of visual parameters on a viewer (i.e. “does this look like art?”). This ignores a vital consideration of an artist when producing a piece: the impact that the materials used have. Ms. Ridler will explore what machine learning can add to or take away from a piece, and particularly examine the importance of datasets as a medium. Much of the critical reception of creative AI in the press and academia has focused on the model output; however, it is also important to regard datasets and dataset creation as separate works, or as parallel works that speak to the generated piece, and to treat and critique them as such.

 

Ms. Theresa Reimann-Dubbers

Theresa Reimann-Dubbers is a German artist whose work concerns the environment created by technologies. She is interested in tracing genealogies of emerging technologies in order to study their societal and environmental impact. She conducts theoretical investigations that lead to speculations upon alternative technological realities in the form of objects and installations. 

She previously studied at the Royal College of Art and is currently working towards an MA at the Berlin University of the Arts. Her artworks have been shown internationally at Ars Electronica, Science Gallery Dublin and NIPS.

The title of her talk is "Borderline Speculation".

Abstract:
Knowledge of the human world is transferred to machines.
Aspects of the human condition are defined, then translated into the language of machines.
Definitions are precise and exclusive; the human condition, however, is not.
The border between the realm of humans and that of machines is a filter.
Things pass through (filtrate); things are left behind (residue).
This talk examines the nature and significance of the residue and discusses art as investigations into the contradictory, the latent and the divergent.


Dr. Jing Liao

Jing Liao received dual Ph.D. degrees from Zhejiang University and the Hong Kong University of Science and Technology in 2014 and 2015. She was a researcher in the Visual Computing Group at Microsoft Research Asia (MSRA) from 2015 to 2018. She is now an Assistant Professor in the Department of Computer Science, City University of Hong Kong. Her research interests span computer graphics, computer vision and deep learning, with a current focus on applying deep learning to digital arts and media. Her research results have been published in several top conferences and journals (SIGGRAPH, TOG, TVCG, CVPR, ICCV), and some technologies she developed have been transferred to Microsoft products, including Xiao Ice, Pix and Skype.

The title of her talk is "Image and Video Stylization with Deep Neural Networks".

Abstract: Painting and drawing are popular art forms, and people have long been drawn to them through many celebrated artworks, e.g., Leonardo da Vinci’s “Mona Lisa”. However, manually drawing or painting an image in a particular artistic style requires professional artists and a great deal of time. In this talk, Dr. Liao will first introduce computational approaches that can automatically and efficiently render a photograph in an artistic style learned from a piece of artwork or a collection of artworks, by leveraging deep neural networks. Next, she will introduce the extension of neural stylization from images to video and to stereoscopic images/video, which are emerging with recent virtual reality hardware.
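As a point of reference for how neural stylization is often implemented, the sketch below follows the widely used Gatys-style optimisation approach in PyTorch. It is an illustrative baseline under assumed inputs ("photo.jpg", "painting.jpg") and a recent torchvision API, not necessarily the specific methods Dr. Liao will present.

```python
# Minimal sketch of Gatys-style neural stylization: optimise a photo so its
# deep VGG features match the content image while its Gram matrices match
# the style image. (ImageNet normalisation omitted for brevity.)
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("photo.jpg")      # hypothetical input files
style = load("painting.jpg")

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers, content_layer = {0, 5, 10, 19, 28}, 21    # conv1_1..conv5_1 and conv4_2

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in style_layers or i == content_layer:
            feats[i] = x
    return feats

def gram(f):                      # channel-by-channel feature correlations
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_c = features(content)[content_layer].detach()
target_s = {i: gram(f).detach() for i, f in features(style).items() if i in style_layers}

img = content.clone().requires_grad_(True)               # start from the photo
opt = torch.optim.Adam([img], lr=0.01)

for step in range(300):
    opt.zero_grad()
    feats = features(img)
    loss = F.mse_loss(feats[content_layer], target_c)     # keep the photo's content
    for i in style_layers:
        loss = loss + 1e4 * F.mse_loss(gram(feats[i]), target_s[i])  # adopt the painting's style
    loss.backward()
    opt.step()

transforms.ToPILImage()(img.detach().clamp(0, 1).squeeze(0).cpu()).save("stylized.png")
```

Extending such stylization to video typically adds a temporal-consistency term so that consecutive frames do not flicker, which is one of the directions the abstract refers to.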


 

Prof. De Kai Wu

De Kai’s cross-disciplinary work in language, music, artificial intelligence and cognition centers on enabling cultures to interrelate in creative ways. As an AI professor, he is among only 17 scientists worldwide named by the Association for Computational Linguistics as a Founding ACL Fellow, for his pioneering contributions to machine translation that established cross-lingual machine learning foundations of systems like the Google/Yahoo/Microsoft translators. As a musician, he created the transcultural soul/pop collective ReOrientate, whose signature use of cross-cultural cognitive musical illusions draws from the Asian diaspora of Chinese, Indian, Middle Eastern, Southeast Asian, and flamenco music and dance, blending them through electronica and virtual reality. His computational creativity research led to award-winning AI models that learn flamenco, blues, and hip hop. De Kai's PhD thesis at the University of California at Berkeley was one of the first to construct probabilistic machines that learn to understand human languages. Recruited directly from Berkeley as founding faculty of the now world-ranked Hong Kong University of Science and Technology, he co-founded HKUST's internationally funded Human Language Technology Center, which has been shaping new language and music AI paradigms ever since launching the world's first web translator over twenty years ago. In 2015, Debrett's HK 100 recognized him as one of the 100 most influential figures of Hong Kong.

The title of his talk is "How machines can be more creative than humans".

Abstract: Despite rapidly increasing generative use of artificial intelligence by artists, it remains a common trope that creativity is the province of humans rather than machines. But what is creativity? Boden analyzed creativity as arising from processes of combination, exploration, and transformation. But without these processes, there can be no intelligence, whether human or artificial. Human intelligence emerged from our linguistic abilities, and our linguistic abilities emerged from our musical abilities. While Sapir and Whorf observed that "language structures thought", it is also true that "language structures creativity". Our artificial creativity work has been demonstrating how the same fundamental building blocks underlying language interpretation and translation are also Boden's fundamental building blocks of creativity. The same machine learning systems we pioneered for automatic translation thus also learn creative improvisation for hip hop, flamenco, and blues. So is the idea that creativity will remain the province of humans just a comforting myth we soothe ourselves with in the face of the impending robotic disruption?
 

Prof. Huamin Qu

Huamin Qu is a full professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST) and the coordinator of the Human-Computer Interaction (HCI) group in the CSE department. His main research interests are in data visualization and human-computer interaction, with a focus on urban informatics, social network analysis, e-learning, text visualization, and explainable artificial intelligence. His research has been recognized by many awards, including eight best paper/honorable mention awards, an IBM Faculty Award, a Higher Education Scientific and Technological Progress Award (Second Class) from the Ministry of Education of China, an HKICT Best Innovation (Innovative Technology) Silver Award from the Hong Kong Institution of Engineers, and a Distinguished Collaborator Award from Huawei Noah's Ark Lab. He is also an instructor of Design Thinking, a joint course between HKUST and the China Academy of Art (CAA), which has run for six years; the course projects have been exhibited every year and covered by the media.

The title of his talk is "Data, Visual Storytelling, and Explainable Artificial Intelligence".

Abstract: We live in the era of big data. Data is everywhere, and decision-making is becoming more and more data-driven. With tremendous amounts of data, massive computational power, and advances in machine learning algorithms, especially deep learning, AI has become very powerful and makes decisions for us every day. However, because the data can be very complex and the algorithms so advanced, AI decision-making is often a black box, and people may hardly understand why AI makes certain decisions. This becomes a critical issue, especially in healthcare, security, and finance. Explainable Artificial Intelligence (XAI) tries to make AI systems and their actions more understandable and transparent to humans. Data visualization, which turns data into visual forms, plays an important role in XAI. In this talk, I will briefly introduce data visualization and then use examples to illustrate how data visualization can visually tell stories in complex data and contribute to XAI.
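As one small, self-contained example of the kind of explanatory picture XAI relies on (an illustration only, not one of the systems discussed in the talk; the dataset and model here are stand-ins), the sketch below renders permutation feature importance for a black-box classifier as a bar chart that a human can read at a glance.

```python
# Minimal sketch: visualize which inputs drive a black-box model's decisions
# by plotting permutation feature importance. Illustrative only.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()                              # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

order = result.importances_mean.argsort()[-10:]          # ten most influential features
plt.barh([data.feature_names[i] for i in order], result.importances_mean[order])
plt.xlabel("Mean drop in accuracy when feature is shuffled")
plt.title("Which inputs does the model rely on?")
plt.tight_layout()
plt.show()
```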