Barbara Plank is a tenured Assistant Professor in Natural Language Processing at the University of Groningen, The Netherlands.
Her research focuses on cross-domain and cross-language NLP. She is interested in robust language technology, learning under sample selection bias (domain adaptation, transfer learning), annotation bias (embracing annotator disagreements in learning), and generally, semi-supervised and weakly-supervised machine learning for a variety of NLP tasks and applications, including syntactic processing, opinion mining, information and relation extraction and personality prediction.
Natural Language Processing: Challenges and Next Frontiers
Despite many advances of Natural Language Processing (NLP) in recent years, largely due to the advent of deep learning approaches, there are still many challenges ahead in building successful NLP models. In this talk I will outline what makes NLP so challenging. Besides ambiguity, one major challenge is variability. In NLP, we typically deal with data from a variety of sources, like data from different domains, languages and media, while assuming that our models work well on a range of tasks, from classification to structured prediction. Data variability is an issue that affects all NLP models. I will then delineate one possible way to go about it, by combining recent success in deep multi-task learning with fortuitous data sources, which allows learning from distinct views and distinct sources. This will be one step towards one of the next frontiers: learning with limited (or no) annotated resources, for a variety of NLP tasks.
Verónica Valeros is a hacker, researcher and intelligence analyst from Argentina. Her research has a strong focus on helping people and involves different areas, from wireless and Bluetooth privacy issues to malware, botnets and intrusion analysis. She has presented her research at international conferences such as BlackHat, EkoParty, Botconf and others. She is the co-founder of the MatesLab hackerspace based in Argentina. Since 2013 she has been part of the Cognitive Threat Analytics team (Cisco Systems), where she specialises in malware network traffic analysis and threat categorisation at large scale. She is also part of the core team of Security Without Borders, a collective of cybersecurity professionals who volunteer to assist people at risk and NGOs on cybersecurity issues.
The Future of Cybersecurity Needs You, Here is Why.
In the last decade we have observed a shift in cybersecurity. Cyber threats started to impact our daily lives more and more, even to the point of threatening our physical safety. We learnt that attackers are well aware of our weaknesses and limitations, that they take advantage of this knowledge, and that to be successful they need only be a little better than us. As defenders, we struggle. We have perfected existing solutions to protect our environments with some degree of success, but still today we fall behind adversaries more often than not. We got really good at collecting data, to the point of no longer being able to use it to its full extent. This led us to ask ourselves: Is this it? Is this all we can do? The future of cybersecurity needs you; join me in this talk to find out why.
Toby Walsh is one of the world's leading experts in artificial intelligence (AI). He was named by The Australian newspaper as a "rock star" of the digital revolution, and included in the inaugural Knowledge Nation 100, the list of the 100 most important digital innovators in Australia. Professor Walsh's research focuses on how computers can interact with humans to optimise decision-making for the common good. He is also a passionate advocate for limits to ensure AI is used to improve, not hurt, our lives. In 2015, Professor Walsh was behind an open letter calling for a ban on autonomous weapons or 'killer robots' that was signed by more than 20,000 AI researchers and high-profile scientists, entrepreneurs and intellectuals, including Stephen Hawking, Noam Chomsky, Apple co-founder Steve Wozniak, and Tesla founder Elon Musk. He was subsequently invited by Human Rights Watch to speak at the United Nations in both New York and Geneva. Professor Walsh is a Fellow of the Australian Academy of Science and a winner of the Humboldt Award. His book, "Machines that Think: The Past, Present and Future of AI", will be published in early 2017 by Black Inc. His Twitter account, @TobyWalsh, was voted one of the top ten to keep abreast of developments in AI. His blog, thefutureofai.blogspot.com, attracts tens of thousands of readers every month.
Many Fears About AI Are Wrong
Should you be worried about progress in Artificial Intelligence? Will Artificial Intelligence destroy jobs? Should we fear killer robots? Does Artificial Intelligence threaten our very existence? Artificial Intelligence (AI) is in the zeitgeist. Billions of dollars are being poured into the field, and spectacular advances are being announced regularly. Not surprisingly, many people are starting to worry where this will all end. The Chief Economist of the Bank of England predicted that Artificial Intelligence will destroy 50% of existing jobs. Thousands of Artificial Intelligence researchers signed an Open Letter predicting that Artificial Intelligence could transform warfare and lead to an arms race of "killer robots". And Stephen Hawking, Elon Musk and others have predicted that Artificial Intelligence could end humanity itself. What should you make of all these predictions? Should you worry? Are you worrying about the right things? And what should we do now to ensure a safe and prosperous future for all?
Advances in AI are happening at a tremendous pace, and Machine Learning systems in particular are being hastily deployed in settings as wide-ranging as social media, search engines, advertising, military use, and legal systems. At first sight results are often promising, but more and more "outliers" are becoming visible as well. Systemic biases in culture, language, or business practices often become intensified: from the Tay chatbot turning racist, to both Google and Flickr classifying images of African-Americans as "monkeys", to discrimination and sexism encoded in language models used for everything from translation to court sentencing decisions. Luckily, initiatives that address some of these concerns are emerging, through more interpretable models, new privacy regulations, novel privacy-preserving learning algorithms, and researchers and engineers standing up for more just and transparent machine learning models and methods.
The panel aims to debate these themes with the speakers and the audience, going beyond the usual "AI hype" to discuss both the amazing progress and the setbacks in making societies better, more humane, and ready for the future!
Speaking at the panel will be:
- Roelof Pieters, Panel Host, Co-founder at creative.ai
- Marek Rosa, CEO/CTO at GoodAI
- Françoise Provencher, Data Team Lead at Shopify
- Hendrik Heuer, Researcher at Institute for Information Management (ifib) at the University of Bremen
- Andreas Dewes, Founder at 7scientists
- Katharine Jarmul, Founder at kjamistan