I can’t believe that we are nearly at the end of this fascinating journey. Ten weeks have gone by so fast and I have learnt so much that it will be good to step back and process all the information. In the last few sessions, after concentrating on the structure of information and metadata, we have moved on to how we can measure and analyse the data, and from there we have taken a leap into the world of AI.
Measuring, exploring and analysing data is about making the data work for us. By measuring the impact of an article or piece of research, whether by the traditional method of counting citations or by newer web-based techniques such as altmetrics, we can find out what impact it has had and what people are saying about it. The newer techniques also measure impact across social media, including blogs and Twitter, and can suggest new ways to share information by showing how other people share it and which formats and forums they use. As with any form of measurement, we need to be aware of the pitfalls of not looking critically at what is produced – but it is certainly a very useful starting point.
We then dug deeper, into the tools available not just for measuring but for exploring and analysing: tools for extracting meaning, such as coding in Python to link things together or analyse different data sets, so that you can use the information and find out what you need to know even if the data is not set up to tell you! More playful tools such as word clouds give a visual interpretation of the information – easy to grasp and digest, and capable of making a big impact even when the information is limited.
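Under the hood a word cloud is really just a word-frequency count rendered visually. A minimal sketch of the counting step in Python (the sample sentence and stop-word list here are my own illustration, not material from the course):

```python
from collections import Counter
import re

def word_frequencies(text, stopwords=None):
    """Count how often each word appears, ignoring case and stop words."""
    stopwords = stopwords or set()
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

# Hypothetical sample text for illustration
text = "Metadata describes data; good metadata makes data findable and usable."
freqs = word_frequencies(text, stopwords={"and", "makes"})
print(freqs.most_common(3))  # 'metadata' and 'data' dominate, as a word cloud would show
```

A word-cloud library then simply scales each word's font size by its frequency, which is why even limited information can make a striking picture.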
The most interesting topic, possibly because it is the headline-grabbing one, is Artificial Intelligence. Whilst ethical awareness should be present in every aspect we have looked at, it is with AI that it seems most necessary and urgent. Fears range from Terminator-style scenarios fed by popular culture to real worries about possible job losses. A recent article in the Daily Telegraph (‘UK ‘not ready for the next industrial revolution’ and rise of the robots’ by Lauren Davidson, 28 Nov 2016) quotes a Deloitte report which states that 35% of UK jobs are at high risk of automation in the next 10 to 20 years.
There are fun things too. The music industry is one keen adopter. Streaming services such as Spotify already use data analysis to make recommendations. Quantone, a London-based start-up, is using the IBM Watson engine to improve its recommendations by analysing huge amounts of data, including tweets, blogs and online reviews (see the FT.com article ‘Rise of the robot music industry’ by Nic Fildes, 2 Dec 2016). There is even an AI-generated Xmas song this year! As reported in the Guardian on 29 November (‘It’s no Christmas No. 1 but AI-generated song brings festive cheer to researchers’ by Ian Sample), the Neural Karaoke project at the University of Toronto fed a Christmas picture into a computer program, which generated Christmassy lyrics and music to go with it.
But for all the fun and usefulness of music recommendations, bots managing your Twitter accounts, and handy helpers like Siri and Amazon Echo, there are sinister undertones. Another FT article (‘Algorithmic discrimination’ by Izabella Kaminska, 29 Nov 2016) argues that discrimination is the single biggest problem facing the artificial intelligence field. An algorithm cannot judge exceptions and does not have the ability to ‘respect a human’s capacity to change, better himself or hold contradictory view points at the same time’. The danger is further highlighted by reports that researchers at Shanghai Jiao Tong University have created a machine that claims to identify criminals by assessing their eyes, nose and mouth, lending support to the dangerous view that criminals have certain facial features (‘Minority Report-style AI learns to predict if people are criminals from their facial features’ by Cara McGoogan, 24 Nov 2016). This all sounds worrying, and in parts of the world where surveillance is everywhere and freedoms are restricted it is a real cause for concern for human rights watchers. But AI isn’t creating these problems; it may be feeding and amplifying them, but it is still human beings who are creating the tools.
The fears aren’t new, and neither, really, are the problems. They come down to the questions of what we can do, how we can do it and whether we should do it – and also what we allow ourselves to do and how we treat our fellow human beings. AI is just another tool, albeit an exciting one. An understanding of ethics and ethical thinking, and the willingness to discuss ethical problems openly, needs to be the starting point of everything. It needs to be part of our education: we should instil ethical thinking and discussion at school age, before over-enthusiasm, economics and, potentially, greed kick in.
Sources: Financial Times, 23 November 2016; Telegraph.co.uk, 24 November 2016; The Observer, 27 November 2016; Daily Telegraph, 28 November 2016; The Guardian, 29 November 2016; FT.com, 29 November 2016; FT.com, 2 December 2016; The Guardian, 2 December 2016.