IKO

Learn First, Label Later: How Deep Learning Works

1/30/2015

0 Comments

 
By Patrick Lambe
To follow up on yesterday's post on deep learning, here is a fantastic feature article on Geoff Hinton, one of the pioneers of the field. Deep learning works by layering neural nets to process at successive levels of detail, with feedback loops to reinforce successful patterns. Hinton's breakthrough in machine learning was to let the neural networks learn patterns first without labelling each layer; only when the machine had learnt enough would labels be applied to the layers to assign meaning for human consumption. This is exactly how good taxonomy development should begin: gather the data first, figure out meaningful patterns and clusters, and assign concept labels later. It is usually disastrous to go in with a preconceived label-set and try to force whatever you have into it. As for early practical applications? Google can now watch YouTube videos and learn, by itself, how to recognise cats. The lesson? Learn first, categorise later.

"Nobody is saying that this system has exceeded the human ability to classify photos; indeed, if a human hired to write captions performed at the level of this neural net, the newbie wouldn’t last until lunchtime. But it did shockingly well for a machine. Some of the dead-on hits included “a group of young people playing a game of frisbee,” “a person riding a motorcycle on a dirt road,” and “a herd of elephants walking across a dry grass field.” Considering that the system “learned” on its own concepts like a Frisbee, road, and herd of elephants, that’s pretty impressive." Hat tip to Andrew McAfee for this.
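The learn-first, label-later idea can be sketched in a few lines of code. This is a toy illustration, not Hinton's method: a minimal k-means clusterer over invented one-dimensional data groups unlabelled points first, and human-readable labels are attached only after the clusters have emerged.

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Cluster first: group raw points with no labels attached."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabelled observations with two natural groupings.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.3, 8.8, 9.1]
centroids, clusters = kmeans_1d(data, k=2)

# Label later: names are assigned only after the structure has emerged.
labels = {min(centroids): "small", max(centroids): "large"}
for c, members in zip(centroids, clusters):
    print(labels[c], sorted(members))
```

The labelling step at the end is deliberately separate from the clustering step: the groupings exist, and are useful, before anyone names them.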

Deep Learning: Knowledge Organisation Beyond Textual Content

1/29/2015

0 Comments

 
By Patrick Lambe
Those of us who come into knowledge organisation (and knowledge management, for that matter) via the taxonomy route are used to thinking of it as a text-dominated field. However, there are exciting leapfrog developments in the artificial intelligence / deep learning space as well. Image recognition (and matching), speech recognition, statistical methods of language translation (check out the new version of the Google Translate mobile app), and advances in the hardware required for this kind of computing and storage power all promise rapid shifts in our ability to connect audio, video and image content with text-based content. One of our sessions at IKO will look at ways of building rich knowledgebases out of high-resolution image details. Add facial recognition and auto-classification to that mix and you get instant propagation of insight into large historical image collections, for example. Here's a nice short piece on some of 2014's breakthroughs and promises in deep learning from Tom Simonite.

IKO Advisory Board established

1/21/2015

0 Comments

 
By Patrick Lambe
We are building a distinguished Advisory Board to help us put together the core programme and evaluate session proposals. Right now, we have advisors from the UK (Angus Roberts, Stella Dextre Clarke and Martin White), Australia (Janine Douglas), Hungary (Agnes Molnar), the USA (Marjorie Hlava and Douglas Oard), Singapore (Christopher Khoo and Leong Mun Kew), Hong Kong (Eric Tsui) and South Korea (Sam Oh). More to come! Check them out here.


Artificial Intelligence and Knowledge Organisation

1/17/2015

1 Comment

 
By Patrick Lambe
From Kevin Kelly back in October, a nice nuanced piece on the three converging factors "that have finally unleashed AI on the world". The three factors are: (1) cheaper parallel computation; (2) big data, which gives algorithms thousands or millions of examples to see patterns in and learn from; and (3) more efficient architectures for networking processors in support of "deep" (and fast) learning. It is (2) in particular where knowledge organisation methodologies contribute. Big data can only yield meaningful patterns across many diverse data sources if concepts expressed through multiple vocabularies and in diverse structures can be resolved to the same things. Kelly also points out the dependency on very large networks of users. Artificial intelligence learns effectively only at large scale, not small scale (which is very different from how humans learn, by the way). All the more reason, then, to resolve the problem of diversity of description and structure.
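To make the resolution step concrete, here is a minimal sketch (the synonym ring and records are invented, not from any real dataset) of mapping variant labels from different sources onto canonical concepts before counting patterns:

```python
from collections import Counter

# Hypothetical synonym ring: variant labels from different sources
# all resolve to one canonical concept.
SYNONYMS = {
    "auto": "car", "automobile": "car", "motor car": "car",
    "physician": "doctor", "md": "doctor",
}

def canonical(term):
    """Resolve a variant label to its canonical concept."""
    t = term.strip().lower()
    return SYNONYMS.get(t, t)

# Five records from imagined sources using different vocabularies.
records = ["Automobile", "auto", "car", "Physician", "MD"]
counts = Counter(canonical(r) for r in records)
print(counts)  # the pattern only emerges once labels are resolved
```

Without the `canonical` step, the same five records would count as five unrelated one-off labels and no pattern would be visible at all.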

Video: The Inspiration Behind the IKO Conference 2015

1/17/2015

0 Comments

 
By Patrick Lambe
In this video I explain how we were inspired to put together the IKO Conference.

Why you should attend the IKO Conference June 2015 from Patrick Lambe on Vimeo.


Brief History of Information Architecture

1/14/2015

5 Comments

 
By Patrick Lambe
Here's a nice 2011 piece from Andrea Resmini and Luca Rosati on the three strands of tradition underlying Information Architecture: information design, information systems, and information science. This helps to explain why "information architects" can be talking at cross-purposes, ostensibly about the same thing but speaking different languages. The authors chronicle the current shift towards pervasive IA, i.e. designing for information everywhere, across multiple devices and contexts: ".. we live in a world where relationships with people, places, objects, and companies are shaped by semantics and not only by physical proximity.."

The Role of Taxonomy Work in Extracting Insight from Big Data

1/13/2015

0 Comments

 
By Patrick Lambe
In my last post I claimed that taxonomy work is important for resolving vocabulary variations to common concepts, in order to discern patterns across multiple data sources. This article from the New York Times, "Computing Crime and Punishment", is a beautiful example of how a thesaurus can be used to recognise similar concepts in unstructured data sources over very long periods of time: in this case, 121 million words describing 197,000 trials over 239 years. Of course vocabularies changed, but Roget's Thesaurus turned out to be an elegant instrument layered on top of the data.
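As a toy sketch of the technique (the thesaurus entries and trial words below are invented, not drawn from the actual trial records), period-specific vocabulary can be mapped to shared concept heads so that counts become comparable across centuries:

```python
from collections import defaultdict

# Invented thesaurus heads: period-specific vocabulary resolved to a
# shared concept so counts are comparable across time.
THESAURUS = {
    "larceny": "theft", "purloining": "theft", "shoplifting": "theft",
    "affray": "violence", "assault": "violence", "battery": "violence",
}

# (decade, word) pairs standing in for words mined from trial reports.
mentions = [
    (1780, "larceny"), (1780, "purloining"), (1780, "affray"),
    (1900, "shoplifting"), (1900, "larceny"), (1900, "assault"),
]

by_decade = defaultdict(lambda: defaultdict(int))
for decade, word in mentions:
    by_decade[decade][THESAURUS[word]] += 1

for decade in sorted(by_decade):
    print(decade, dict(by_decade[decade]))
# 1780 {'theft': 2, 'violence': 1}
# 1900 {'theft': 2, 'violence': 1}
```

Counting raw words would show "purloining" vanishing and "shoplifting" appearing; counting thesaurus heads shows the underlying concept persisting across the vocabulary shift.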

Data Analytics, Expert Intuition and the Role of Taxonomy

1/13/2015

2 Comments

 
By Patrick Lambe
Really sharp post from Andrew McAfee today on when data analytics trumps expert intuition. Data systems can see patterns over long periods, where human learning from feedback favours short cycles. The critical dependency, to my mind, is that in big data drawing on multiple data sources, you need a reliable means of resolving different labels for relevant entities and events to the same concepts. Without that, the variability of vocabulary will stymie your attempts to see patterns across the multiple sources (and time periods). In case you think this is a minor issue, check out the 2001 deportation of Australian citizen Vivian Alvarez "as an unlawful non-citizen", all because the various immigration department databases could not resolve different variations of her name.
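A minimal sketch of the name-resolution problem (the name strings are illustrative only, and the matching is deliberately crude: token normalisation plus Python's standard difflib similarity):

```python
from difflib import SequenceMatcher

def normalise(name):
    """Lower-case, strip punctuation, and sort name tokens so that
    'Surname, Given' and 'Given Surname' reduce to the same string."""
    cleaned = name.lower().replace(",", " ").replace(".", " ")
    return " ".join(sorted(cleaned.split()))

def likely_same_person(a, b, threshold=0.8):
    """Crude string-similarity test; a real record-linkage system would
    add phonetic and probabilistic matching on top of this."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(likely_same_person("Vivian Alvarez", "ALVAREZ, Vivian"))   # True
print(likely_same_person("Vivian Alvarez", "Vivian  Alvarez."))  # True
print(likely_same_person("Vivian Alvarez", "John Smith"))        # False
```

Even this toy version shows the point: databases that compare raw strings will treat "ALVAREZ, Vivian" and "Vivian Alvarez" as different people, while a small amount of normalisation resolves them to the same entity.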

News

We are using this blog to keep you updated on conference planning and organisation, and to link you to informative discussion materials.


Copyright © 2019 Conference Organisers. All rights reserved.