Putting Evidence Based Practice to work        
At Internet Librarian this week - these are my notes from the first half of the Evidence Based Practice session - really enjoyed this one!

Putting evidence based practice to work

Amanda Hollister

Frank Cervone



Usability testing - Northwestern U

Problem of website design

Not a large number of people are trained in Human Computer Interaction

Have to learn in order to understand

People tend to feel like sites are "done" instead of constantly adjusting and reevaluating

Constantly evolving websites - to meet customer needs

Andrew Booth - U of Sheffield

quote from article.

Hierarchy in how evidence is looked at


Data provides evidence, not anecdotal stories or common sense
Evaluate constantly.

How different from what happens now? People are making decisions based on beliefs, not data.

Evaluation after the fact is too late.

Comes from medicine - lots of writing from med school libraries

Study, then compare results.

Evidence based practice process reminds me of the info life cycle - never ends - not daily, but consistently

SPICE -

Setting - where is it being used? What context?

Population - who are the users?

Intervention - what is being done to/for them?

Comparison - what are the alternatives?

Evaluation - what was the result?
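As a rough illustration (my own, not from the session) of capturing a SPICE-framed question as structured data - the field names and example values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SpiceQuestion:
    """One evidence-based-practice question framed with SPICE."""
    setting: str       # where is it being used? what context?
    population: str    # who are the users?
    intervention: str  # what is being done to/for them?
    comparison: str    # what are the alternatives?
    evaluation: str    # what was the result / how is it measured?

# Hypothetical example: evaluating a library website redesign
q = SpiceQuestion(
    setting="academic library website",
    population="undergraduate students",
    intervention="redesigned title-search page",
    comparison="current title-search page",
    evaluation="rate of failed searches before vs. after",
)
print(q)
```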

Needs to be much more rigorous

Look at the current evidence - focus groups & surveys are lower-level evidence. How could we test at a higher level?

Example: assume that title search is correct - but also look at the number and details of failed searches
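A minimal sketch of the kind of failed-search analysis this implies, assuming a hypothetical CSV search log with query and result_count columns (the talk did not specify a format):

```python
import csv
from collections import Counter

# Hypothetical search log: one row per search, with a column recording
# how many results came back; zero results counts as a "failed" search.
failed = Counter()
total = 0
with open("search_log.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: query, result_count
        total += 1
        if int(row["result_count"]) == 0:
            failed[row["query"]] += 1

print(f"{sum(failed.values())} failed searches out of {total}")
print("most common failed queries:", failed.most_common(10))
```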

Required reading list -

Required training in usability

participate in whole process

These three things teach people the whole process.

Then, go back and compare new stats to old stats - have the changes worked? These stats are hard data: quantitative, not qualitative.
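For example, a before/after comparison of task-success rates could use a simple two-proportion z-test (a minimal sketch with made-up numbers; the session did not prescribe a particular test):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: 62/100 successful title searches before the redesign,
# 78/100 after.
z, p = two_proportion_z(62, 100, 78, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```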

Also helps with justifying new tools or processes.

Let's prove that we are making things better - get better comments and feedback.

Still problems: jargon, and whether users understand why they would use the library website.

Anecdotal evidence can give idea of what might need to be reviewed, but is not necessarily representative.


Ok, laptop dying now - switching to paper for notes


Will share reading list - Frank Cervone, assistant university librarian for information technology.
          Job Vacancy: UX Researcher        
  • Bachelor degree from a reputable university; Engineering, Information Technology, Human Computer Interaction, Psychology or behavioral sciences graduates are preferred
  • 2-3 years of proven working experience as a UX Researcher is a must
  • Thorough understanding of user experience
  • Able to convey message ...

          A Starter Kit for Instructional Designers | EdSurge - Postsecondary Learning        
Follow on Twitter as @amyahearn11
"A 2016 report funded by the Gates Foundation found that in the U.S. alone, there are 13,000 instructional designers. Yet, when I graduated from college in 2008, I didn’t know this field existed. Surely a lot has changed!" inform Amy Ahearn, Online Learning Manager for +Acumen and a graduate of Stanford’s Learning, Design and Technology Masters program.

Photo: EdSurge

Instructional design is experiencing a renaissance. As online course platforms proliferate, institutions of all shapes and sizes realize that they’ll need to translate content into digital forms. Designing online learning experiences is essential to training employees, mobilizing customers, serving students, building marketing channels, and sustaining business models.

The field has deep roots in distance education, human computer interaction, and visual design. But I’ve come to believe that contemporary instructional design sits at the intersection of three core disciplines: learning science, human-centered design, and digital marketing. It requires a deep respect for the pedagogical practices that teachers have honed for decades, balanced with fluency in today’s digital tools.

Most people with “instructional design” in their job title are involved in converting “traditional” written curriculum or in-person teaching into an online course. But they can also be creating learning apps, museum exhibits, or the latest educational toy. My classmates from Stanford’s Learning Design and Technology master’s program have gone on to design for big brands like Airbnb and Google as well as edtech upstarts including the African Leadership University, General Assembly, Osmo and Udacity.

Over the last few years, we’ve traded resources, articles and work samples as we try to build our own starter kit for this fast-moving field. Below are some of the lessons and resources that I wish I knew of when I first went on the job market—a combination of the academic texts you read in school along with practical tools that have been essential to practicing instructional design in the real world. This is not a complete or evergreen list, but hopefully it’s a helpful start. 
Read more...

Source: EdSurge

          Tangible User Interface (TUI)        
A Tangible User Interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. A TUI is a technology in which the user interacts with a digital system by manipulating physical objects that are linked to the system and directly represent its qualities. An early name for the TUI was the Graspable User Interface, which is no longer used.

The idea of the TUI is to have a direct relationship between the system and the way you control it through physical manipulation: an underlying meaning, or direct connection, links the physical manipulations to the behaviors they trigger in the system.

People have developed sophisticated skills for sensing and manipulating their physical environments. However, most of these skills go unused in interaction with today's digital world. Interaction with digital information is currently largely confined to the Graphical User Interface (GUI). With the commercial success of the Apple Macintosh and Microsoft Windows, the GUI has become today's standard paradigm for Human Computer Interaction (HCI). A GUI represents information (bits) as pixels on a bit-mapped screen.

These graphical representations can be manipulated with generic remote controllers such as the mouse and keyboard. By decoupling representation (pixels) from control (input devices) in this way, the GUI provides the malleability to emulate a variety of graphical media. However, when we interact with the GUI world, we cannot take advantage of our dexterity or apply our skills for manipulating physical objects, such as building with blocks or shaping models out of clay.

CHARACTERISTICS

1. Physical representations are computationally coupled to underlying digital information.

2. Physical representations embody mechanisms for interactive control.

3. Physical representations are perceptually coupled to actively mediated digital representations.

4. The visible physical state of the objects embodies key aspects of the digital state of the system.


Applications of the Tangible User Interface

1. Mouse
One of the simplest applications of the TUI is the mouse. Dragging the mouse across a flat surface, with the corresponding movement of a pointer on the screen, is a way of interacting with a digital system through the manipulation of a physical object. The movements made with the device have a clear relationship to the behavior they trigger in the system; for example, the pointer moves up when you move the mouse forward. This makes the input device very easy to master with just a little hand-eye coordination.
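A minimal sketch of that direct physical-to-digital mapping (my own illustration, not from the original post): the pointer position simply accumulates the mouse's relative movements, clamped to a hypothetical screen.

```python
# Toy relative pointer mapping: mouse deltas accumulate into a pointer
# position that is clamped to the bounds of a hypothetical 1920x1080 screen.
WIDTH, HEIGHT = 1920, 1080
x, y = WIDTH // 2, HEIGHT // 2  # start in the middle of the screen

def move_pointer(dx, dy):
    """Apply one mouse movement: dx to the right, dy forward (away from you)."""
    global x, y
    x = min(max(x + dx, 0), WIDTH - 1)
    y = min(max(y - dy, 0), HEIGHT - 1)  # forward motion moves the pointer up
    return x, y

print(move_pointer(10, 25))  # mouse moved right and forward -> pointer right and up
```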

2. Siftables
Siftables are small devices from an early MIT Media Lab project, shaped like little bricks, each with its own interface. Siftables are used in groups and can communicate and interact with one another depending on their positions. An individual Siftable knows when other Siftables are near it and reacts according to the user's play.

3. Reactable
The Reactable is a musical instrument designed with state-of-the-art technology to allow musicians (and others) to experiment with sound and create unique music. The instrument is based on a round, translucent, illuminated table on which a set of pucks can be placed. By placing the pucks on the surface (or taking them away), rotating them, and connecting them to one another, performers can combine different elements such as synthesizers, effects, sample loops, or control elements to create unique and flexible compositions.
Once a puck is placed on the surface, it is illuminated and begins to interact with the other pucks according to their positions and proximity. These interactions are shown on the table surface, which acts as a screen, giving instant feedback about what is happening in the Reactable and turning the music into something visible and tangible.

4. Microsoft Surface
Microsoft Surface is a technology with a multi-touch screen that allows multiple users to interact with the built-in system at the same time. Notably, it reacts not only to touch: the technology can also recognize objects placed on top of it, managing the behaviors associated with those objects and how we can manipulate them.

5. Marble Answering Machine
Another example of a Tangible User Interface is the Marble Answering Machine by Durrell Bishop (1992). A marble represents a message left on the answering machine. Dropping a marble into a dish plays back the associated message.

6. Sistem Topobo
The blocks in Topobo are like LEGO blocks that can be snapped together, but they can also move by themselves using motorized components. A person can push, pull, and twist the blocks, and the blocks can memorize these movements and replay them. Another implementation lets the user sketch a picture on the system's table with a real, physical pen. Using hand gestures, the user can clone the image and stretch it along the X and Y axes, just as in a paint program. The system integrates a video camera with a gesture-recognition system.
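As a toy sketch of the record-and-replay behavior described above (my own illustration; real Topobo blocks record and replay servo motions in hardware):

```python
import time

class KineticBlock:
    """Toy model of a block that memorizes a motion and plays it back."""

    def __init__(self):
        self.samples = []  # list of (timestamp, angle) pairs

    def record(self, angle):
        """Store one twist of the block as a timestamped angle sample."""
        self.samples.append((time.time(), angle))

    def replay(self):
        """Repeat the recorded motion with the original timing."""
        if not self.samples:
            return
        prev_t = self.samples[0][0]
        for t, angle in self.samples:
            time.sleep(t - prev_t)  # reproduce the gap between samples
            prev_t = t
            print(f"set servo to {angle} degrees")

block = KineticBlock()
for angle in (0, 30, 60, 30, 0):  # a person twists the block back and forth
    block.record(angle)
block.replay()  # the block repeats the motion on its own
```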


source:
http://monstajam.blogspot.com/


          Web of Science        

Covering 12,000+ scholarly journals, plus selected books and published conference proceedings in all academic disciplines, the Web of Science Core Collection combines seven citation indexes which permit searching for articles that cite a known author or work, as well as searching by subject, author, journal, and author address:

  • Science Citation Index Expanded (SCI-EXPANDED)
  • Social Sciences Citation Index (SSCI)
  • Arts & Humanities Citation Index (A&HCI)
  • Conference Proceedings Citation Index - Science (CPCI-S)
  • Conference Proceedings Citation Index - Social Sciences & Humanities (CPCI-SSH)
  • Book Citation Index - Science (BKCI-S)
  • Book Citation Index - Social Sciences & Humanities (BKCI-SSH)
Brief Description: 
Core Collection of multidisciplinary indexes which permit searching for articles that cite a known author or work.
Access: 
Subscription
Mobile Version: 
Mobile friendly interface available.
Icons: 
Authorized UM users (+ guests in UM Libraries)
New Resource Indicator: 
Not New
Coverage: 
1900 - (varies by index: 1900 - for Science and Social Sciences; 1975 - for Arts & Humanities; 1990 - for Conference Proceedings; 2005 - for Books)
Type: 
Article Index
Vendor: 
Thomson Reuters
Platform / Series: 
Web of Science
Creator: 
Thomson Reuters
Other Titles: 
Web of Science Core Collection, WoSCC, WSCC
ISI Web of Science, ISI Citation Indexes, WebOfScience, WoS, Web of Sci.
SCI, SCI-EXPANDED, SSCI, AHCI, A&HCI, CPCI, CPCI-S, CPCI-SSH, BCI, BKCI, BKCI-S, BKCI-SSH
Science Expanded, Social Sciences, Arts & Humanities, Conference Proceedings, and Book Citation Index

          Interacción 2017        
Interacción 2017 is the 18th edition of the International Conference promoted by the Spanish Human Computer Interaction Association (Spanish name: Asociación para la Interacción Persona-Ordenador, AIPO), to be held in September 2017 in Cancún, México. Its main objective is to promote and disseminate the …
          7 Online courses for learning Usability, User Experience and Interaction design        
Learning is an ever-going process. It doesn't matter whether you are a professional or a student; to keep your skills and knowledge up to date you need to learn on a regular basis. The field of design (usability, UX design and interaction design) is spreading across the world at a rapid pace. Companies have started to understand the importance of usability and user experience. There is a lot of good material available online, such as blogs, articles, books and courses. If you are enthusiastic about usability and user experience, these courses will be very helpful for you to start.

Online courses

  • Remarks: Good for starting and understanding basics
  • Remarks: Special lectures from Don Norman on ‘Usability’.
  • Remarks: There are many useful courses in this portal.
  • Remarks: Basics of User Experience
  • Remarks: Advanced User Experience and Gamification.

You can also find an amazing listing of graphic design courses at OnlineCourseReview.

Author: Abhishek Jain (Twitter @UxdAbhi)

          The Code is not the Text (unless it is the Text)        
by
John Cayley
2002-09-10

Digital utopianism is still with us. It is with us despite having been tempered by network logistics and an all-too-reasonable demand for ‘content.’ Admittedly, New Media has aged. It has acquired a history or at least some genuine engagement with the reality principle, now that the Net is accepted as a material and cultural given of the developed world, now that the dot.coms have crashed, now that unsolicited marketing email and commercialism dominate network traffic. Nonetheless, artistic practice in digital media is still often driven by youthful, escapist, utopian enthusiasms. Net Art as such pretends to leapfrog this naivety through the wholesale importation of informed, ironic, postmodern conceptualism, offering us the shock of the virtual-visceral banal at every possible juncture. Other, more traditionally delineated arts - literature, music, photography, fine art, architecture, graphics, etc. - struggle to cope with the reconfiguration of their media, or with a migration to complex new media which are suddenly shared, suddenly intercommunicable with those of artistic practices previously considered to be distinct. One way of coping is escape.

I write as a literary artist, my ever-provisional, traditionally delineated subject position in this context: poet. When asked, in social contexts, I don’t really know what to call myself, although - when I manage to remember the phrase - ‘literal artist’ seems about right. I write as a practitioner, but I am interested in the theory underlying my practice because I recognise that my artistic media are being reconfigured to a degree which may well be catastrophic, or, at least, allow me and my fellow writers to recall that these media - textual media - have always been subject to reconfiguration. Serious formalism in literature was never just a matter of rhetorical flourish; it was inevitably, ineluctably, concerned with the materiality of language, and therefore with the affect and significance of language as such.

If you persist, you are about to read a theoretically-inflected critique of what some people call ‘codework.’ Potentially codework is a term for literature which uses, addresses, and incorporates code: as underlying language-animating or language-generating programming, as a special type of language in itself, or as an intrinsic part of the new surface language or ‘interface text,’ as I call it, of writing in networked and programmable media. Why do many of the current instantiations of codework, along with some of the theoretical writing that underpins this practice, require critique at this time? What is at stake? I have to try and briefly answer this question at the outset, because what follows is largely critical, something I wrote and felt I’d completed in response to questions which are only now being formulated; it is part of an emergent debate about the role of code in literal art. There will be much more to write, at other times and places, which is less critical and more generative, precisely because of what is at stake.

In utopia, because you are nowhere you are everywhere at once. Transparency and translatability are key values of digital utopianism. We should perhaps remain sceptical not only concerning the no-place itself but also concerning its values. Are they indeed values? In much current codework language is (presented as) code and code is (presented as) language. The utopia of codework recognises that the symbols of the surface language (what you read from your screen) are the ‘same’ (and are all ultimately reducible to digital encoding) as the symbols of the programming languages which store, manipulate, and display the text you read. The mutual transparency and translatability of code and language becomes a utopian value, and when it is recast as the postmodern virtual-visceral banal - as the mutual infection-contamination of language by code and code by language - it becomes a subversive (i.e. potentially progressive) utopian value. Basically, my argument in what follows is that, in much existing codework, this is as far as we get. A simple point based on digital transparency and translatability is being made in a context which is already utopian and this more or less exhausts the significance and affect of the work. If, furthermore, your focus remains fixed on the interface text - on what can be read and recorded from the screen as writing - then much critical energy goes into interpreting work with an all-but-exhausted aesthetic program in a fairly traditional and conservative manner. The code is in the text or the text is in the code, and it’s there because it can be, and that’s what we have to say about it.

So what is left outside of this utopia? It is obviously a tactical exaggeration to say that most instances of codework in networked and programmable media are exhausted by the aesthetic I have briefly introduced and caricatured. More accurately, there is a problem with the way code-as-text is appreciated and appropriated within the broader critical ‘language of new media.’ Much work exists that can not or should not be assimilated into the utopia of code-language transparency. I argue that certain reasons why such work is alien to the utopia of transparency are also precisely reasons why it is able to generate significance and affect - because the code is not necessarily transparent or visible in human-readable language; because code has its own structures, vocabularies and syntaxes; because it functions, typically, without being observed, perhaps even as a representative of secret workings, interiority, hidden process; because there are divisions and distinctions between what the code is and does, and what the language of the interface text is and does, and so on. A specialised appreciation for code does not in any way preclude the mutual contamination of code and natural language in the texts that we read on screen, it simply acknowledges that - in their proper places, where they function - code and language require distinct strategies of reading. The necessity to maintain these distinct strategies as such should lead, eventually, to better critical understanding of more complex ways to read and write which are commensurate with the practices of literal art in programmable media.

To conclude these introductory remarks, here is a suggestive and non-exhaustive list of things I believe are at stake, a list of approaches to work which risk being ignored or downgraded if we remain focused on codework as code-language transparency:

- If a codework text, however mutually contaminated, is read primarily as the language displayed on a screen then its address is simplified. It is addressed to a human reader who is implicitly asked to assimilate the code as part of natural language. This reading simplifies the intrinsically complex address of writing in programmable media. At the very least, for example, composed code is addressed to a processor, perhaps also addressed to specific human readers (those who are able to ‘crack’ or ‘hack’ it); while the text on the screen is simultaneously? asynchronously? addressed to human readers generally. Complexities of address should not be bracketed within a would-be creolized language of the new media utopia.

- Address to other, unusual reading processes - the machine itself, or particular human readers who have learned how systems read - implies the need for different persuasive strategies, different strategies for generating significance and affect. I mean that the rhetoric of writing in code must be distinct. Again, appeal to values of hybridity and mutual linguistic contamination (addressed to postmodern humans) threatens to conceal the emergence of new or less familiar rhetorical strategies. In what follows I briefly mention two of these, the tropes of strict logical process and another I identify with compilation in the programmer’s sense. There is a lot of very necessary work to be done here, identifying the unacknowledged tropes and figures of literal art in new media. Perhaps even certain questions concerning the rhetoric of electronic games (when viewed as literal art) could be studied in this context. For example, the trope of ‘playability’ emerges as much from the composition of code as from the ‘writing’ (in the scriptwriter’s sense) during games development.

- Reading codework as code-in-language and language-in-code also risks stunning the resultant literary object, leaving it reduced to simple text-to-be-read, whereas there are real questions of how such work is to be grasped as an object: is it text, process, performance, instrument? If code is treated distinctly, as an aspect of writing with its own structures and effects, then we gain in the potential to articulate more appropriate classes of literal objects, with instrument, for example, forming one class I would prefer, personally, to instantiate and explore.

- A question I do begin to engage in what follows is the materiality of language and how this may be evolving in writing for programmable media. I query N. Katherine Hayles’ position in the code-as-text debate, particularly her readings of the work of certain codework artists along with her invocation of the ‘flickering signifier,’ which I suspect her of using to underpin this codework despite the fact that such work does not necessarily engage with the materiality of a flickering signifier. By the time we get to read code-as-text, in most cases it is presented as, at best, a chain of resolved floating signifiers, with the code elements simply providing a layer of associative complexity or slippage. Hayles’ signifier has far greater potential and this is not always operating in the code-as-text variety of literal art. The flickering signifier cannot simply be seen as something which goes on behind the screen; it emerges when code is allowed, as I say, its proper place and function: when the composed code runs. As it runs, the code is not the text, it is not a set of (non-sequential) links in a chain of signifiers; the code is what makes them flicker, what transforms them from writing as record of static or floating simultaneities into writing as the presentation of atoms of signification which are themselves time-based (they are not what they are without their flickering transformations over time, however fleeting these may be).

- The implicit requirement - at one and the same time - to pay close and particular attention to the role of code in literal art, while, at certain moments of reading, to allow that this distinct role functions in concealment, will have practical as well as theoretical effects on artists’ creative methodology even if only to help them to better understand how and why they are working with code. The emergent materiality of the signifier - flickering, time-based - creates a new relationship between media and content. Programming the signifier itself, as it were, brings transactive mediation to the scene of writing at the very moment of meaning creation. Mediation can no longer be characterised as subsidiary or peripheral; it becomes text rather than paratext. Criticism of code-making in this context becomes even more important and central than, for example, the criticism of instrumentation or interpretation in musical recital. What I say about new media literary objects being reconceived as ‘instruments’ is not meant to imply that they are, in any sense, merely instrumental.

The question of the materiality of the signifier, in particular, is a big one, which I believe Hayles is currently readdressing and which I hope to take up in a sequel to what finally, now follows.

*

The use of networked and programmable systems as both delivery and compositional media for literal and verbal art (and other forms of new media art) has provoked critical engagements which pretend to reveal and examine the various levels of code and encoding which are constituent of programmatological systems. Certain terms in this essay may require explanation. I prefer, despite its awkwardness and length, ‘writing in networked and programmable media’ to any of the current words or phrases such as ‘hypertext, hyperfiction, hyperpoetry,’ etc. or the corresponding ‘cyber-’ terms, although I do generally subscribe to Espen Aarseth’s ‘textonomy,’ and would prefer cybertext to hypertext as the more inclusive, ‘catholic’ term. Espen Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore and London: Johns Hopkins University Press, 1997). I use ‘programmatology’ and ‘programmatological’ by extension from ‘grammatology’ and especially ‘applied grammatology’ as elaborated by Gregory Ulmer. Gregory L. Ulmer, Applied Grammatology: Post(E)-Pedagogy from Jacques Derrida to Joseph Beuys (Baltimore: Johns Hopkins University Press, 1985). Programmatology may be thought of as the study and practice of writing (Derridean sense) with an explicit awareness of its relation to ‘programming’ or prior writing in anticipation of performance (including the performance of reading). I try to avoid the use of the word ‘computer’ etc. and prefer, wherever possible, ‘programmaton’ for the programmable systems which we use to compose and deliver ‘new media.’ The title of the section of the p0es1s programme which provoked this paper - ‘Code as Text as Literature’ - is a case in point. This essay was originally sketched out for the “p0es1s: poetics of digital text” symposion (sic), held in Erfurt, 28-29 September, 2001 (http://www.p0es1s.com). In more extreme forms of such engagement, a radical post-human reductionism may be proposed, such as that, for example, which can be read from certain of Friedrich Kittler’s essays, in which the ramifications of “so-called human” culture, especially as played out on new media, become qualitatively indistinguishable from “signifiers of voltage difference” (“There Is No Software” 150), demonstrably the final, lowest-level ‘ground code’ of the increasingly familiar practices of cultural production which make use of programmable tools; and perhaps also essential to the brain activity which generates the objects and subjects of psychoanalysis. Kittler is reviewed by Bruce Clarke in ebr… Nowadays voltage difference accounts for and instantiates everything from the encrypted transactional play of internet banking to the promised consensual hallucination of immersive Virtual Reality. However, the purpose of this brief paper is to address a number of less productive confusions which arise from this engagement with code-as-text, citing a few examples of artistic practice and a number of critical sources. There are times when I would like to write ‘code-as-text’ and other times, ‘text-as-code,’ occasionally with either term cycling (code-as-text-code, etc.) I will just use the one term, asking the reader to bear in mind the other possibilities in appropriate contexts.
While allowing the value of certain metacritical statements such as Kittler’s (which take on questions of what culture is or may become), my aim is to disallow a wilful critical confusion of code and text, to make it harder for critics to avoid addressing one or the other by pretending that they are somehow equivalent, or that codes and texts are themselves ambiguously addressed to human readers and/or machinic processors (unless they are so addressed, however ambiguously). As an example of the prevalence of code-as-text across the widest range of artistic inscription, a version of the code-as-text or reveal code aesthetic appears as something of a culmination in Lev Manovich’s excellent and provocative The Language of New Media (not discussed in the body of the present essay because of my focus on textual and literal art practice). The final section of Manovich’s book is entitled ‘Cinema as Code’ and features Vuk Cosic’s ASCII films, “which effectively stage one characteristic of computer-based moving images - their identity as computer code.” Manovich is undoubtedly correct when he asserts that, “What [George] Lucas hides, Cosic reveals. His ASCII films ‘perform’ the new status of media as digital data… Thus rather than erasing the image in favour of the code … or hiding the code from us … code and image coexist.” Nonetheless, it is worrying to be presented, in this highlighted context, with the example of work whose aesthetic may well prove to be exhausted by a conceptual and metacritical analysis (see below), particularly in a book which makes an unprecedented contribution to our understanding of new and emergent rhetorical strategies in new media (especially the crucial role of cinematic rhetoric), and represents a deep understanding of new media’s programmatological dimension. Lev Manovich, The Language of New Media, ed. Roger F. Malina, Leonardo (Cambridge: MIT Press, 2001) 330-33.

I have invoked reductionism and by this I mean a critical thrust which, implicitly or otherwise, asks questions like, ‘What (ultimately) is this object we are examining? What is its structure? What are its essential or operative characteristics?’ and then finds special critical significance in the answers proposed. In N. Katherine Hayles’ sophisticated version of what can be read as a code-as-text argument, this reductive inclination is in evidence. Her essay ‘Virtual Bodies and Flickering Signifiers’ discovers a new or emergent object, the flickering signifier, and derives important consequences from its instantiations and methods. “The contemporary pressure toward dematerialization, understood as an epistemic shift toward pattern/randomness and away from presence/absence, affects human and textual bodies on two levels at once, as a change in the body (the material substrate) and a change in the message (the codes of representation).” N. Katherine Hayles, “Virtual Bodies and Flickering Signifiers,” in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999), 29. An earlier version of this essay is also published as: N. Katherine Hayles, “Virtual Bodies and Flickering Signifiers,” in Electronic Culture: Technology and Visual Representation, ed. Timothy Druckery (New York: Aperture, 1996). In other words, Hayles suggests that the constituent structure of the signifier itself may be seen as changed in contemporary culture and especially as expressed in ‘new media.’ Both the materiality and the represented content of cultural practice and production have been affected. Before examining parts of Hayles’ argument in more detail, I want simply to point out that it is clearly determined by its metacritical significance and has a reductive inclination: signifiers have come to be such and such, therefore - albeit in a cybernetic feedback loop - cultural production (in Hayles’ essay “the represented worlds of contemporary fiction”) follows suit. Hayles’ characterization of a multiply mediated signifier which flickers from level to level in chained coded structures is, as a metacritical statement, highly suggestive and useful. However, when it comes to art practice and the critique of this practice, how does such insight figure?

What is missing from Hayles’ analysis is a set of relationships - relationships constituted by artistic practice - between a newly problematized linguistic materiality and represented content. These would inevitably express themselves in formal as well as conceptual address to what she identifies as a changed matter of language and literature. Hayles’ chosen examples, with, perhaps, the exception of her use of William Burroughs, demonstrate conceptual rather than formal address; they represent flickering signification as concept rather than as instantiation in the language of the work. Hayles cites, most extensively, William Gibson’s Neuromancer as a prime example of represented content affected by and expressive of the flickering signifier. While Gibson brilliantly conveys the literally flickering, scanned and rasterized, apparent immateriality of an informatic realm, the ‘consensual hallucination’ of ‘cyberspace’ (his famous coinage) and its interpenetration of meatspace, he does this in a book - ‘a durable material substrate’ - in a more or less conventional novel, one in which, indeed, narrative predominates over character development and in which language functions in a relatively straightforward manner. Not even the narrative perspective (omniscient author third person) is shifted or experimentally inflected in any of Gibson’s cyberpunk classics. The writing is sharp and inventive but entirely subject to paraphrase.

There are further significant ironies here, for Hayles begins her essay by discussing typewriting. The physicality and static impression-making of this process of inscription is contrasted with that of word processing where less substantial bodily gestures cause word-as-(flickering)-image to be scanned onto the surface of a screen. “As I work with the text-as-flickering-image, I instantiate within my body the habitual patterns of movement that make pattern and randomness more real, more relevant, and more powerful than presence and absence” (Hayles, “Virtual Bodies and Flickering Signifiers,” 26). However, the exemplar most present later in her argument, Gibson, has made some play of his preference for composing his novels using a typewriter. William Gibson (1948-) [Web site] (Guardian Unlimited, 2001 [cited February 2002]); available from http://books.guardian.co.uk/authors/author/0,5917,96528,00.html. Michael Cunningham, The Virtual Tourist [a Short Interview with William Gibson] [Web site] (P45.net, 1996 [cited February 2002]); available from http://www.p45.net/dos_prompt/columns/3.html. “In real life, Gibson is actually the opposite of hi-tech. He maintains a high degree of goofy aloofness from the technologies he writes about in such obsessive detail - almost as if just using them would increase the risk of being somehow “infected” by them. He wrote his most famous novel, Neuromancer, on a 1927 olive-green Hermes portable typewriter, and only recently migrated to a battered old Apple Mac.” Gibson famously discussed his use of a typewriter in a phone interview for Playboy, August 30, 1996. “I do remember sitting with a blank sheet of paper and a typewriter going to ‘dataspace’ and ‘infospace’ and a couple of other clunkers, and then coming to ‘cyberspace’ thinking it sounds as though it means something.” I have touched on the question of Gibson’s and another influential contemporary novelist’s apparently conservative approach to, shall we say, avant-garde practice in a relatively early online work, John Cayley, “Why Did People Make Things Like This?” [Web site] (Electronic Book Review, 1997 [cited February 2002]); available from ebr Thus not only are the formal characteristics and the materiality of Gibson’s language at odds with the flickering signification of its represented content, but, at the very least, the once-preferred experience of this writer - his phenomenology of inscription - is an apparent denial of Hayles’ critical progression. I want to emphasise, in making these remarks, that if the subjective experience of the critic or reader is brought forward as evidence for a change in the structures of signification, then it is all the more important to examine the practices of the writer and the formal qualities of the work produced by those practices. Gibson sitting at a typewriter composing a novel may well produce a representation of the concept of flickering signification, but his practice does not necessarily embody the potential for new structures of meaning generation, or instantiate a corresponding materiality of language.

We will return to practice, but first I would like to examine Hayles’ flickering signifier in so far as it engages with the notion of code-as-text. Hayles, “Virtual Bodies and Flickering Signifiers,” 31. The immediately following quotations, interspersed with my comments, are from what I take to be a crucial paragraph in Hayles’ crucial article. “In informatics, the signifier can no longer be understood as a single marker, for example an ink mark on a page. Rather it exists as a flexible chain of markers bound together by the arbitrary relations specified by the relevant codes….” At least since Saussure, it seems somewhat redundant to point to the arbitrariness of any signifier-signified relation. I suppose that Hayles is actually referring to these relations as ‘arbitrary’ because they are not necessarily significant as human readings; they are not addressed to general human readers but only to the systems and systems-makers who have coded or specified them for certain purposes. They are, nonetheless, construable and are far from arbitrary when considered as addressed to the systems in which they are embedded. They have both significance and consequence. “…As I write these words on my computer, I see the lights on the video screen, but for the computer, the relevant signifiers are electronic polarities on disk….” That is, they are Kittler’s (fundamental) signifiers of voltage difference. “…Intervening between what I see and what the computer reads are the machine code that correlates these symbols with binary digits, the compiler language that correlates these symbols with higher-level instructions determining how the symbols are to be manipulated, the processing program that mediates between these instructions and the commands I give the computer, and so forth. A signifier on one level becomes a signifier on the next-higher level.” Hayles goes on to discuss the ‘astonishing power’ which these ‘arbitrary,’ hierarchically structured chains of codes generate, since manipulations, interpreted as commands at one level can have cascading, global effects. This is, presumably, ‘power’ in the now familiar technological sense, as used in the advertising and publicity for computer systems where, to relate the term with a more general or ‘Foucauldian’ sense, we may think of it as the power to alter the behaviour of a system in an impressive manner or at great speed, etc. By shifting the argument in this way, I think she has bracketed a more significant consequence of the structure of signification which she is delineating: the question of address, the address of the specific encoded ‘levels.’

In an article on ‘digital code and literary text,’ Florian Cramer has pointed out that, as he somewhat obscurely puts it, “… the namespace of executable instruction code and nonexecutable code is flat.” Florian Cramer, Digital Code and Literary Text [Article in Web-based journal] (BeeHive Hypertext/Hypermedia Literary Journal, 2001 [cited February 2002]); available from http://beehive.temporalimage.com/content_apps43/app_d.html. From the context his meaning is clear: that the same character or symbol set is used - for example - to transcribe both the text being word processed and (to be precise) the source code of the program which may be doing the word processing. On the level plains of letters and bits, there is no radical disjuncture in the symbolic media when we cross from a region of ‘executable’ text to text ‘for human consumption.’ From the human reader’s point of view, they are both more or less construable strings of letters; from the processing hardware’s point of view they are more or less construable sequences of voltage differences. On the one hand, this statement is related to the famous inter-media translatability of digitised cultural objects (once coded, regular procedures can be used to manipulate an image, a segment of audio, a text, etc. without distinction, disregarding the significance or affect of the manipulation). Cramer is, however, more concerned with the potential for sampling and mixing code and text (in the contemporary music sense). Again, as in Hayles’ analysis, the question of the address of specific code segments and texts is bracketed. Not only is it bracketed, but the range of positions of address is simplified, as if we are speaking of a flat letterspace for: code on the one hand and text on the other; whereas, clearly, there are many levels. Both Cramer and Hayles recognize a multi-level hierarchy of codes without elaborating or distinguishing them in the course of their discussions. Within the field of networked and programmable media, at the very least, we can acknowledge: machine codes, tokenised codes, low-level languages, high-level languages, scripting languages, macro languages, markup languages, Operating Systems and their scripting language, the Human Computer Interface, the procedural descriptions of software manuals, and a very large number of texts addressed to entirely human concerns. In passing it is worth highlighting the interface itself, particularly the ever-evolving HCI, as a complex programmable object with a structure like a language, including, in some cases, an underlying textual command-line interface which mirrors the now familiar mimetic and visual instantiation of users’ interface. This is another point for potential artistic intervention as well as a vital consideration when discussing the emergent rhetorics of new media, as Manovich has demonstrated so well, even introducing the powerful concept of ‘cultural interface’ (human-computer-culture interface) as an analytic tool. Manovich, The Language of New Media 62-115.
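To make Cramer’s point concrete, consider a toy illustration (mine, not the essay’s): in a language like Python, the very same string of characters can be addressed to a human reader as displayed text or to the interpreter as executable code; nothing in the shared symbol set marks the boundary.

```python
# The same characters, two addresses: displayed to a human as text,
# or handed to the interpreter as an executable instruction.
line = 'print("the code is not the text")'

print(line)  # addressed to the human reader: the string appears as writing
exec(line)   # addressed to the processor: the string runs as a program
```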

For Cramer, and not only for Cramer, this simplified, bracketed, or ambiguous textual address has become a valorised aesthetic and even a political principle: “…computers and digital poetry might teach us to pay more attention to codes and control structures coded into all language. In more general terms, program code contaminates in itself two concepts which are traditionally juxtaposed and unresolved in modern linguistics: the structure, as conceived of in formalism and structuralism, and the performative, as developed by speech act theory” (Cramer, Digital Code and Literary Text.) To attempt a paraphrase: working or sampled or intermixed or collaged code, where it is presented as verbal art, is seen by Cramer to represent, in itself, a revelation of underlying, perhaps even concealed, structures of control, and also (because of its origins in operative, efficacious program code) to instantiate a genuinely ‘performative’ textuality, a textuality which ‘does’ something, which alters the behaviour of a system. It has the ‘astonishing power’ of other cultural manifestations of new technology and new media, the power that Hayles has also recognized as a function of the coded structures arranged at various ‘levels’ in programmatological systems, chained together by a literal topography, which is ‘flattened’ by a shared symbol set. We should pause to consider what this power amounts to. What are the systems whose behaviour can be altered by this power?

In the criticism of theoretically sophisticated poetics there is a parallel aesthetic and political agenda, which I am tempted to call the Reveal Code Aesthetic. It is partly documented and particularly well-represented in, for example, Marjorie Perloff’s Radical Artifice, where ‘reveal code’ is revealed as a project of L=A=N=G=U=A=G=E writers such as Charles Bernstein, after having been properly and correctly situated in the traditions of process-based, generative and/or constrained literature and potential literature by Modernist, OuLiPian, Fluxus and related writers culminating, for Perloff, in John Cage and the L=A=N=G=U=A=G=E writers themselves. Marjorie Perloff, Radical Artifice: Writing Poetry in the Age of Media (Chicago: University of Chicago Press, 1991) 189. For a separate but related discussion of some of these issues, see John Cayley, Pressing the “Reveal Code” Key [Email-delivered, peer-reviewed periodical] (EJournal, 1996 [cited February 2002]); available from http://www.hanover.edu/philos/ejournal/archive/ej-6-1.txt. The work of Emmett Williams and Jackson Mac Low, central to any assessment of the radical poetic artifice which she identifies, as also for the criticism of writing in networked and programmable media, is notable for its absence from Perloff’s book. Although the political and aesthetic program of ‘reveal code’ appears to be shared with Cramer’s new media writers, in the context of Perloff’s poetics, the codes revealed and deconstructed in language per se (rather than digitised textuality) are as much those of “the inaccessible system core,” the machinic devices that conceal “the systems that control the formats that determine the genres of our everyday life.” (Radical Artifice 188; Perloff is citing an earlier form of Charles Bernstein’s “Play It Again, Pac-Man.”) While the progressive tenor of an aesthetic and political deconstruction underlies this project, there is something of a Luddite tone in Perloff. As more writers from this tradition make the move into ‘new media,’ this position begins to change. They become ‘new media writers,’ ‘digital poets,’ etc. and attitudes perceptibly shift. Writers also, of course, become more sophisticated in their understanding of programmatological systems. This can be seen particularly in Charles Bernstein’s subsequent writing on digital media and also, for example, in the work of Loss Pequeño Glazier, who is closely associated with the poetic practice which has developed from the L=A=N=G=U=A=G=E ‘school.’ See below, and, just-published, Loss Pequeño Glazier, Digital Poetics: The Making of E-Poetries (Tuscaloosa: University of Alabama Press, 2002), reviewed in ebr by Brandon Barr. The critical history of this (anti-)tradition in poetic literature is generally traced at least back to Mallarmé. A convenient source for its study can be found in the two-volume anthology: Jerome Rothenberg and Pierre Joris, eds., Poems for the Millennium: The University of California Book of Modern and Postmodern Poetry, vol. 1: From Fin-de-Siècle to Negritude (Berkeley: University of California Press, 1995). Jerome Rothenberg and Pierre Joris, eds., Poems for the Millennium: The University of California Book of Modern and Postmodern Poetry, vol. 2: From Postwar to Millennium (Berkeley: University of California Press, 1998). New media writers and artists necessarily have more ambiguous political and aesthetic relations with the control structures of the media which carry their work.

The code-revealing language artists discussed by Perloff, both in their work and in their performance - be it textual performance or performance art per se or activism or (academic) critical practice - represent far better examples of the instantiation of pattern/randomness (distinguished from presence/absence) than the novelists cited by Hayles, even including Burroughs or Pynchon. While retaining her focus on the contemporary or near-contemporary writers which she associates with an innovative, L=A=N=G=U=A=G=E-inflected poetics having avant-garde inclinations, Perloff recalls an extensive tradition of poetic literature which is marked both by its attention to the materiality of language and its radicalisation of poetic practices. Perloff invokes formations and works by individuals which are also referred to by critics of writing in networked and programmable media. Like Cramer, she discusses the OuLiPo (Ouvroir de Littérature Potentielle), the working group inspired and once led by Raymond Queneau, which is, perhaps, the primary reference for literary projects which are explicitly concerned with the application of algorithmic procedures, arbitrary constraint, generative or potential literature, and (relatively early) experimentation with the use of software. In doing so, she directly confronts the ‘repression’ of ‘numerical,’ generative procedures in poetry and poetics and turns to the work of John Cage as a cross-media figurehead. While only a minor aspect of his oeuvre, as compared with his major contribution to the art of (musical) sound, Cage’s mesostic texts, especially his ‘reading through’ of Pound and Joyce, stitch together a range of concerns - inter-media art, procedural composition, the rereading (and implicit deconstruction) of the High Modernists - which are highly relevant both to contemporary poetics and to writing in networked and programmable media. If Cage’s work is recalled in the context of the Fluxus movement (with which he is associated), then its relevance widens and deepens. Fluxus is a model of performative art practice (including explicitly literary practice) where the record of inscription is problematized (the work is an event, or the publication of a set of materials which must be manipulated by the reader/user), and where the presence/absence dialectic has been side-stepped by representations which may literally absent an artist-author. Perloff does not discuss Fluxus at length and so misses the opportunity to reassess and contextualize work by two of the most important practitioners of the ‘(numerical) repressed,’ Emmett Williams and Jackson Mac Low, both of whom deserve serious study as precursors if not ‘anticipatory plagiarists’ of writing in networked and programmable media. The term is an (ironic) OuLiPian one, used of any prior instantiation of work generated by a procedure which has subsequently been invented and specified by the OuLiPo. Such a discussion is beyond the scope of this essay. Fluxus also provides a historical, critical link to the traditions of visual and concrete poetics, which are discussed in Perloff’s account, particularly relevant work by Steve McCaffery and Johanna Drucker. The materiality of this work, considered as language art, visibly demonstrates a radical engagement with linguistic media and a requirement for the reader to engage with the codes - textual, rhetorical, paratextual, visual, etc. - by and of which the work is constituted.

If such prior work remains inadequately acknowledged in the discussion and reassessment of ‘codework,’ this may be, in part, simply because the traces of its inscriptions are captured and recorded in the ‘durable material substrates’ of print culture. Whereas Lacan’s ‘floating signification’ is read as an analytic metaphor, applied to language borne by a delivery medium (print) on which the signs of the interface texts literally ‘rest’ (where they have been impressed) or, at best, ‘interleave’ (they do not ‘float’), we read Hayles’ ‘flickering signifiers’ (as she encourages us to do) as literally ‘flickering,’ and constituent, as such, of text which has become ‘screenic.’ As such, it seems to exist elsewhere, not on the page but through the window of the screen in the informatic realm (Manovich, The Language of New Media 94-115). Undoubtedly, there are clear and historical distinctions of delivery media for text. Nonetheless, we must be careful to distinguish the effects of delivery media on signification and affect from those produced by shifts in the compositional media, and there is great congruence between the approach to compositional media of certain print-based writers (such as those discussed by Perloff, for example) and the potential use of compositional media which is suggested by new media, i.e. new delivery media. This potential of text- and language-making is not necessarily engaged simply because new delivery media happen to be employed. The argument here is a rehearsal of the familiar but ever-important argument against art practice, particularly new media art practice, as media-specific or media-determined. Cramer’s essay makes similar points. The locus classicus for a multi-layered, multi-level code-inflected writing and reading is, of course, Barthes’ S/Z, as Hayles explicitly acknowledges. Roland Barthes, S/Z, trans. Richard Miller (Oxford: Blackwell Publishers, 1990). N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999) 46. S/Z was concerned with a short story programmed in ‘a persistent material substrate’ but Barthes was nonetheless able to demonstrate the potential for an iterative flickering of hermeneutic attention across structured linguistic codes, implying, I would argue, perfectly adequate complexity, mobility, and programmability in the compositional media. Barthes’ essay, after all, was not a demand for new media but a (re)call to new or latent ways of reading and writing.

We turn, nonetheless, to examples of what Cramer calls ‘codework.’ Cramer cites (amongst others) some of those writers in networked and programmable media whose work I, too, would consider in this context: Mez, Talan Memmott, Alan Sondheim, Jodi (references to specific works are given below). Leaving Jodi to one side for the moment, these are all artists who both work with code and make coded, programmatological objects. They are particularly known and notable for working code and code elements into what we might call the ‘interface text’ (the words which are available to be read by the human audiences they address). Although I do not make use of his analysis in this essay, it is well worth referring to Philippe Bootz’s analyses of systems-mediated textuality, where I believe my ‘interface text’ roughly corresponds to his ‘texte-à-voir.’ See Philippe Bootz, “Le point de vue fonctionnel: point de vue tragique et programme pilote,” alire 10 / DOC(K)S Series 3, no. 13/14/15/16 (1997). The result is a language which seems to be - depending on your perspective - enlivened or contaminated by code. In the rhetoric of this type of artistic production, contamination or infection (see Cramer as quoted above and Hayles below) is more likely to be the requisite association since transgression of the deconstructed systems of control is an implicit aspect of the aesthetic agenda.

The language certainly reveals code and code elements, but what code does it reveal? What does it tell a code-naïve reader about the characteristics and the power of code? Is it, indeed, still code at all? At what level does it sit in the chained hierarchies of flickering signification? Has it been incorporated into the ‘interface text’ in a way which reflects its hierarchical origin, if it has one? Only if these and other questions can be given answers which specify how and why code is sampled in this writing would it be ‘codework’ in a strong sense. (Perhaps we should reserve Mez’s ‘code wurk’ for the weaker sense of code-contaminated language.) In the case of all of these writers (we’ll come to Jodi shortly) the code embedded in the interface text has ceased to be operative or even potentially operative. It is ‘broken’ in the now familiar programmer’s jargon. The breakdown of its operations eliminates one aspect of its proposed aesthetic value and allure, its native performative efficacy (which Cramer identified as a final throwaway without actually demonstrating or elaborating): the power of code to change the behaviour of a system. The code-as-text is more in the way of decoration or rhetorical flourish, the baroque euphuism of new media. This is not to say that - as part of the interface text - it may not generate important significance and affect. In particular, the address of this type of intermixed, contaminated language is often concerned - as shown in the work of all of these writers - with issues of identity, gender, subjectivity, technology, technoscience, and the mutating and mutable influence they bring to bear on human lives and on human-human and human-machine relationships.

For the moment, however, we are more concerned with certain formal and material characteristics of the resulting language. In a recent conference paper, Hayles has discussed the language of Memmott’s From lexia to perplexia in terms of pidgins and creoles. “In this work the human face and body are re-coded with tags in a pidgin that we might call, rather than hypertext markup language, human markup language. Code erupts through the surface of the screenic text, infecting English with programming languages and resulting in a creole discourse that bespeaks an origin always already permeated by digital technologies.” N. Katherine Hayles, “Bodies of Texts, Bodies of Subjects: Metaphoric Networks in New Media” (paper presented at the Digital Arts and Culture conference, Providence, RI, 2001). This is cited from the version of the paper posted in PDF form before the conference. Please note: Hayles may well have revised it since. Similarly, Mez has characterized her textual production as written in a new “language/code system” which she calls ‘mezangelle.’ Mez, _the Data][H!][Bleeding T.Ex][E][Ts [Website] (2001 [cited February 2002]); available from http://netwurkerz.de/mez/datableed/complete/index.htm. “the texts make use of the polysemic language/code system termed _mezangelle_, which evolved/s from multifarious email exchanges, computer code re:appropriation and net iconographs. to _mezangelle_ means to take words/wordstrings/sentences and alter them in such a way as to extend and enhance meaning beyond the predicted or the expected.” It is perhaps unfair to treat what may be metaphoric usages as literal; however, I believe this use of pidgin and creole is, in particular, a significant misdirection. A pidgin is a full-blown language, albeit arising from the encounter and hybridization of two or more existing languages; a creole is a pidgin which has become a first language for speakers raised by previous generations who have created or used a pidgin. The point here is that, in the case of a pidgin, the elements which combine to generate new language are commensurate - linguistic material is not simply being injected from one hierarchically and functionally distinct or programmatologically-operative symbolic sub-system (which is subsumed within a full-blown culture-bearing system of human language use) into another. The creation of a pidgin is, furthermore, the result of interactions by commensurate entities, i.e. humans. In the code-as-text which we have seen to date - in the texts of a reveal code aesthetic - human-specified code elements and segments are, typically, incorporated into what I have called the ‘interface text’ which is unambiguously and by definition an instance of some human-readable language. It may be contaminated, jargonized, disrupted language, but it is not a new language, not (yet) evidence for the invasion of an empire of machinic colonizers whose demands of trade and interaction require the creation of a pidgin by economically and linguistically disempowered human users. Not ‘(yet)’ as I say, although some might wish to try making the strong case for an emergent machinic culture, which is, I believe, a serious project although a misdirection in this context. The codeworks currently available to us extend, infect, and enhance natural language, but they do not create, for example, Code Pidgin English. As in the term ‘Chinese Pidgin English.’ Cf. for example, the discussion in Charles F. Hockett, A Course in Modern Linguistics (New York: Macmillan, 1958) 420-23.

The code has ceased to function as code. The resulting text pretends an ambiguous address: at once to human reader and to machinic processor, but both human and machine must read the code as part of human discourse. We would not try to compile the code in the interface texts of Memmott, Mez or Sondheim. Nonetheless, this pretended ambiguity of address remains important to the aesthetics of this work. It assumes or encourages an investment on the part of its readers in the technology of new media and, especially, in the dissemination of textual art in new media. Thus, the experiences of the reader in these worlds can be brought to bear on their reading of the codework and they can appreciate, through more-or-less traditional hermeneutic procedures, the references and allusions to technology, technoscience and the issues with which they confront us. However, I would argue, if this pretended ambiguity of address exhausts the aesthetics and politics of a project (I am not saying that it does in any of these cases) then it leaves open questions of the work’s affect and significance when compared, for example, with previous poetic work in more durable material and linguistic substrates, some of which has been cited above.

The work of Sondheim needs to be singled out, in terms of practice and form, since his use of code is well-integrated into a long-term and wide-ranging language art project. The print-media version of Jennifer, for example, reads more in the tradition of innovative or avant-garde writing than as subsumed within codework or a reveal code aesthetic. Alan Sondheim, Jennifer (Salt Lake City: Nominative Press Collective, 1998). Eastern US: http://jefferson.village.virginia.edu/~spoons/internet_txt.html; western US: http://www.anu.edu.au/english/internet_txt; additional images: http://www.cs.unca.edu/~davidson/pix/. The internet references associated with this citation will lead on to Sondheim’s wider project. Most of the texts in this selection are manipulated language, but often using procedures which are not directly related to codes and processing. Thus, while his overt subject matter - mediated gender and sexuality, explicitly inflected by computing and technoscience - and his explicitly chosen media keep him immediately allied with codeworking colleagues, Sondheim’s work must also be read against earlier and contemporary writers working within or with a sense of the formally and aesthetically innovative traditions of poetics, and not only the poetics which intersects with Burroughs and Acker. With the implication that Sondheim’s writing needs to be judged as such and should not necessarily be granted a special credit of affect or significance because of its instantiation in new media.

In the necessity to read the work in both a programmatological context and in the broader context of innovative writing - though in this sense only - Sondheim’s engagement rhymes momentarily with that of Loss Pequeño Glazier. Glazier and his work represent a literal and explicit embodiment of “a set of relationships - relationships constituted by artistic practice - between a newly problematized linguistic materiality and represented content.” Glazier has produced a body of work, grounded in an existing writing practice, which has covered a wide range of potential forms for digital poetics and he has, moreover, documented and analysed this trajectory in a series of critical contributions, most recently in the book gathering many of these papers and essays: Glazier, Digital Poetics: The Making of E-Poetries, which, please note, includes a chapter devoted to “Coding Writing, Reading Code.” Glazier’s work has been done while he has also served as one of the motive forces and prime initiators of the major resource for innovative writing on the internet, the Electronic Poetry Center at the University of Buffalo, http://epc.buffalo.edu. Glazier’s work is characterized by his use of code and the language of code. In this, I believe, he affords himself significant ironies. He writes, for example: “The language you are breathing become the language you think… These are not mere metaphors but new procedures for writing. How could it be simpler? Why don’t we all think in UNIX? If we do, these ideas are a file: I am chmoding this file for you to have read, write, and execute permission - and please grep what you need from this! What I am saying is that innovative poetry itself is best suited to grep how technology factors language and how this technology, writing, and production are as inseparable as Larry, Moe, and Curly Java” (Glazier, Digital Poetics 31-32). This is discursive prose of a kind, but it is infected or contaminated by both code and poetry. Glazier doesn’t think in UNIX, nor would he ever wish to do so. But his language is not ‘mere metaphor’ (poetry is not metaphor); it is centred on language-making (what poetry is), and it demands a poetic practice which is alive to new procedures and new potential and which is sensitive to the changes this practice produces in the materiality of the language itself. Apart from its engagement with code and coding, Glazier’s work is also characterized by its bilingualism, or rather the multi-lingualism of ‘America’ in the sense of a Latin America which exists as historical and political soul and shadow throughout, arguably, the greater part of the United States. I raise this point to highlight distinctions in the way we may choose to consider the non-standard-English material in Glazier’s (and others’) texts (while recalling Hayles’ metaphoric analysis via ‘pidgin’ and ‘creole’). In a Glazier text, there is a use of English intensified by an address to the materiality of language. There is the incorporation (in a strong sense, sometimes within the body of a word) of linguistic material from Spanish and other languages, especially those indigenous to Mexico. There is a similar incorporation of linguistic material from code and from computing jargons. See: Loss Pequeño Glazier, “White-Faced Bromeliads on 20 Hectares” [Javascripted algorithmic text on website] (Electronic Poetry Center, 2001 [cited February 2002]); available from http://wings.buffalo.edu/epc/authors/glazier/java/costa1/00.html.
This relatively recent work illustrates my specific points but also demonstrates that Glazier has been exploring the properly programmatological dimension of writing in networked and programmable media with, for example, kinetic and algorithmic texts. A classified selection of texts is at: http://epc.buffalo.edu/authors/glazier/e-poetry/. But whereas the use of other natural language material evokes significance and affect which is commensurate with human concerns - personal, political, social and cultural history, etc. - the use of ‘codewords’ evokes other concerns, closer to questions of technology and the technology of language. Glazier would rather think in Nahuatl than in UNIX, but in practice he prefers to think in P=O=E=T=R=Y.

Jodi takes us to another point in the textonomy of code-as-text, a relatively extreme position where code-as-text is, perhaps, all there is. Jodi is the very well-known, long-standing net.art project of Joan Heemskerk and Dirk Paesmans. Jodi, www.jodi.org [Website] (Jodi, 1980- [cited February 2002]); available from http://www.jodi.org. It is difficult to say anything hard and fast in terms of more-or-less conventional criticism about a site which is hardly ever the same on successive visits. Instead, I want to refer to what I remember of a visit in which a dynamic html- and javascript-mediated experience proved to have been delivered by html source which was, itself, a work of ASCII art. The practice of composing ASCII symbols, usually displayed in monospaced fonts for regularity, in order to generate imagery. In Jodi’s case this was abstract or verging on the abstract whereas, popularly, ASCII art has been figurative. Here, the actual code is a text, an artistic text. However, the code is not, in this instance, working code (at least not ‘hard-working,’ shall we say). It is comprised of code segments which are ignored in the browser’s interpretation and rendering of the html. The syntax of this markup language is particularly easy to manipulate or appropriate in this way because comments (ignored by any interpreter, by definition) may be extensive and because interpreters (browsers in this case) are, typically, programmed to ignore any <tagged> thing which they cannot render. The code works, but it is not all working code. Again it represents only a pretended ambiguity of address: its primary structures of signification were never meant for a machine or a machinic process.
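
The mechanism is easy to demonstrate. Below is a minimal sketch (my toy, not Jodi’s actual source) in Python: a standard markup parser hands comment material to one handler and renderable text to another, so whatever ‘artistic text’ is lodged in the comments is parsed but never reaches the rendered page:

    # A toy demonstration that markup interpreters set comments aside:
    # the ASCII-art 'source text' is read but never rendered.
    from html.parser import HTMLParser

    class RevealCodeParser(HTMLParser):
        def handle_data(self, data):
            if data.strip():
                print("rendered:", data.strip())
        def handle_comment(self, comment):
            print("ignored by the renderer:" + comment)

    page = """<html><body>
    <!--
      \\|/  a text lodged in the source,
      -o-  addressed only to readers
      /|\\  who choose to reveal the code
    -->
    <p>the ordinary interface text</p>
    </body></html>"""

    RevealCodeParser().feed(page)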

I, too, have made a few ‘codeworks’ of a not dissimilar kind. By extracting and manipulating segments of the close-to-natural-language, very-high-level, interpreted programming language, HyperTalk, I was able to make human-readable texts which are also segments of interpretable, working code:

on write
   repeat twice
     do “global ” & characteristics
   end repeat
   repeat with programmers = one to always
     if touching then
       put essential into invariance
     else
       put the round of simplicity * engineering / synchronicity + one into invariance
     end if
     if invariance > the random of engineering and not categorical then
       put ideals + one into media
       if subversive then
         put false into subversive
       end if
       if media > instantiation then
         put one into media
       end if
     else
       put the inscription of conjunctions +

          I-CHASS Receives Grant to Build Fab Lab in Togiak, Alaska to Provide Educational and Economic Opportunities, and to Study Indigenous American Perspectives of Technology        
I-CHASS members Dr. Alan Craig (I-CHASS Associate Director for Human Computer Interaction) and Dr. Scott Poole (I-CHASS Director), in collaboration with the Alaska Federation of Natives, the Traditional Council of Togiak, and the University of Alaska Fairbanks-Bristol Bay Campus, have been awarded a National Science Foundation grant of $299,963 for a project entitled “Bridging the Divide: Exploring Native Approaches.” The grant begins with an award of $157,504 this year, with the remainder distributed next year contingent upon the availability of funding to NSF from the federal government and progress on the project. The grant will be used to plan and develop a research project that studies the implementation of a fabrication laboratory (Fab Lab) containing cutting-edge technology in a rural Alaska Native village on the Bering Sea.
          Augmented Reality Alma Mater        
When it was discovered that the famed Alma Mater statue would still be undergoing extensive repairs during the class of 2013's graduation festivities, all facets of the University, including I-CHASS Associate Director for Human Computer Interaction Dr. Alan Craig, worked together to create an "augmented reality" version of the iconic statue for students to take a picture with. Learn more in this short video documentary.
           An Evaluation of Older Adults Use of iPads in Eleven UK Care-Homes         
Jones, Tim and Kay, Daniel and Upton, Penney and Upton, Dominic (2013) An Evaluation of Older Adults Use of iPads in Eleven UK Care-Homes. International Journal of Mobile Human Computer Interaction, 5 (3). pp. 67-76. ISSN Print: 1942-390X Online: 1942-3918
          Hidden Figures        

"Hidden Figures" Movie PosterBy Venus Malave, IT Support Specialist for Pennsylvania Coalition Against Rape and the Nation Sexual Violence Resource Center

Hidden Figures is a true story, based on the book by Margot Lee Shetterly and directed by Theodore Melfi, about three African-American women who worked for NASA and helped launch the first successful American crewed orbital mission. All this in an era when white supremacy was still at its peak and segregation was still the norm for people of color.

“Human computers” is what they called the mathematicians who performed scientific calculations by hand, before the first commercial scientific computer was created by IBM. This amazing untold story gives a riveting account of three African-American women: Katherine Johnson (played by Taraji P. Henson), Mary Jackson (played by Janelle Monáe) and Dorothy Vaughan (played by Octavia Spencer), who worked for NASA with over 30 other women of color in the segregated West Area Computing Unit of the Langley Memorial Aeronautical Laboratory in the 1950s and 60s. The film takes you through the encounters with racism and sexism that women, especially women of color, had to endure at a time when white male privilege held priority.

One of my favorite parts in the movie is when Katherine goes on a restroom run, which meant running every day from the East Unit to the West Unit. One rainy day, on one of her runs with paperwork in her hands, she encounters her boss, who questions her absence from her desk for 45 minutes every day. Katherine’s bold response is surprising yet heartbreaking. She firmly and confidently expresses her frustration at having to run half a mile in heels from the East Unit to the West Unit because there are no colored restrooms in their unit - something she knew could result in the loss of her job.

Watching Hidden Figures gave me both excitement and sadness: excitement because my favorite actress, Taraji P. Henson from “Empire,” is in it; sadness because this was a glimpse of what women of color had to go through before my time. Doing research on these hidden figures was challenging because I came across so many stories of women who were never mentioned in my history books: mathematicians like Katherine, Dorothy and Mary, aerospace engineers like Dr. Christine Darden, as well as modern-day rocket scientists like Olympia Ann LePoint and many other women who were some of the masterminds behind space travel. The thought of a person of color going to outer space was, for me, unimaginable. It was just a dream I knew was not for people like me. How could it be? I had never seen a person of color in my history books who was a mathematician, an astronomer, or a rocket scientist.

I would have loved to have learned more about these hidden figures growing up, because if I had, maybe, just maybe, the fascination I had with the stars and the universe would have guided me toward a different career path, such as astronomer or astrophysicist.

          May 2016        
INDUSTRY NEWS

SkinTrack: The future of mobile interfaces

“SkinTrack” takes wearable electronics to a whole new level by creating an interface on your hands and arms. From wearable, flexible electronics for the skin to implanting electronics into your skin, modern technology has begun to reshape how we interact with our “smart” electronics. The new wearable technology, developed by the Future Interfaces Group at Carnegie Mellon University’s Human-Computer Interaction Institute, seeks to completely redefine the term “wearable skin electronics” and usher in a new age of interfaces. This discovery takes wearable electronics to the next level by creating a smartphone interface on your arm. With the advent of the smartwatch, many in the technology field have been on the lookout for ways to expand interactions beyond the restricted space of the small smartphone screen. The technology, dubbed “SkinTrack,” allows for continuous touch tracking on the hands and arms. It also has the ability to detect touches at discrete locations on the skin, enabling functionality akin to that of buttons or slider controls. Previous iterations of skin-to-screen systems have often relied on flexible overlays (like the Flexi-Skin developed at the University of Tokyo’s Graduate School of Engineering), interactive textiles and even projector/camera combinations. Often these can be quite cumbersome and restrictive.

The Expanded Interface

The beauty of SkinTrack by comparison is that users are only required to wear a signal-emitting ring along with their smartwatch. The ring propagates a low-energy, high-frequency signal through the skin when the finger touches or nears the skin’s surface. SkinTrack is not obtrusive, making it assimilate easily and effectively into our array of everyday worn items, i.e. watches and rings. “A major problem […]
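
To make the sensing idea concrete, here is a deliberately toy sketch in Python (my own illustration, not CMU’s published algorithm; every name and number in it is hypothetical): if two electrodes in the watchband measure the strength of the ring’s signal, their ratio can be mapped to a continuous position along the arm and then quantized into button-like zones.

    # Hypothetical sketch: estimate a finger's position along the forearm
    # from the relative signal strength at two watchband electrodes, then
    # quantize the continuous position into discrete 'button' zones.
    def touch_position(strength_near, strength_far):
        """Return a 0.0-1.0 position between the two electrodes, or None."""
        total = strength_near + strength_far
        if total == 0:
            return None  # no signal: no touch detected
        return strength_far / total  # more far-electrode signal = farther touch

    def to_button(position, zones=3):
        """Map a continuous position to one of several discrete zones."""
        return min(int(position * zones), zones - 1)

    pos = touch_position(strength_near=0.8, strength_far=0.2)
    print(pos, "-> button", to_button(pos))  # 0.2 -> button 0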
          Actually, She Did That: The Civic Lab for Women's History Month        
The team of folks here at my library who curate the Civic Lab were having a meeting a few weeks ago where we were discussing potential topics for future Civic Lab pop-ups. Sometimes we tie our pop-ups to formal programs on our calendar, sometimes to topics in the news, sometimes to installations in the library, and sometimes to specific days or months of import or conversation. We were brainstorming what topic to focus on for Women's History Month, and we had plenty to choose from--there's a lot going on right now affecting women, have you noticed? You might be surprised, then, to hear that the person who came up in conversation was Kanye.

Or maybe you're not too surprised, because he came up in the context of one particularly annoying and eye-roll-inducing line from Famous: "I made that bitch famous," said in reference to Taylor Swift. As if he, a man, made her, a huge pop star who is a woman, famous because he physically took the stage and microphone away from her while she was winning an award. Gross.

And so we had our topic for the Civic Lab for Women's History Month: women who have accomplished something but who do not get their deserved credit (often it goes to a man or group of men), or who are better known for something irrelevant to their accomplishments.

We called it "Actually, She Did That"--taking the mansplainer's favorite opening word of "actually" and shedding light on some excellent women throughout history whom many do not know and whose accomplishments have been snatched from them.


The central activity in "Actually, She Did That" was a game of sorts. On a column constructed out of our multipurpose crates, we affixed large images of 11 different women who fit our criteria stated above. (As one of the mother/daughter participant pairs said, these 11 are only the tip of the iceberg when it comes to women not getting the credit due to them.) Each image included the woman's name and date of birth (as well as death, where relevant). On the table next to the column, we had 11 slips of paper. Each slip noted the accomplishment of one of these women, with a parenthetical about how or why she hasn't gotten credit for that accomplishment. The goal was to try to match the woman to her accomplishment, learning more about these 11 fantastic women along the way.

Our 11 featured women were:
  • Nellie Bly (1864-1922) - Bly was a brilliant, pioneering journalist, despite popular opinion that she couldn't be a good journalist because she was a woman. Bly was an early undercover investigative journalist, checking herself into a mental asylum and writing articles exposing the despicable treatment of (mostly female) patients in these facilities.
  • Selma Burke (1900-1995) - A sculptor, Burke was the artist behind the FDR profile that was used on the dime. Yet the (male) engraver typically gets credit for the design, rather than Burke.
  • Laverne Cox (1984- ) - Cox is the first transgender actress to be nominated for an Emmy in an acting category. Yet despite her talent and prowess as an actress, much media coverage of Cox returns to questions about her gender assigned at birth--regardless of its lack of relevance to her career.
  • Rosalind Franklin (1920-1958) - Franklin's research led to her discovery of the double helix structure of DNA. Her male lab partner stole her findings and gave them to Crick and Watson, who went on to win the Nobel Prize for DNA discoveries.
  • Katherine Johnson (1918- ) - One of NASA's "human computers" whose supreme math skills allowed early astronauts to safely start to explore space, Johnson and her colleagues have only recently started to get recognition due to the book and film Hidden Figures.
  • Regina Jonas (1902-1944) - The first female rabbi, Jonas was refused ordination for years despite having gone through the same training as her male colleagues. She was finally ordained before being sent to a concentration camp. She died in Auschwitz.
  • Hedy Lamarr (1914-2000) - Lamarr was a brilliant inventor, developing spread spectrum communication and frequency hopping technology which are now the basis for cell phones and wi-fi. Yet she is often known only for being a beautiful actress.
  • Ada Lovelace (1815-1852) - She wrote the first computer program, although her male friend Charles Babbage is usually credited as the first computer programmer. Lovelace is usually identified first as the daughter of Lord Byron. So not only does she not get credit for what she did, but she's defined in relation to her male relative.
  • Wilma Mankiller (1945-2010) - Mankiller was the first female chief of the Cherokee Nation. Many American history texts ignore her leadership and maintain there has never been a female head of state in the U.S.
  • Arati Prabhakar (1959- ) - Prabhakar was the head of DARPA, the Defense Advanced Research Projects Agency, from 2012 until January of this year. Research and developments under her watch have included huge strides in biomedical technology like prosthetics. Credit is typically given to the presidential administration at the time of the invention.
  • Chien-Shiung Wu (1912-1997) - Called the "First Lady of Physics," Wu worked on the Manhattan Project. Her work in nuclear physics won a Nobel Prize for her male colleagues, but she was not recognized, even though the winning experiment was called the "Wu Experiment."

We had some really wonderful conversations with patrons as they engaged in this activity. Many recognized a few names or pictures, but couldn't place their finger on where they'd seen or heard of these women before. We shared biographical facts with participants, many of them shaking their heads in frustration at just how common this type of credit-stealing is. One teen girl, participating with a friend, remarked after hearing the stories of several of the women, "Why do they keep giving away credit?" We talked about how it wasn't a question of these accomplished women giving away credit, but rather of them having credit taken from them or given to someone else. These teens got mad. They demanded better: for the world to see them and their friends and other women. As it should be.

Alongside this activity of matching women to their accomplishments, we also had a few other elements available for Civic Lab participants. We had a number of great titles on offer for folks interested in learning about more women and their accomplishments, including:
  • 50 Unbelievable Women and Their Fascinating (And True!) Stories by Saundra Mitchell, illustrated by Cara Petrus
  • Bad Girls Throughout History: 100 Remarkable Women Who Changed the World by Ann Shen
  • The Book of Heroines: Tales of History's Gutsiest Gals by Stephanie Warren Drimmer
  • Dead Feminists: Historic Heroines in Living Color by Chandler O'Leary & Jessica Spring
  • Rad American Women A-Z by Kate Schatz, illustrated by Miriam Klein Stahl
  • Wonder Women: 25 Innovators, Inventors, and Trailblazers Who Changed History by Sam Maggs, illustrated by Sophia Foster-Dimino

We also put together a handout with resources for hearing more women's stories through an email newsletter, podcasts, and online videos. (See the handout here.)

The handout also includes three questions to get folks considering the stories of women in their own lives, as well as how they can make space to hear and share the stories of women:
  1. What have women in your life accomplished? Have they gotten credit for these accomplishments?
  2. What would you say to them in acknowledgement of what they have accomplished?
  3. How can you help to share the stories of women and their work?

We intentionally posed that first question on one of our crates, and we provided sticky notes and pencils for participants to weigh in. During the two hours a coworker and I facilitated "Actually, She Did That," however, no one wrote a response to the question. We don't think it was from lack of interest, but rather from the greater appeal of learning about the women whose images were front and center in the installation. We're hopeful that the public question, as well as the handout, provided fodder for reflecting on the women in participants' lives.

Monday was appearance number one for "Actually, She Did That." We'll be popping up again this Friday, and we're eager to see what types of interactions are prompted this time around. From there, we want to think about how to continue this idea of making clear space for women and women's stories beyond just Women's History Month.


          Cambridge University & Toshiba | Zoe the emotional avatar of the future        
Cambridge University & Toshiba | Meet Zoe, a digital talking head that can express human emotions on demand with unprecedented realism and could herald a new era of human-computer interaction. A virtual talking head that can express a full range of human emotions and could be used as a digital personal assistant, or to [...]
          EuroVis 2017 Conference Report, Part 3        
Thursday and Friday at EuroVis brought a few papers on storytelling, a new toolkit for running online studies, a better way to put your list of publications online, and a lot more.

Visualization Models & Human Computer Interaction

The day started with my humble paper, An Argument Structure for Data Stories. I propose a simple […]
           A data structure for representing multi-version texts online         
Schmidt, Desmond and Colomb, Robert (2009) A data structure for representing multi-version texts online. International Journal of Human Computer Studies, 67 (6). pp. 497-514. ISSN 1071-5819
          China Will Overtake the US in Computing…Maybe, Someday…        
[note: the following is a rough draft -- I appreciate comments as I work this into shape and add relevant links to further sources]

December 6, 2011

Abstract:
Today, The New York Times published an article by Barboza and Markoff titled “Power in Numbers: China Aims for High-Tech Primacy.” This article echoes frequently expressed alarmist opinions that China is poised to take over the world. I have lived in Beijing for the past 2.5 years as a visiting researcher at Microsoft Research Asia and have taught Computer Science classes at Tsinghua University, and it is my opinion that China has major obstacles to overcome before becoming a high-tech powerhouse. The biggest of these is the way creativity is discouraged in Chinese classrooms. Chinese students who spend time at western universities do pick up creative skills. Creativity and the inclination to challenge norms in disruptive rather than incremental ways are at the heart of computing innovations. These traits are all but absent from Chinese universities. A solution I pose is an initiative called World Lab. We need a place for people from various cultures, backgrounds, and countries to come together to take risks in designing new technologies and to train students to become global leaders.

Today's NY Times article by Barboza & Markoff, “Power in Numbers: China Aims for High-Tech Primacy,” would lead you to believe that the title of this blog entry (“China Will Overtake the US in Computing”) is almost a certainty. I could not read this somewhat alarmist article without cringing, as it follows a pattern of reporting on China that I’ve seen since before I moved to China in 2009 and that I have noticed more frequently over the past two years, now that I’m more sensitized to the realities of China’s economic rise. This lack of subtlety and nuance on China is what I’ve come to expect from media outlets such as CNN, and I am more surprised to see it from seasoned journalists who are respected for their expertise - Barboza for reporting here in China and Markoff for reporting on computing.

As I prepare to leave next week to return to my position at the University of Washington, I am starting to reflect on what I have learned in my 2½ years in China. My own view is that there is incredible potential in the computing field in China – this is one of the many reasons I chose to pick up my family and move here. At the same time there are many important barriers to China’s eventual rise in computing and these barriers will not fall on their own without efforts at reforming both the educational system and government regulation, let alone certain Chinese cultural norms that are thousands of years old. That is why I’ve subtitled this blog entry “…Maybe, Someday…”. That is, I don’t believe China will rise above the US in computing anytime soon and if it is to do so, several important changes must first take place.

In the rest of this article I’ll try to touch on 1) why I am qualified to even have an informed opinion on China’s rise in computing, 2) what I saw as the misconceptions or omissions in the Barboza & Markoff NY Times article, and 3) what I think China must do to reach its potential in computing, and why I think this is a good thing and not something the West should be worried about.

Who am I to Comment on Chinese Computing
As I read the NY Times article I was a bit surprised by some of the folks they had asked to comment on the state of Chinese computing. I started to think, “who are the proper experts on this topic?” Later, as I pondered this question, I began to think I’m as good an expert as anyone, at least from the academic computer science side, to comment on the rise of China in computing. Why is that?

I have spent 2½ years living in China and in that time I have: worked at Microsoft Research Asia (MSRA), the top research organization in the country; taught at Tsinghua University, home to the top computer science department in the country; and organized several major technical research events in China. Before coming to China, I earned my PhD at Carnegie Mellon University (CMU), one of the top computer science departments in the world, earned tenure at Berkeley (another top department), founded a start-up, ran a ubiquitous computing research lab for Intel, and served as a professor at the University of Washington (another top computer science department). More detail on my background is here. I think this experience puts me in a good position to make an informed assessment of computing in China. You be the judge. I’m sure I’m not right on everything and these are just my opinions, but after two years I’ve seen quite a bit, talked to many people, and I’m starting to have a good feel for what is going on here in China.


What is Wrong with the Rising View
I believe there is no question that China is quickly rising in all endeavors, whether it is in terms of China’s economics, infrastructure (think ports, highways, freight railway, and high speed rail), education, science, or technology. It is an amazing sight to see firsthand and the energy one feels living here during this important time in history is quite incredible (more than even in Silicon Valley during the 1st Internet boom of the mid to late 90s). Computing is no different from these other areas and China has made huge strides in 20 years, as reported in the NY Times article.

The key questions to ask are 1) where is China with respect to the US and the West in terms of computing today? and 2) where will China be in the future? The impressions the NY Times article gives on both of these questions are where I felt it most lets the reader down. Let’s cover each of these in turn.

Where is China Computing Today
Academic computer science has been the underlying basis for many of the major commercial strides in computing in the US (e.g., the Internet, the graphical web browser, compression for wireless communications, cloud computing, speech recognition, web indexing and search, gesture and touch-based user interfaces, location-aware computing, etc.).

China has made big strides in academic computer science over the past 20 years in terms of expansion of its programs and making a shift from mainly producing software for state-owned companies to undertaking leading edge computing research and education. In fact, China has passed major milestones in the past 5 years in terms of government support for research and in starting to publish in top computing journals and conferences.

Everything’s Big in China
Five to ten years ago, one would almost never see papers at the top academic computing conferences from China’s researchers, with the exception being papers from Microsoft Research Asia, which was started in Beijing back in 1998 by a group of Chinese and Taiwanese researchers who were trained in the US and worked in the US before returning to Asia. Today, there are many Chinese researchers who are publishing papers at top research venues. But, the number is still quite small given the large number of universities and researchers that are pursuing computing research in China. Computer Science & Technology is the largest undergraduate major in China and some estimates I’ve heard say there are over 1,000 computer science departments in China and over 1,000,000 computer science majors at a time across these departments. This is huge! The government is clearly making massive investments in computing.

Supercomputing isn't so Super?
One of the big accomplishments Chinese computer science has made given these investments over the last 5-10 years has been in Supercomputing: the very large, high speed machines often used for climate modeling, weapons simulation, etc. A couple of years ago China temporarily had the fastest machine in the world with the Tianhe-1A. This coveted spot on the TOP500 supercomputer list has traditionally been held by either US or Japanese supercomputers, though it changes all the time as new faster computers come into service.

Although getting to the top of the list was a major accomplishment, China’s conquest of supercomputing really didn’t seem to be big news for almost anyone I know in leading computer science departments. Why is that? I think most leading computer scientists believe that although supercomputers are useful for certain problems, this is a technology of the past that will simply improve incrementally with underlying processor improvements (in fact, most supercomputers today use the conventional processors found in desktop computers rather than the special-purpose processors used in the past).

The big innovations in supercomputing have been in the programming models, network interconnects, and most recently in cooling/power usage. But, people seem to see much more important innovation going on in the cloud computing clusters that literally combine thousands of commercial processors together in standard racks connected with traditional networks in huge data centers around the world. This is the technology that powers Google, Microsoft, Apple, Amazon, and the many other web computing giants of the world and is then resold inexpensively to every little web site or mobile phone application that needs to do computing in the cloud. This type of architecture supports a far wider range of applications than supercomputing. Cloud computing is a hot topic in both industrial and academic computer science research and American computer scientists are clearly far in the lead in this area of work.

Academic Publications
In my own subfield of Ubiquitous Computing (ubicomp) and Human Computer Interaction (HCI), China is still in its early stages. Ubicomp has been around since 1991 and in those 20 years China has had almost no presence in the field (for example there were no papers from China at the 2010 UbiComp conference). This year I co-organized the conference with my colleagues at Tsinghua University and we held UbiComp 2011 here in Beijing (link). There were over 300 papers submitted and only 50 were accepted for presentation at the conference (a highly competitive 17% acceptance rate). Although this year we saw 38 papers submitted from China (last year there were only 10), only 3 of these papers with primary Chinese authors were accepted (and all of those were from Microsoft Research Asia). There were many US universities that alone had as many or more papers than all of China (e.g., Carnegie Mellon had seven and UW had four!).

This trend is very similar at other top computing conferences: China had almost no representation 5 or 10 years ago and now there is a smattering of papers (e.g., 1-3 papers/year – out of a 30 paper program – the last couple of years at each of the top systems and networking conferences: SIGCOMM, NSDI, and SOSP). Again, the majority of these papers are coming out of Microsoft Research Asia, not the top Chinese universities.

So we see China starting to be represented at major computing conferences, but Chinese researchers are at this stage no more impactful than those of many smaller countries (e.g., France). Given the large number of universities and researchers pursuing computing in China, the interesting question is whether this is a straight line that will continue its meteoric rise of the last few years (similar to China’s economic growth of ~10% for ten years) or whether China’s impact in computing research is going to start growing at a much more modest rate (similar to many predictions of its economy growing at still fast yet more modest rates).

Research Creativity: Students, Faculty, & Academic Structure
Creativity, innovation, and “design thinking” have been some of the most overused buzzwords bandied about in the US business press over the last 3-5 years, and this has especially accelerated in the few months since the passing of Steve Jobs. In computing research, as well as in industry, creativity and innovation are also important topics. These hard-to-measure attributes are what we all believe lead to “impact,” which is also hard to measure but is what we are all after! Counting papers at top conferences or patents does not measure impact, but people (including me, above) sometimes tend to fall back on this counting exercise, as it is easy to do.

Having interacted with many top Chinese students while here in China, at both MSRA (the top place in China for a computer scientist to have an internship) and at Tsinghua (the top CS department), I’ve had a chance to observe the level of creativity and innovation in these students. We’ve also attracted some of the top design students in China to our lab (in addition to hiring top designers from the US and Europe). I’ve also been lucky to interact with the top Chinese research computer scientists (i.e., folks who already have their PhDs) at MSRA and at the universities.

The simple fact is, the level of innovation and creativity in this cohort is much lower than in similar cohorts in the US. And in fact, the people who do best on the “creativity” scale are almost invariably those who received their PhDs in the US or Europe or worked in the US or Europe. This is not to say that those who haven’t left China for their education aren’t doing good work - as I mentioned above, MSRA is one of the top places in the world for CS research and its researchers are publishing at the top venues - but many of the most successful of these researchers have spent years under the tutelage of computer scientists who were trained in the West, almost going through a 2nd PhD while working at MSRA.

The simple fact is that if you are educated in the Chinese system, from primary school through university, you have far fewer chances to practice being “creative” than if you were educated elsewhere. This is not a genetic trait (as many Chinese educated in the West have clearly shown), but a trait of the Chinese educational system, which is based on over a thousand years of Chinese culture.

There are many articles (link) on how the cultural underpinnings of the Chinese educational system do a good job with the basics (e.g., the students in Shanghai beat the entire world on the PISA test a year ago), but many here in China question whether the pervasive emphasis on memorization, test taking, and a cultural imperative that almost requires copying the teacher (link art article) and the past “masters” leads to a population that cannot think “outside of the box” (link).

Again, this lack of creativity is cultural, and obviously there are folks who don’t fit the system and are creative and innovative (the art scene in China is growing by leaps and bounds). For many years, the top students in China have left the Chinese system for graduate school in the US. Although some of these students arrive in America brilliant and hard-working, many do not show much creativity when they start. They have learned not to question the professor, or others in positions of authority, and they are used to being told what to do rather than coming up with ideas on their own. But many soon rise above this after a few years of practice and have turned into some of the top stars in the field (e.g., my own classmates at Carnegie Mellon, Harry Shum and Qi Lu, are now two of the top executives at Microsoft (links)).

I have personally advised students like this who have gone on to great computing careers, relying on their innovative and creative skills every day. But this was only after 5-6 years in the “American” higher education system. My colleagues have often told me of similar examples. Many Chinese are now also aware of this key difference in our educational systems. The latest trend among middle class and wealthy Chinese is to send their kids to the US for their undergraduate degrees or even their high school education (some 200,000? were studying in the US this year alone link).

Now this trend by itself would cause one to believe that China will overtake the US in computing as this massive cohort of students returns to China after earning their degrees. Although the “sea turtle” trend of returning to China after several years of working in the US continues, it doesn’t appear as common as some would believe. Many Chinese students become very accustomed to what is still an easier life in US cities and often choose to remain in America. A more important “glue” for these students might be the far more streamlined US corporate life (many Chinese companies are still fairly byzantine in their politics and structure, and corruption is still a major problem). In fact, recent reports show that most wealthy Chinese are starting to secure homes and passports in the West, often for the educational opportunities outlined above, but also to avoid environmental degradation and corruption and to gain access to healthcare (link report).

Last Spring I attended a major National Science Foundation workshop on computer science research collaboration with China (http://current.cs.ucsb.edu/nsf-uschina11/). Of the 80 attendees, over half were Chinese who were now professors at American universities. In computing research, many Chinese with US PhDs might be staying in the US for the prospect of working at a better university and with better graduate students than they can in China. Will this change soon?

One of the major differences I’ve noted between Chinese universities (and in fact Chinese organizations in general) and American universities is the power structure embodied in the academic hierarchy. American universities are hierarchical in that Full Professors make decisions about Associate and Assistant Professors, and Associate Professors in turn also make decisions (e.g., tenure) about Assistant Professors. But I’ve also noticed, in the top departments I’ve been in, that the more “senior” faculty understand that a lot of the innovation and best work occurs in the groups led by the “young” Assistant Professors, and we in fact “protect” them so as to allow them to better develop and get this great work accomplished (e.g., we do not give them a lot of tedious committee work to do, and we encourage them to teach advanced courses in their specialized areas rather than large, general undergraduate courses).

In Chinese universities, there is far more power and money concentrated in the hands of the senior faculty. In many universities the Assistant Professors are just that – they assist a senior faculty member and have no true independent agenda of their own. In a fast moving field like computer science, I believe this structure is bound to fail and cannot keep up with the changes in technology that occur so rapidly. Certainly more rapidly than the 10 years or more it will take a hotshot young faculty member to rise to the top of that hierarchy.

Today’s computing technology is nothing like it was 10 years ago! I believe this structural impediment makes it hard for anyone to name a computer science researcher in a Chinese university that they would say is one of the top in the world in their subfield (other than the few famous names, e.g., Andy Yao – a Turing Award winner, who have been “imported” to Chinese universities).

This means that unless the Chinese universities change this system, it will take many years (15-20) before their CS departments could even have a chance of being stocked from top to bottom with world-class computer scientists. And that would assume they start producing the top scientists here in China (which hasn’t happened yet) or start importing them from abroad (only a few have come so far). Again, this is not to say there aren’t good people here already. There are plenty of good people working in Chinese universities. For example, Prof. Yuanchun Shi, my co-chair for UbiComp 2011 from Tsinghua, is doing lots of great research in her group at Tsinghua. These folks are just spread thin and not a single Chinese computer science department has the strength of even a top 25 or maybe even a top 50 computer science department in the United States. This will be hard to change anytime soon without a massive change in hiring practices as well as in how those people are treated when they come on board.

Startups
Although academic computer science research in China isn’t yet all it can be and has some major impediments to its continued improvement, I believe the start-up scene is a bit healthier. Although I am not an expert on this, I try to keep up by following the top China tech blogs and writers on twitter (cite niubi, wolfegroupasia, tricia, kaiserkuo, affinitiy china, china law) and pay attention to what is going on at the key start-up events (e.g., TechCrunch Beijing was the most recent such activity).

I’ve also spent time chatting with and reading the works of folks who do study the start-up scene closely, such as Vivek Wadhwa (@wadhwa), professor at Duke and Stanford, who studies high-tech entrepreneurship in Silicon Valley and around the world. Professor Wadhwa has commented recently on the healthy start-up scene he has encountered while traveling in China (link), noticing that this culture is starting to come to terms with the need to try, fail, and start over again, as has fueled the amazing rise of Silicon Valley’s companies.

The conclusion I’ve come to from watching the Chinese start-up scene is that 1) it is vibrant, 2) some major early movers, especially on the Internet (e.g., Baidu, Alibaba, Sina), have already amassed fairly dominant positions in their niches, as happened in the US (though, as Yahoo has shown most recently, these positions can be lost easily), and 3) the amount of venture funding and the number of startups are both increasing rapidly.

In addition to these traditional spaces for innovation, there are other cool things that happen in China that are an outgrowth of its manufacturing innovation. In particular, the entire Shanzhai market (link), which started with fake name-brand goods, including phones and purses, has quickly moved into making novel new products. Again, these tend to be useful tweaks (e.g., multiple SIM card phones, new shapes, etc.) rather than major innovation. This might be where lots of the creative engineers end up in China, as these types of folks may not have conformed to the rigid educational system required to get into the elite schools.

There is innovation in the China computing startup world, but the type of innovation that happens in start-ups and in industry tends not to be the innovation that will pay off for the entire computing field in 10 years (e.g., the invention of the internet and many of the other computing advances I noted in the introduction to this article). Start-ups tend to take ideas that have already been floating around for a while and repurpose them to a new problem or incrementally improve on them. China’s start-ups are especially known for this incremental improvement strategy. As noted tech environmental crusader Peggy Liu (@shanghaipeggy) wrote today on Twitter, “China is not good at radical innovation, but it's great at tweakovation.” This quote exactly captures the type of activity happening most often in China’s startup scene.

This criticism of copying and tweaking rather than innovating is probably overblown, but it continues to be made in and about the Chinese computing industry. One of the biggest names in China tech funding, Kai-Fu Lee, founder of Innovation Works and former Google China head, Microsoft Research Asia head, and all-around Chinese high-tech success story (from Taiwan), now has the nickname in China of “Start-Copy Lee” (check for proper translation) for the propensity of companies in his venture portfolio to simply copy a popular Western web site and give it some minor Chinese characteristics. For instance, there were hundreds of Groupon clones in China just a few months ago.

So although start-ups in China might be healthy, if a little less innovative than in the West, I do not think this is a fundamental problem for Chinese computing. The bigger question is whether they can really make the type of fundamental advances in the future that in the past led the US computing industry to its dominance. And can the Chinese make those advances if those advances are not first taking place in academic research? I do not believe they can, and I therefore encourage the Chinese to keep upgrading the educational system and infrastructure, but with more than just increased funding. I believe the structure needs to change (see below).

Patents
One argument for China’s future dominance in the fundamental underlying technologies of computing is the large Chinese patent portfolio. The NY Times article pointed out how China has overtaken Europe in number of patents filed and is catching up to the US and Japan. What the article fails to mention is that many, many people believe that many of these Chinese patents are bogus (link Vivek, China Law blog) and come out of 1) a quota system that requires organizations to produce a certain number of patent filings per year, regardless of whether they are actually any good, and 2) a tendency to copy foreign patents, make minor changes to them, and then use these as trade barriers against western companies trying to do business in China (link China law blog). Leaving this type of information out of the NY Times article really distorts the patent story. When paired with the lack of strong intellectual property rights protection in China, the full patent story leads one to believe that China will not be able to innovate in the future.

How China Can Reach its Computing Potential
My analysis above might leave you with the opinion that I think China’s computing field is going nowhere fast. That is far from the truth. I think China will continue to improve in computing for two major reasons. First, computing in China will improve simply due to China's massive size: (1) in 1.3B people there are going to be a lot of great ones, no matter what barriers you put in their way and (2) the domestic market by itself will be huge and thus a great opportunity! Second, the large investment in technology research funding coming from the government (growth on the order of 10%/year for 10 years) will allow a lot of researchers to carry out many ambitious projects. But, I believe that instead of fearing China, we should see that China reaching its potential in computing could change the world in a very positive way and it is something we should try to help with.

China is Part of the Solution
Why do we want Chinese computing to succeed? I believe that the major problems the US faces are also faced by the rest of the world, and especially by China. China is key to helping solve these problems, and by helping China’s research and education system in computing, we have a better chance of creatively solving these problems together. These are problems in:
  • Sustainability: Maintaining the environment, and stopping global warming in particular
  • Education: Improving education for all, in both the basics as well as in creativity and innovation
  • Healthcare: Creating a healthcare system that will care for an aging population (North America, Europe, and China all suffer from this) as well as one that will serve all citizens at a reasonable price
All three of these problem areas will have solutions that involve government, policy, and pricing. Yet they also are problems where major technology innovations, especially computing technology innovations, can make a major positive impact. By working together with China on these problems we can help improve the world.

World Lab
In light of this view, I’ve been working the last few months on trying to create a new, multidisciplinary research institute that is jointly housed between a major Chinese university and an American university. This World Lab will become known as the place for risk taking, breaking the mold, inventing the future, and tackling the major problems facing the world. We will apply a new methodology I term “Global Design” to find a balance between design and technology, between human-centered & technology centered approaches, between academia & industry, and between Eastern and Western culture. The World Lab will push the boundaries of what is possible and invent the future today. This institute will help train the students and leaders of tomorrow’s universities and companies to be free thinkers who can create the solutions that society will need to solve these challenging problems.

I believe China’s rise in computing is remarkable, but the future is not assured. As a computer scientist I support helping China improve in computing because I believe it will help the world as well as the population of China. The problems are complex and success is not assured, but together I think we can create a better world.


Disclaimer: The opinions set out in this article are those of James Landay and do not represent the opinions of the University of Washington, Microsoft Corp., Intel Corp, or anyone else (unless they decide to say so – which I’d appreciate).

Acknowledgements: Thanks to Ben Zhao from UCSB (@ravenben) for some of the data on top networking and systems conferences. Thanks to Frank Chen (@frankc), Lydia Chilton, Aaron Quigley (@aquigley), Robert Walker, and Sarita Yardi (@yardi) for helpful comments on this essay.


My Background


Unlike other computing academics who have commented on Chinese computing, I’ve not just dropped into China for a week or two here or there and developed an impression. I’ve actually been living here full time for 2½ years. In that time I’ve helped build a new research group at Microsoft Research Asia (link), taught a course at Tsinghua University (link), co-organized a major international computing conference (link), started a major computing lecture series/symposium on new uses of computing (link), traveled to many different universities to speak, visit, and meet the students and faculty, and attended several meetings of the top computing faculty in China (a few of which were also attended by their US counterparts; link: http://www.nsfc.gov.cn/Portal0/InfoModule_479/30695.htm).

I’ve also thrown myself into reading much of the press and blogs on innovation and start-ups in China and I’ve tried to go to events here in Beijing on these topics when I could. I also chat with others about these topics whenever I get a chance. As an expat you can easily meet some of the movers and shakers in this circle even when living in a city of 20M+.

In addition to my time in China, I think I’ve also been lucky to have been at the center of some of the top places in computing over the last 20 years. I obtained my PhD in Computer Science at Carnegie Mellon University (link). CMU is ranked by most as one of the top departments in the world. I was a faculty member and received tenure in CS at UC Berkeley (link), another one of the world’s top departments. Until coming to China, I was a faculty member in Computer Science at the University of Washington (link), another top department. At UW we’ve built one of the top programs in the world in Human-Computer Interaction and Design (link), which is a field that is at the forefront of envisioning and building the future of computing technology.

I also have industrial experience. In addition to the last 2½ years at Microsoft Research Asia, unquestionably the best computing research organization in all of Asia, I was the co-founder and CTO of a Silicon Valley-based start-up (NetRaker) while on the faculty at Berkeley, and I ran a ubiquitous computing research lab for Intel in Seattle for 3 years (link). The researchers at the Intel lab invented many leading-edge technologies in that time, including the city-scale, beacon-based location capabilities that were originally found on the iPhone and every single smart phone since (link), activity inference technology that uses sensors to tell what physical activities you are doing in the real world (e.g., running, walking, biking, taking stairs, etc.), which is just starting to show up in products in its most basic form (e.g., the FitBit (link)), and other very cool technologies that you will hopefully see in products some day in the future.

So, I think I’ve got a pretty significant amount of experience in computing research at the top academic institutions, industrial experience through my time at Intel and Microsoft, and start-up experience through NetRaker, which, when combined with my time and study in China, puts me in a fairly strong position to comment on where China is in computing and where it might be going.


          Human Computer Interaction Consortium (HCIC) 2009        
I went to the HCIC '09 Workshop in Fraser, Colorado last week. It was a really great experience. UW's dub institute was recently admitted to this member-only organization, and the keys to this workshop are the small size (~75 attendees), the top researchers attending (about half are top/senior folks in the field and the other half are the top graduate students that their departments have chosen to send), and the 90-minute sessions that cover ONE paper (yes, one! -- see Leysia Palen of the U. Colorado ponder The Future of HCI at left). It also includes a lot of time for informal discussion while walking, taking a break, or skiing. I hadn't been since graduate school and forgot how great a venue it is. I had great chats with lots of folks, including my own former PhD advisor (Brad Myers -- on left below).

Part of the excitement of the meeting for me was to see so many of my former graduate students taking an active part in the organization (Scott Klemmer), the talks (Jason Hong), and the discussion (Mark Newman, Jeff Heer, Scott, and Jason). It was also great to see one of my current students (Jon Froehlich) take it all in and see how he might be just like these former students soon. I felt like a proud father seeing his son ski down a hill for the 1st time (which I did indeed experience with both of my sons in a major way on this trip -- nothing like a 3 year old skiing and a 7 year old challenging himself on intermediate runs!)

It was great to present my talk on Activity Based Design to this group of strong researchers. I doubt the work would have had as good an audience had I presented at CHI or another major conference, due to the parallel tracks.

Some comments and questions about HCIC: if you made the workshop more open to others, you'd lose the benefits of the small size. If you added more talks so that more of the attendees could participate, you would lose these great 90-minute sessions that you simply don't get at conferences. I guess we shouldn't muck with it. Any ideas?
          Comment on HCI Week 1: Introduction to HCI by allar05        
I wanted to add a usability bug-hunt example that I think is extremely relevant, and although I experience it myself, I have to admit that I did not think of it of my own accord (I must credit that to Donald H. Norman). I am speaking of the landline office phones that "bug the hell" out of people everyday. Modern office phones (phone systems) are designed to perform a variety of tasks ranging from transferring calls to colleagues, putting a caller on hold, call-back, etc., and these functions can be activated by dialing the right sequence of numbers or symbols (#0123456789*) - using the dial pad that we all know so well and the function it normally has. The problem is that the buttons used to call the advanced functions are not labeled for that purpose, and there are no indications on the device that instruct us how to use them. Subsequently, the user either (a) gives up his/her quest for accessing the functions and sticks to the basic use, (b) takes the tedious time to memorize some or all the functions from the manual --> becomes the resident expert of the office, or (c) relies on tacit knowledge obtained by other partial or full expert users. From my experience at the Centre for Regional and Economic Development, the only "advanced function" I can recall is the function that transfers an incoming call from any idle phone to my own - just press 66 on the key pad - "just do that stupid". -right!! :-) Extremely poor mapping! ==> terrible design - poor human computer interaction. That was my bug-hunt contribution, please excuse spelling mistakes.. I only have one functioning hand at the moment ;) Best regards, Allan Larsen
          17 May 2011 : VU Fall Midterm Current Papers (May 2011)        

ACC501 VU Midterm Current Paper (May 2011)

Total 59 MCQs
10 questions

What does the optimal credit policy state? (3)
What is the difference between market value and book value? (3)
How can the cost of debt be measured? (3)
Define benchmarking and its methods. (5)
Find out the portfolio. (5)
Find out the capital gain, dividend yield, and total percentage of return. (5)
Describe the different types of a firm's inventory in a retail business. (5)
What is the best cash policy? (Lec 41, page 219) (5)

Two questions were also from the last lectures.
MCQs were not from past papers or online quizzes; just 10 out of 59 were. All MCQs were new and very conceptual.

CS101 VU Midterm Current Paper (May 2011)
total 26 questions

20 mcqs
6 short questions

2 questions of 2 marks
2 questions of 3 marks
2 questions of 5 marks

1. difference between batch mode and interactive mode

CS304 VU Midterm Current Paper (May 2011)
Q.1 Can a constant object access the non-constant member functions of the class?

Q.2 Give at least two problems that we should check for when overloading the assignment operator ("=") in the String class.

Q.3 Give C++ code to overload the unary "--" operator for the Complex number class.

Q.4 What is simple association? Explain it with the help of an example.

Q.5 Explain the difference between a static variable of a class and a non-static variable, with the help of an example.
Stream extraction and stream insertion

CS401 VU Midterm Current Paper (May 2011)
70% of the MCQs were from past papers
2 questions of 5 marks

a) Describe MOVS and CMPS instructions
b) Describe Local Variables
3 marks questions

a) Explain LES and LDS.
b) If AX is pushed onto the stack, in the process decrementing the stack, what will be the behavior of SP?

CS402 VU Midterm Current Paper (May 2011)
Question No: 27 ( Marks: 2 )
Differentiate between regular and non-regular languages.

Ans: The main difference between regular and non-regular languages is as follows:

1. A regular language is one that can be expressed by an RE, whereas any language that cannot be expressed by an RE is a non-regular language.

Question No: 28 ( Marks: 2 )
What is meant by a "Transition" in FA?

Question No: 29 ( Marks: 2 )
What are the halt states of PDAs?
Ans:
The halt states of a PDA are as follows:
The ACCEPT and REJECT states are both halt states.
The REJECT state is like a dead non-final state.
The ACCEPT state is like a final state.

Question No: 30 ( Marks: 2 )
Identify the null productions and nullable productions from the following CFG:
S -> ABAB
A -> a | /\
B -> b | /\

Question No: 31 ( Marks: 3 )
Describe the POP operation and draw the symbol for the POP state in the context of a pushdown STACK.

Question No: 32 ( Marks: 3 )
What does the following tape of a Turing machine show?
Ans:
Arbitrary Summary Table:

The arbitrary summary table shows that the trip from READ9 to READ3 does not pop one letter from the STACK; it adds two letters to the STACK.

Row11 can be concatenated with some other Net-style sentences, e.g. Row11 Net(READ3, READ7, a) Net(READ7, READ1, b) Net(READ1, READ8, b), which gives the non-terminal Net(READ9, READ8, b).

The whole process can be written as:

Net(READ9, READ8, b) → Row11 Net(READ3, READ7, a) Net(READ7, READ1, b) Net(READ1, READ8, b)

This will be a production in the CFG of the corresponding row language.

Question No: 33 ( Marks: 3 )
Find Pref (Q in R) for:

Q = {10, 11, 00, 010}

R = {01001, 10010, 0110, 10101, 01100, 001010}

Question No: 34 ( Marks: 5 ) ****
Consider the Context Free Grammar (CFG)

S → 0AS | 0

A → S1A | SS | 1a

Show that the word 0000100 can be generated by this CFG by showing the whole derivation starting from S



Question No: 35 ( Marks: 5 )
Consider the language L which is EVEN-EVEN, defined over Σ = {a,b}. Into how many classes may L partition Σ*? Explain briefly.

Question No: 36 ( Marks: 5 )
What are the conditions (any five) that must be met to know that a PDA is in conversion form?
Ans:
Conversion form of PDA:

A PDA is in conversion form if it satisfies the following conditions:

1. The PDA must begin with the sequence START → PUSH $ → READ.

2. There is only one ACCEPT state.

3. Every edge leading out of any READ or HERE state goes directly into a POP state.

4. There are no REJECT states.

5. All branching, deterministic or nondeterministic occurs at READ or HERE states.

6. The STACK is never popped beneath the $ symbol.

7. No two POPs exist in a row on the same path without a READ or HERE.

8. Right before entering ACCEPT, the $ symbol is popped out and left.

Question :

Define the Myhill-Nerode Theorem.

Question:
How do you differentiate between wanted and unwanted branches while deriving a string from a CFG?

Question:
What is the difference between concatenation and intersection of two FAs, and between union and addition of two FAs?

Question:
Use Pumping Lemma II to show that the following language is not regular:

L = {a^(n^2) ; n = 1, 2, 3, 4, …}

Question:
Draw a Moore machine equivalent to the following Mealy machine.

Question#1 Consider the CFG ( 5marks)
S--> bS | aX | ^

X--> aX | bY | ^

Y--> aX | ^

Derive the following string from CFG. Show all steps

baabab , ababaab

Question#2 Construct corresponding CFG for the given language (5 mark)

(1) All words of even length whose length is not a multiple of 3.

(2) Palindrome (both even and odd palindrome).

Question #3 Write the CFG for the following RE (5 Marks)
(a+b)* aa (a+b)*

Question #4 What does the following arbitrary summary table show? (3 Marks)

FROM WHERE | TO WHERE | READ WHAT | POP WHAT | PUSH WHAT | ROW NUMBER
READ9      | READ3    | b         | b        | abb       | 11


Question #5 Is the following CFG ambiguous? How can you remove the ambiguity? (3 Marks)

S → aS | bS | aaS | ^

Question #6 If L1, L2, L3 are any three finite languages over Σ = {a,b}, when will (L1 ∩ L2) ∪ (L2 ∩ L3) ≠ Ø? (3 Marks)

Question #7 Construct an RE for the language having words of even length over Σ = {a,b}. (2 Marks)

Question #8 A Pushdown Automaton consists of an input TAPE with ---------- many locations in one direction. (Marks 2)

Question #9 Write an alternative form of this production. (2 Marks)

Question #10 What is the first step when you want to write an RE corresponding to a TG? (2 Marks?)

Question: 31 (Marks 1)
Can you say that the language of strings of 0’s whose length is a perfect square is not regular?

Question: 32 (Marks 1)
Question: 33 (Marks 2)
Is the following an FA or TM?

Question: 34 (Marks 2)

If L is the language that accepts even-length strings, then what strings will L^c (the complement of L) accept?

Question: 35 (Marks 3)
Define the Myhill-Nerode theorem.

Question: 36 (Marks 3)
If L1, L2, and L3 are any three finite languages over Σ = {a,b}, then when will

(L1 ∩ L2) ∪ (L2 ∩ L3) ≠ Ø hold?

Question: 37 (Marks 3)
How do you differentiate between wanted and unwanted branches while deriving a string, in the context of a CFG?

Question: 38 (Marks 5)
What is the difference between concatenation and intersection of two FAs, and between union and addition of two FAs?

Question: 39 (Marks 5)
Use Pumping Lemma II to show that the following language is not regular.

L = {a^(n^2) ; n = 1, 2, 3, 4, …}

Question: 40 (Marks 10)
Draw Moore Machine equivalent to the following Mealy Machine.

Question: 41 (Marks 10)
Write CFG of the following PDA. Also write the stack alphabet and tape alphabet.



Question: 1
Use Pumping Lemma II to show that the following language is not regular.

L = {a^(n^2) ; n = 1, 2, 3, 4, …}

Question: 2
What is the difference between concatenation and intersection of two FAs, and between union and addition of two FAs?

Question: 3

How do you differentiate between wanted and unwanted branches while deriving a string, in the context of a CFG?

Question: 4


Can you say that the language of strings of 0’s whose length is a perfect square is not regular?

CS403 VU Midterm Current Paper (May 2011)

CS403 - Database Management

MCQs = 20, mostly from past papers

Q21:-What do you know about partial dependency? (2)

Q22:-Define the domain of an attribute. (2)

Q23:-Define relationship type. (3)

Q24:-Briefly describe the difference operation in relational algebra. (3)

Q25:-Explain the salient features of a foreign key with the help of an example. (5)

Q26:-Consider the relation R with four attributes A,B,C and D and the functional dependencies

(A,B)->(C,D) and C->D

a) Up to which normal form is the above relation normalized?

b) Write the PK of relation R. (5)

CS408 VU Midterm Current Paper (May 2011)

MIDTERM EXAMINATION
SPRING 2011 (15 MAY 2011)
CS408- HUMAN COMPUTER INTERACTION
Time: 60 min
Marks: 40

Total 26 Questions
20 x MCQs
2 x 2 Marks Questions
2 x 3 Marks Questions
2 x 5 Marks Questions

Few MCQs which I remembered are as under:-

Question No: 1 ( Marks: 1 ) - Please choose one
 ____________ is a term used to refer to an attribute of an object that allows people to know how to use it.
       ► Visibility
       ► Affordance
       ► Constraint
       ► None of these

Question No: 2 ( Marks: 1 ) - Please choose one
 What is a semantic network?
       ► A model of long-term memory
       ► A record of our memory of events
       ► The part of the brain which allows us to remember things
       ► A mechanism for improving memory

Question No: 4 ( Marks: 1 ) - Please choose one
You can load a VCR tape the right way because of _____________.
       ► Physical constraints
       ► Logical constraints
       ► Cultural constraints
       ► None of these

Question No: 5 ( Marks: 1 ) - Please choose one
A mouse button invites pushing by the way it is physically constrained in its plastic shell. This is an example of the ___________ design principle.
       ► Visibility.
       ► Affordance
       ► Mapping
       ► None of these



Question No: 17 ( Marks: 2 )
What are Design Edge Cases?

Question No: 18 ( Marks: 2 )
What are the Pointing Devices?

Question No: 19 ( Marks: 3 )
Define the following in relation to Ethnographic Interviews:
  • Early Phase
  • Mid Phase

Question No: 20 ( Marks: 3 )
Define the following in the context of the resizing button given at the bottom-right corner of any window:
  • Natural Mapping
  • Feedback

Question No: 22 ( Marks: 5 ) (Lucky enough to get additional 2 marks due to repetition :P)
What are the pointing devices? Explain the touch pad as a pointing device. [2+3]

Question No: 23 ( Marks: 5 )
Explain following:-

          9th Interfaces and Human Computer Interaction 2015 conference in Madeira, Portugal        
CALL FOR PAPERS IHCI 2015 – Deadline for submissions: 30 January 2015
9th International Conference on Interfaces and Human Computer Interaction 2015
Las Palmas de Gran Canaria, Spain, 22 – 24 July 2015 (http://www.ihci-conf.org/)
Part of the Multi Conference on Computer … Continue reading
           Human Computer Giraffe Interaction: HCI in the Field         
Pascoe, Jason and Ryan, Nick S. and Morse, David R. (1998) Human Computer Giraffe Interaction: HCI in the Field. In: UNSPECIFIED. (Full text available)
          Dorkbot Vienna #9: Martin Kaltenbrunner (reacTIVision, TUIO, reactable)        
I’m hosting Dorkbot Vienna #9… and our guest will be Martin Kaltenbrunner. (Thanks to the Metalab!) Martin Kaltenbrunner is a Human Computer Interaction Designer, currently finalizing his Ph.D. at the Pompeu Fabra University in Barcelona, Spain. Recently he has been mainly working on the interaction design of the reacTable – a tangible modular synthesizer based […]
          RE        
@Kroc So removing spyware makes you a UI design expert? As a guy who actually writes software and has to deal with human computer interaction, I'm going to say it's _nowhere_ near a fundamental design flaw. Your browser analogy is completely flawed. Icons keep getting bigger and bigger, mostly for aesthetic reasons. People have huge displays and hate tiny little icons. This trend is going to continue. Why not take advantage of those extra pixels those icons are using? Please stay with fixing computers and leave HCI to people who actually know what they're doing and whose opinions actually matter.
          Robert Aish talks DesignScript at AIANY Center for Architecture, Dec. 12        

Join Robert Aish, Director of Software Development at Autodesk, as he discusses “DesignScript: Integrating Multiple Disciplinary Tools to Create a New Design Environment” at the Center for Architecture this Wednesday, Dec. 12 from 6-8pm. The event is free but space is limited so reserve a seat online.

Location:
Center for Architecture, Tafel Hall (Lower Level).
536 LaGuardia Place, New York, NY
Admission: FREE; AIA CES Learning Units: 1.5

This program targets firm principals, studio leaders, and other professional staff focused on innovative approaches to design. This presentation will discuss how DesignScript is addressing these issues and will include a walkthrough of the recently released version available on Autodesk Labs.

Computational design is well established as an essential aspect of innovative architectural design, building engineering and digital fabrication. As we move to the second generation tools, new challenges are emerging: How to make computational design tools which are suitable for a range of programming skills from the novice to the expert? How to build systems that scale from simple to complex projects? How to progress from discipline specific applications to tools that support multi-disciplinary design collaboration at a computational level?

Robert Aish is a Director of Software Development at Autodesk responsible for the development of DesignScript. He previously developed Generative Components at Bentley and is a co-founder of SmartGeometry. He is a graduate of the Royal College of Art, London and has a Ph.D. in human computer interaction.
 


          Hidden Figures        
Taraji P. Henson, Octavia Spencer, 
Janelle Monae, Kevin Costner, Kirsten Dunst,
Jim Parsons, Mahershala Ali

"Meet the Women you Don't know,
behind the Mission you Do."


If it weren't for this truly inspirational movie, we would not have known about these brilliant women who not only were instrumental in sending the first American astronaut into orbit around Earth but also laid the groundwork for the numerous successful space missions of NASA.

This is the true story of three pioneering African American women who were part of the 'human computers' pool in the 1960s during the early stages of the space agency. In a period of racial segregation, amidst the fierce space race between the US and the USSR, they proved that anything was possible despite the challenges (racism, gender inequality) they faced not only at work and in school but also in their own community.

Katherine Johnson, Dorothy Vaughan, and Mary Jackson are very good role models not only for African Americans but for the human race. Geniuses who were skilled in mathematical calculations, deciphering the IBM computer code, and achieving feats in engineering - they represent the triumph of the human spirit.

The elaborate set design is reminiscent of the 1960s, from the wardrobe, the cars, and the NASA office complex to the music, through the collaborative efforts of Pharrell Williams and Hans Zimmer. The nicely compiled bouncy soundtrack gave the film its light and glossy tone.

Taraji P. Henson, Octavia Spencer, and Janelle Monae thoroughly took center stage with credible depictions of their multi-dimensional real-life characters. They were funny and lively when happiness abounded, and disappointed, sad, and crestfallen when faced with adversity. Great performances.

I do have misgivings about how the 'white characters' were portrayed as racists and misogynists. While it is good to celebrate and recognize the efforts of these 'human computers' and their contribution to the space program, doing it at the expense of other equally qualified employees who just happen to be white and are portrayed as 'villains' is not fair at all.

So although these hidden figures were 'unmasked' and their long-overdue story is well narrated through this movie, I believe NASA owes its success to the collective efforts of all these hard-working people, regardless of their position, race, and gender.

          229 EE Women in Science: Hidden Figure Katherine Johnson, NASA's Human Computer        
           Human computer interaction using gestures for mobile devices and serious games: A review         
Spanogianopoulos, Sotirios and Sirlantzis, Konstantinos and Mentzelopoulos, Markos and Protopsaltis, Aristidis (2014) Human computer interaction using gestures for mobile devices and serious games: A review. In: 2014 International Conference on Interactive Mobile Communication Technologies and Learning (IMCL2014). IEEE pp. 310-314. (doi:https://doi.org/10.1109/IMCTL.2014.7011154 ) (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided)
          Recent Events        

Last week I went to Super Freelancing - an event organised by the wonderful Super Mondays crew. You can even see my wee blonde head in the audience. Super Mondays is a creative and IT community in the North East of England which meets once a month and hosts a selection of speakers from across the industry. As I have recently branched out into the world of freelancing - to tide me over until I decide what I'm doing with my life - this month's event was of particular interest to me.

Paul Easton of Easton Media was first up, who gave us some hints and tips on using PR to our advantage. He urged us not to underestimate the power of good old fashioned newspapers and journalists, and some provoking questions from the audience resulted in us all learning what to avoid when offered a feature-and-ad combo.

Paul was followed by Laura Maddison of Altitude Recruitment, who gave some insider information on presenting our CVs. Always valuable advice. Rob Lavendar followed her, giving a talk titled "The Freelancer's Toolkit". I found this very interesting as it laid out some of the key tools and programs which aid the life of a freelancer. He has kindly uploaded his keynote here, which I highly recommend you check out. Lee Simpson was last up with tips on how to generate a passive income. Very interesting but not really applicable to me, unfortunately.

The following Thursday, I attended the Design PhD Conference at Northumbria University. The keynote speaker was Mike Press, a lecturer of mine from Dundee University, so it was nice to catch up as well as hear his lecture. He showed and described a broad selection of work from jewellers to animators, each of whom had made international impacts in their subjects, from human computer interaction to cleaning up the planet. This brought the audience to the conclusion that the way ahead for design is in Social Innovation. You can read more about the conference and the lectures at the link above.

The past week has definitely given me some food for thought.
          How women were one of the first computers        
Back in the 1940s and 1950s, computers were people, not machines. And one group of these human computers worked at a NASA research lab in southern Virginia. An upcoming movie, Hidden Figures, focuses on how three of these human computers helped with … Continue reading
          7: Web RTC and Designing Realtime Experiences        

In episode 7 of the web platform podcast, "Web RTC and Designing Realtime Experiences", we talk with AgilityFeat (http://agilityfeat.com/), a design and development firm in the US, Costa Rica, Nicaragua, and Honduras. AgilityFeat has not only been building out real-time apps for a while now, but is also actively contributing back to the community by speaking at events, distributing a RealTime.com newsletter, and more.

 

Web RTC (http://www.webrtc.org/) is "a free, open project that enables web browsers with Real-Time Communications (RTC) capabilities via simple JavaScript APIs". It is a peer-to-peer communication tool, and it's been around for a while. Contrary to popular belief, Web RTC is not just video & chat in the browser. It is more than just that: it has data channels, screen sharing, streaming, and much more.
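For a flavor of those JavaScript APIs, here is a minimal sketch (in TypeScript) of opening a peer-to-peer data channel in the browser. Signaling, that is, exchanging the offer/answer and ICE candidates between peers, is assumed to happen elsewhere; sendToSignalingServer is a hypothetical stand-in for that layer.

// Hypothetical signaling helper (e.g., posts the description over a WebSocket).
declare function sendToSignalingServer(desc: RTCSessionDescription | null): void;

// Create a peer connection; the STUN server helps peers discover their
// public addresses so they can connect directly.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// Data channels carry arbitrary application data peer-to-peer,
// not just audio and video streams.
const channel = pc.createDataChannel("chat");
channel.onopen = () => channel.send("hello, peer!");
channel.onmessage = (event) => console.log("peer says:", event.data);

// Create an offer and hand it to the signaling layer.
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToSignalingServer(pc.localDescription));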

 

Web RTC is an evolving standard for realtime app development and is gaining popularity quickly in the realtime web community. More browsers are starting to implement it, and AgilityFeat has seen the capabilities & usefulness of Web RTC in developing the user experience of realtime applications. In this episode AgilityFeat discusses how they approach designing for browsers that don't support it and how they use Web RTC effectively in their applications.

 

Christian Smith (@anvilhacks) and Erik Isaksen (@eisaksen) host this episode with guests Allan Naranjo (@OrangeSoftware), Mariana Lopez (@nanalq), & Arin Sime (@ArinSime). The AgilityFeat team talks with us about the user experience considerations in building realtime applications and the technologies involved.

 

Allan Naranjo (@OrangeSoftware) is a core member of the development team at AgiltyFeat. He is a leader in creating detailed mobile experiences with heavy client side frameworks. Allan was the winner of  ‘The  Access Innovation Prize’ in 2012 for one of his Facebook Applications.

 

Mariana Lopez (@nanalq) is the UX lead at AgilityFeat. She designs real-time applications for clients across a variety of industries.  Mariana studied Human Computer Interaction at Carnegie Mellon University, and is also a professor of Interaction Design at the Universidad Veritas (Costa Rica) and Universidad de Costa Rica.

 

Arin Sime (@ArinSime) is co-founder of AgilityFeat. Arin has over 16 years of experience as a developer, entrepreneur, consultant, and trainer for everything from small startups to Fortune 100′s and federal agencies.

 

Resources

http://www.realtimeweekly.com

http://agilityfeat.com/blog

http://iswebrtcreadyyet.com

http://techcrunch.com/2014/06/27/google-hangouts-will-no-longer-require-a-plugin-for-chrome-users/

http://www.agilityfeat.com/blog/2014/05/real-time-ux-design-video/

http://www.nojitter.com/post/240168527/webrtc--the-good-and-the-bad

https://plus.google.com/u/0/communities/106044262972906929746/stream/8faf729a-47a6-48d5-810f-e3f261ff585a

https://www.accessnow.org/blog/2012/12/11/first-annual-access-innovation-awards-prize-winners-announced

http://bloggeek.me/amazon-fire-phone-webrtc/

http://www.realtimeweb.co/

http://youtu.be/vvg_uFEu9Kk

http://webrtchacks.com/

http://learnfromlisa.com/learn-webrtc/

http://www.html5rocks.com/en/tutorials/webrtc/basics/

http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/

http://www.html5rocks.com/en/tutorials/webrtc/datachannels/

https://developer.mozilla.org/en-US/docs/Web/Guide/API/WebRTC/Peer-to-peer_communications_with_WebRTC

https://github.com/html5rocks/www.html5rocks.com/tree/master/content/tutorials/webrtc


          Genevieve Bell on moving from human-computer interactions to human-computer relationships        

The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.

This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation.

Here are some highlights:

AI’s place on the wow-ahh-hmm curve of human existence

I think in some ways, for me, the reason of wanting to put AI into a lineage is many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Curve, the notion of, at first you don’t talk about it very much, then the arc of it's everywhere, and then it goes to the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that.

At the moment, if you read the tech press, the trade presses, and the broader news, AI's simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, there are being raised both the traditional fears of technology: is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us? Then the reasonable questions about what models are we using to build this technology out. When you look across the ways it's being talked about, there are those three different factors. One of excessive optimism, one of a deep dystopian fear, and then another starting to run a critique of the decisions that are being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology.

Looking beyond the app that finds your next cup of coffee

I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems.

The reality is 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like.

In the fields that I come out of, we've talked about human-computer interactions for about as long as we've talked about AI, and those interactions have really sat inside one paradigm, about what we should call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that's old command prompt lines or Google search boxes, it is effectively the same thing. We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that's living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants.

There's a new class of intelligent agents that are coming, and I wonder sometimes if we move from a world of human-computer interactions to a world of human-computer relationships that we have to start thinking differently. What does it mean to imagine technology that is nurturing or that has a care or that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different than the app that finds you your next cup of coffee.

There’s a lot of room for good AI conversations

What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. I think, like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, I think a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation because I think they're all part of it.

I worry that this just becomes a conversation of technologists to each other about speeds and feeds and their latest instantiation, as opposed to saying, if we really are imagining a form of an object that will be in dialogue with us and supplemental and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing.

I also think it's emerging, and I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique of it. How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask questions about what pieces of data they are weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train objects.

For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck in reading many of the histories of AI, but also of the contemporary work, of how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing so we should just keep doing it? I realized that sitting inside those are always judgements about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines.

There's something about what would it mean to start asking a set of theoretical questions that come out of feminist theory, out of Marxist theory, out of queer theory, critical race theory about what does it mean to imagine normal here and what is and what isn't. Machine learning people would recognize this as the question of how do you deal with the outliers. I think my theory would be: what if we started with the outliers rather than the center, and where would that get you?

I think the fifth thing that's missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is, can we give you that moment where we de-familiarize something. How do you take a thing you think you know and turn it on its head so you go, 'I don't recognize that anymore'? For me, that's often about how do you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things have we tried to machine learn on—so, what other things have we tried to use natural language processing, reasoning, induction on to make into supplemental humans or into things that do tasks for us?

Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb sniffing dogs, Koko the gorilla who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that's sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that's actually good. It means there's a lot of room for good conversations.


           Principles of Human Computer Interaction Design: HCI Design         
Valverde, Raul (2011) Principles of Human Computer Interaction Design: HCI Design. LAP Lambert Academic Publishing. ISBN 9783845414621
          BibSonomy gets a new front-end        
We're currently working hard on a new user interface for BibSonomy. In the past, we've had lots of hints on how to optimize the layout, accessibility, and usability. We're scientists and programmers, but we aren't product designers or experts in human computer interaction. So we've decided to use a framework that helps us implement all these necessary improvements.

There are lots of front-end frameworks out there. We’ve chosen Bootstrap for the following reasons:

  1. Bootstrap is open source and freely available
  2. It supports responsive web design. It is very hard for the community of an open source project to develop and maintain two front-ends. With Bootstrap we develop code only once, and it works for computer screens, tablets (like the iPad), and smartphones
  3. Bootstrap is widely used. The look and feel of all elements is familiar. 
  4. And of course, it looks great ☺

The aim of the new front-end is to offer an easier way to interact with BibSonomy. To this end, we've defined a few rules, which we try to implement with the switch to Bootstrap.

  1. Give all elements room to breathe! Currently, there are too many control elements spread over areas that are too small.
  2. Use larger fonts! Larger fonts create greater clarity and better readability on mobile devices.
  3. If possible, use existing standard elements of Bootstrap. The elements of Bootstrap are proven and established. They are tested on different devices and browsers.
  4. Help the user where he/she needs help. With the last front-end redesign, we added a lot of help and hints. Now we want to place it where it supports the user even better.
  5. Keep navigation menus clear. Simpler menu structures help users find what they are looking for.

Finally, I would like to give you some insights into the new front-end:

[Screenshots: the post list and the new user menu; the publication details page; the friends overview page; the mobile view; the tablet view; and the view for computer screens]
We hope you like what you see. More information and release dates coming soon.

Keep happy and tagging!
Sebastian

          IBMVoice: On Martin Luther King Day, We Celebrate Early STEM Pioneers Portrayed In "Hidden Figures" Movie        
During the Space Race in the early 1960s, three "human computers" were instrumental in getting the Mercury astronauts into space. Three African-American mathematicians, Katherine Johnson, Mary Jackson and Dorothy Vaughan, became hidden figures, forgotten by history because of their gender and race.
          Joining the Software Revolution - John Lilly (Greylock)        
Investor and entrepreneur John Lilly shares how he became interested in human computer interaction at Stanford, and the moment he was inspired by the thoughts of entrepreneur Mitch Kapor, who championed software design to make us fundamentally more human in our interactions with technology.
          How to Build Your Own Alexa Service        

With the recent introduction of Amazon and Google products that provide Ironman-esque voice control functionality, we've been wondering lately what this means for the future of human computer interactions. Always on the lookout for emerging technology to get ahead of, we decided to put a project together to see what these little devices are capable of.

We had about 2 weeks before the three Viget offices were assembling for an all hands gathering, so we wanted to build something both fun and interactive. What we ended up with was an Alexa service that could figure out which Viget employee you were thinking about. We called it: The Know It All

There are a couple pieces to this puzzle - a Rails backend, a React frontend, and an Alexa ... other frontend. I'll cover the Alexa aspect more in depth as that's what's new and interesting here, but you can find links to the other pieces down below. Enough chatter, let's get into how this thing actually works!

Making an Alexa Skill

Amazon has a Developer Console, which may take some hoop jumping to get into. But once you're in, all of the integration work takes place inside of an Alexa Skill - more specifically, in the Interaction Model of that Skill, which includes an Intent Schema and Sample Utterances. Let's take a look at what that looked like for us:

Intent Schema

{
  "intents": [
    {
      "slots": [
        {
          "name": "answer",
          "type": "POSSIBLE_ANSWERS"
        }
      ],
      "intent": "Play"
    },
    {
      "intent": "AMAZON.YesIntent"
    },
    {
      "intent": "AMAZON.NoIntent"
    },
    {
      "intent": "Skip"
    }
  ]
}

Sample Utterances

Play begin
Play I want to play
Play {answer}
Play they are a {answer}
Skip i don't know

So, what's going on here? Sample Utterances are the entry point. When you say "Alexa, tell [the name of your service] to [do something]", it takes your [do something] and checks for a matching line. If there is a match, it invokes the associated Intent (identified by the first word in the line).

So when we say "Alexa, tell The Know It All that I want to play," the Play Intent is passed to the server endpoint we've configured.

Another piece to be aware of is the "slots" key. That comes into play when you say something like They are a woman. The Play they are a {answer} line would match there, which fires the Play Intent with the term "woman" in the answer slot.

And lastly I'll point out the POSSIBLE_ANSWERS slot type associated with our "answer" slot. Amazon has a few built-in slot and intent types if you want to hook into a well-known data set (e.g. dates, sports, actors, etc.). For our purposes, we had a custom list of possible answers to our questions, so we defined our own slot type to be matched on.
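
To give a feel for what that looks like: in the developer console, a custom slot type is just a name plus a list of example values, one per line, that Alexa uses to bias its speech recognition. The values below are invented stand-ins, not our real answer list:

POSSIBLE_ANSWERS

developer
designer
project manager
woman
man
remote employee

Note that slot values aren't a strict enum - Alexa may still hand you something outside the list, so the backend has to validate whatever arrives in the slot.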

The Backend

As I mentioned before, you can configure an Alexa Skill to make its requests to an API endpoint. With that hooked up, Amazon will send a POST request anytime there is an interaction with your defined Skill, and the user's speech will be sent along according to the schema you've laid out. It also ties in a session variable, which enables you to engage in a back-and-forth interaction with the user that you can continue or terminate at any point.
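
For reference, the POST body Amazon sends looks roughly like this (trimmed down, with placeholder IDs):

{
  "version": "1.0",
  "session": {
    "new": false,
    "sessionId": "amzn1.echo-api.session.[...]",
    "attributes": {}
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.[...]",
    "intent": {
      "name": "Play",
      "slots": {
        "answer": { "name": "answer", "value": "woman" }
      }
    }
  }
}

The session "attributes" hash is where the auxiliary parameters you want to carry between turns of the conversation live.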

A major help for us here was the use of the alexa-rubykit gem. It assists you in building up the appropriate response to send back to Amazon so you can easily define:

  • what the Echo should say next
  • what the Echo should say if it doesn't hear anything immediately
  • whether the session should continue or close out
  • any auxiliary parameters you'd like to track in the session
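
To make that concrete, here's a minimal sketch of what a handler could look like, written as a standalone Sinatra app rather than our actual Rails setup. The route, dialogue, and game logic are placeholders; the AlexaRubykit calls follow the gem's README:

require 'sinatra'
require 'json'
require 'alexa_rubykit'

post '/alexa' do
  # Wrap the JSON body Amazon POSTs to us in an AlexaRubykit request object
  alexa_request = AlexaRubykit.build_request(JSON.parse(request.body.read))
  response = AlexaRubykit::Response.new

  if alexa_request.type == 'INTENT_REQUEST' && alexa_request.name == 'Play'
    # A real handler would read the answer slot here and advance the game state
    response.add_speech('Got it. Is the person you are thinking of a developer?')
    session_end = false # keep the session open for the next question
  else
    response.add_speech('Thanks for playing!')
    session_end = true
  end

  # Assembles the JSON envelope Amazon expects, including the end-session flag
  response.build_response(session_end)
end

The real code handles quite a bit more (launch requests, reprompts, Pusher updates to the frontend), which you can see in the gist linked below.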

As promised, here's a gist of the Alexa specific pieces of Ruby code powering the backend - https://gist.github.com/efatsi/d95ec9b9fa35ed9a64ee7ba5c7a7fe7f.

The Frontend

I won't discuss the intricacies of this too much; it was really just an excuse for me to dabble in React, Microcosm, and React Motion, all of which are excellent tools by the way. The Microcosm app receives updates via Pusher (also excellent) and serves as a visual display for the current state of the active "game" being driven by Alexa.

There's also a fancy waiting page; let's just watch a nice gif of that.

[looping gif: The Know It All waiting page]

Wrapping Up

It's been fun and relatively straightforward to get our own custom Alexa Skill up and running. And if you ever happen to swing by our HQ office, definitely give The Know It All a spin yourself!


          Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race        
Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race
author: Margot Lee Shetterly
name: Drick
average rating: 3.91
book published: 2016
rating: 5
read at: 2017/07/04
date added: 2017/07/04
shelves: history, race-ethnic-studies
review:
I saw the movie and was moved to read (or, in my case, listen to the audiobook of) this book, and am so glad I did. Margot Lee Shetterly tells the full story of Dorothy Vaughan, Mary Jackson, and Katherine Johnson, three of the many African-American and White women hired by Langley Labs to be human computers. These three, along with many others, ended up playing pivotal roles in the development of the planes that flew during World War II and the spaceships in the Mercury and Apollo programs. What one gets in the book that is missing in the movie is the full story of how these three women came to be in the NASA program and their positive effect not only on that program, but also on their families and communities. Shetterly also does an excellent job of making intelligible the high-level mathematics these women and all the people in the NASA program worked with. Finally, she tells this story against the backdrop of the struggle for racial justice taking place in the Civil Rights Movement and the battle for gender equality taking place in the feminist movement. I would strongly recommend this book to get the full story the movie can only hint at.

          Hidden Figures and Unclear Facts: Lessons in Clear Communication from Hollywood and NASA        

The breakout film Hidden Figures made waves recently for shining a light on the until-now largely unsung work of black female mathematicians during the Space Race. These incredible women were among NASA’s “human computers”, hand-calculating the flight trajectories of some of the U.S. [More]

The post Hidden Figures and Unclear Facts: Lessons in Clear Communication from Hollywood and NASA appeared first on MarketSmiths.


          SEI-HCII Collaboration Explores Context-Aware Computing for Soldiers        
As the number of sensors on smart phones continues to grow, these devices can automatically track data from the user's environment, including geolocation, time of day, movement, and other sensor data. Making sense of this data in an ethical manner that respects the privacy of smartphone users is just one of the many challenges faced by researchers. In this podcast, Dr. Anind Dey, director of the Human Computer Interaction Institute (HCII) at CMU, and Dr. Jeff Boleng, principal researcher at the SEI, introduce context-aware computing and discuss a collaboration to help dismounted soldiers using context derived from sensors on them and their mobile devices, to ensure that they have the information and sensor support they need to optimize their mission performance.
           Constructing structure maps of multiple on-line texts         
PAYNE, S. J. and READER, W. R. (2006). Constructing structure maps of multiple on-line texts. International journal of human computer studies, 64 (5), 461-474.
           Pattern languages in HCI: a critical review         
DEARDEN, Andy and FINLAY, J. (2006). Pattern languages in HCI: a critical review. Human computer interaction, 21 (1), 49-102.
          New book on Human Computer Confluence - FREE PDF!        

Two pieces of good news for Positive Technology followers.

1) Our new book on Human Computer Confluence is out!

2) It can be downloaded for free here


Human-computer confluence refers to an invisible, implicit, embodied or even implanted interaction between humans and system components. New classes of user interfaces are emerging that make use of several sensors and are able to adapt their physical properties to the current situational context of users.

A key aspect of human-computer confluence is its potential for transforming human experience in the sense of bending, breaking and blending the barriers between the real, the virtual and the augmented, to allow users to experience their body and their world in new ways. Research on Presence, Embodiment and Brain-Computer Interface is already exploring these boundaries and asking questions such as: Can we seamlessly move between the virtual and the real? Can we assimilate fundamentally new senses through confluence?

The aim of this book is to explore the boundaries and intersections of the multidisciplinary field of HCC and discuss its potential applications in different domains, including healthcare, education, training and even the arts.

DOWNLOAD THE FULL BOOK HERE AS OPEN ACCESS

Please cite as follows:

Andrea Gaggioli, Alois Ferscha, Giuseppe Riva, Stephen Dunne, Isabell Viaud-Delmon (2016). Human computer confluence: transforming human experience through symbiotic technologies. Warsaw: De Gruyter. ISBN 9783110471120.

 


          Third seminar details        
Seminar 3

Organised by Lansdown Centre for Electronic Arts, Middlesex University, held at London Knowledge Lab.

Download the programme for the day (one-page PDF file, 40k).

Following the successful formula of the previous two seminars, the day combined personal case studies with in-depth investigations of key issues.

  • Prof. Stephen Scrivener University of the Arts, London
  • Prof. David Durling Art and Design Research Institute, Middlesex University
  • Prof. Carol Costley Institute for Work Based Learning, Middlesex University
  • Dr. Stephen Boyd Davis Lansdown Centre for Electronic Arts, Middlesex University
  • Helen Bendon Lansdown Centre, Middlesex University
  • Dr. Ralf Nuhn Lansdown Centre, Middlesex University

Prof. David Durling Art and Design Research Institute, Middlesex University
Practice in the Design PhD: the debate so far.

With the ink still wet on his PhD certificate, [Dr] David Durling entered the academy in 1996 as a research director in a School of Art and Design. Umpteen research publications, several successful completions, two jobs and a wife later, he reflects upon more than a decade of rescue supervision and endless debates about researchy things, often confused and sometimes remarkably simple.


Dr. Stephen Boyd Davis
Lansdown Centre for Electronic Arts, Middlesex University. Defending the Thesis: why the written thesis is now a better idea than ever.



The written thesis is under attack. This presentation defends it against some of the principal objections popularly made. The argument is based on considering the "power of the word" in particular ways: above all, that the written thesis is a visual medium (with the important affordances that this confers) and that the global digital environment in which each thesis is now situated means that the old objections to the unread dusty volume on the library shelf are a thing of the past.


Helen Bendon Lansdown Centre, Middlesex University.
Practice as Research: a personal account of a practice-based contribution to an EPSRC project.

Helen undertook a creative residency with Vivacity2020 in 2006/7. This EPSRC-funded research consortium engaged in a five-year study of urban sustainability and the 24-hour city. One of two artists selected to work alongside academics, architects, town planners, social scientists and public agencies, Helen made a body of new video work. It was the intention that the inclusion of artists would assist in providing innovative and interactive ways of engaging the public with the research, and would broaden the perspective on issues of change and progressive urban developments. This presentation uses the experiences with the Vivacity2020 project to explore issues around creative research methodologies and how these sit within a wider interdisciplinary research project.


Dr. Ralf Nuhn Lansdown Centre, Middlesex University.
Theory and practice in the PhD: a personal reflection.

This presentation focuses on my mixed-mode PhD in Media Arts, completed in autumn 2006.

I commence with a video recording of the key practical project for my thesis, UNCAGED, which is a series of six interactive installations aiming to bridge the gap between the screen-based worlds of computers and their immediate physical surroundings (see www.telesymbiosis.com). This is followed by a discussion of UNCAGED's contextualization within a broader theoretical framework, ranging from aesthetic considerations, scientific and philosophical concepts, and the particular role of sound, to human computer interaction (HCI). I then describe how my critical engagement with the work, largely informed by Jean Baudrillard’s conception of the "real" and the "virtual", has resulted in a new, heightened sensitivity regarding the role of digital technology in my artistic practice and has strongly influenced my subsequent artistic creations. (This is, at least, my argument within the narrative of my written thesis.)

The subsequent part of my presentation problematizes two related notions regarding my sentiments about my own PhD, as well as mixed-mode PhDs more generally. First, I (simply) question the adequacy of academic regulations concerning the actual format of mixed-mode PhDs, in particular the requirement for the thesis to fit on a library shelf, which inevitably seems to obscure the practical dimension of the work. Second, I discuss the relationship between the written and the practical part from a more theoretical perspective, arguing that, at least in some cases, the former might just be an unnecessary "interface" narrowing the richness of the practical work within very clearly defined limits and, thus, becoming a mere academic exercise.


Prof. Carol Costley Institute for Work Based Learning, Middlesex University.
On the distinction (if any) between doctorates which are research qualifications and those which are qualifications in advanced practice.



Since the early 1990s, work based learning (WBL) has been developing in UK universities within subject disciplines and also outside disciplinary frameworks as a field of study in its own right. Both forms of WBL (as a mode of study and as a field of study) have developed pedagogies that have moved away from more traditional approaches. In some part this can be attributed to the mature adult community who are attracted to part-time courses that incorporate study into their work rather than a learning experience unrelated to working life. However, the developing pedagogies also relate to a wider, more transdisciplinary reflection of a knowledge-based society.

Following the successful institution of WBL ‘taught’ degrees at Bachelor and Master levels, the natural progression was to introduce work-based doctorates. Professional doctorates had already started to increase in the UK, and in the late 1990s the Doctorate in Professional Studies, sometimes called Professional Practice (DProf or ProfD), was introduced. The DProf is aimed at the actual work activities and circumstances of people engaged in high-level professional practice. Candidates already have considerable expertise in their work, and their work-based research and development projects are likely to draw upon knowledge from a range of fields and also on tacit and professional knowledge. The candidates’ situatedness outside the academic sphere brings about a balance of activity, focus and control between the academic and the professional environments.

Drawing mainly on the DProf, the presentation explores how postgraduate WBL works in higher education, with some consideration of its academic underpinning (Costley and Stephenson 2008). There is discussion concerning generic assessment criteria; the structure of the doctoral programme; the kinds of research and development projects undertaken by the candidates; and the learning and teaching processes, which are ‘essentially concerned with the individual and their own practice’ (Scott et al 2004).


Prof. Stephen Scrivener, University of the Arts, London.
Artistic and designerly research: articulated transformational practice.



Starting from a discussion of the conditions of research, as suggested by dictionary and institutional definitions, this paper identifies and elucidates the symptoms indicative of research that provide the grounds for criteria that function as rules or tests for judging something as research, and on that basis approving or disapproving of it as such. These conditions and criteria provide an inclusive framework that accommodates differences between interpretative frameworks and between the research method demanded by a particular research project and the given interpretative framework in which it operates. Artistic and designerly research, it is argued, should also exhibit these symptoms and hence be subject to the same rules and tests. This being the case, why qualify research by the term artistic or designerly? What might be the additional or special symptoms and associated evaluative criteria of such research? To explore this question three ways of thinking about the relationship between the work of art and design and works of art and design, as described by Frayling (1993), i.e., research into, through and for art and design, are explored.

It is concluded that neither research into art nor research through art and design merits the qualification of artistic or designerly research. However, it is argued that research for art, i.e., cognitively surprising artistic and design interventions that expand knowledge and understanding of the nature and scope of art and design, does merit this distinction, because it implies additional subject-specific symptoms and criteria. Research for art and design, it is proposed, claims material interventions that transform what is apprehended as art and design, concurrent with claims to knowledge of the manner in which art and design has thereby been transformed. Consequently, four additional symptoms of research for art and design are identified: transformational art and design is claimed and produced such that correspondence is instantiated between the cognitive adjustment achieved in its apprehension and the claims made for that apprehension as yielding an expanded understanding of art and design.

Download the text of Prof. Scrivener's talk here

You may be interested to see the kinds of PhDs undertaken at the Lansdown Centre.
          Help Desk Administrator Resume        
James Edwards
855 Von Kolintz Road Mt Pleasant, Pittsburgh
SC 29464-3299

Objective:

A position in application or help desk department with a focus on usability in the Pittsburgh PA area.

Education:

University of Pittsburgh, Pittsburgh, PA
Master of Science in Information, 2000

Human Computer Interaction (HCI) Specialization
GPA 4.00 / 4.00
  • Member of Michigan Ohio Computer-Human Interaction (MOCHI) chapter
  • Member of Association for Computing Machinery (ACM)

University of Pittsburgh, Pittsburgh, PA
Bachelor of Science in Engineering, Computer Engineering, 1994

Minor in Management Information Systems
Magna Cum Laude, GPA 3.83 / 4.00
  • National Merit Scholar
  • Member of Tau Beta Pi National Engineering Honor Society
  • Member of Golden Key National Honor Society

Experience:

Hubbard and Assoc, LLP Pittsburgh, PA 2/1995 – 7/1999
Software Support Specialist

  • Implemented new software and upgraded existing software on 17 Novell Netware servers and on individual workstations.
  • Automated the installation of software to 1200 end-user PCs via DOS batch files, Symantec Basic scripting, and Novell Application Launcher snapshot technology.
  • Provided second level support to internal Help Desk to resolve user concerns with specific applications, including Windows 95, Microsoft Office, GroupWise, and numerous industry specific applications.
  • Identified and implemented technologies and methods of using existing technology to help users work more efficiently.

Academic Computing Services, University of Pittsburgh, Pittsburgh, PA 9/1993 - 12/1994
Consultant/Monitor

  • Provided personal assistance to students on IBM and Macintosh computers
  • Maintained computing facilities and managed printers

Verizon Development Engineering, Pittsburgh, PA 5/1993 - 8/1993
Student Technician/Professional Trainee, Technical Services, Quality and Systems

  • Training and help desk for over 500 people.
  • Helped establish an inventory database to track computer equipment
  • Trained new hire secretaries in Word97, iManage, Legal MacPac, Softwise MacroSuite, CompareRite, and CMS Time; trained summer associates and incoming new attorneys. Involved in the training and rollout of Outlook 2000.

Technical Skills:

Computer Skills

  • Windows XP, 2000, 98, 95, NT
  • Microsoft Office
  • Word 2002, 2000, 97, 7.0, 6.0
  • WordPerfect 9.0
  • IManage/infoRite
  • Docs Open
  • SQL
  • Softwise MacroSuite
  • Legal MacPac
  • PowerDocs
  • Ability and desire to learn new technologies quickly

Usability Skills

  • User Needs Assessment
  • User Centered, Participatory Design
  • Competitive Analysis
  • Cognitive Modeling
  • Heuristic Evaluation
  • Usability Testing
  • Interface Prototyping and Testing
  • Task and GOMS Analysis
  • Survey Development and Analysis
  • Content Analysis
  • Card Sorting
Relevant Coursework:

  • Evaluation of Systems and Services

Responsible for highly skilled training and floor support for diverse groups of people (legal secretaries, word processors, attorneys, paralegals, and administrative staff) in a variety of software applications, with the aim of presenting software in such a way that people thrive during transitions and conversions. Worked with clients as project lead trainer to resolve on-site training related issues. Assisted with documentation. Training Consultant/Help Desk.


          Super Mensa Lumen, or "Luxsa" for short        
Are you a Secret Super Brain?
(and don't even know it?) 


How about we consider some kind of expanding, brain-sizzling, angelically devilish entertaining questions before watching a mindless video on meditation? (Get it?) Those who do, continue reading.

Are you in league with Isaac Asimov or Buckminster Fuller and don't even know it? Let us find out! 

But before we do, let us give ourselves a name, so that people of exceptionally high intelligence might have a label with which to identify should the subject arise at their next dinner party. 

Let's start with Mensa. "Mensa" is Latin for table, so Beyond Mensa is Super Mensa Lumen, or "Luxsa" - our newly adopted and beloved colloquial expression for Smarty Pants. It also means that our desk now has a table lamp.

Disclaimer: If you are concerned with elitism, Luxsa might not be for you ... for elitism is the belief one is superior to one's peers, while elite is sufficient enough. As such, Luxsa is elite. 

Luxsa is not mindless Trivia without context. For to do well on Trivia, one only need be in possession of a well-furnished, overstuffed mind. But if Trivia is a favorite pastime, you'll find yourself in good company here. 
  1. Which does not belong? George Sand, George Eliot, George Orwell? 
  2. If while in a coffee shop you heard people discussing ullage and botrytis, what is it they were discussing? 
  3. In the novel by Jules Verne, who went around the world in eighty days?
For the more arcane, how many imaginary places from world literature can you name? For the Super Arcane, how many landscapes from imaginary places can you close your eyes and verbally walk me through? 

Come back down from the slender stilts that rise from the ground at a great distance from one another and are lost above the clouds of the city and aim your spyglass and telescope back upon Luxsa for you will never tire of examining it, page by page, leaf by leaf, stone by stone, particle by particle, contemplating with fascination abstract notions of concrete realities such as absence and presence.  

Now that our minds are warmed up, let us start 4 hours after the meridian in Greenwich strikes 12 o'clock noon, which would be right about now. 

The following set of questions are relative to the unimportant matters or things one's mind considers. The tedious, never-quieting internal dialogue and debate on the nature of chicken-and-egg riddles and tyrannical influences and civil responsibility. The fun and charming challenges of nurturing a large working memory and the triumphs in fine mental tuning. Let us draw our own lines and color inside or outside them, and then arrange the elements in such a way as to arrive at a conclusion, a decision, or solution to some random and entirely important-in-the-moment thought ... to a place where our intuition resides. And by intuition we mean not some mystic or mysterious force that belongs in the realm of psychic phenomena. But rather a real, definable, and, to a greater or lesser extent, present in all of us accumulation of millions - perhaps billions or trillions - of tiny, "trivial" bits of information stored in the recesses of our memories that we harness, dust off, and bring together in an appropriate combination when the situation calls for it.

Armed with our thinking caps we enter a room filled with thoughts, and instantly we experience a feeling, either positive or negative. Let us pause and consider what creates that first impression? Are we hard-pressed to offer specifics? 

If our reaction be negative, what in the world, might I ask, is that human computer of ours doing? Is it being unruly? Focusing on facial expressions, mannerisms, a way of walking, and style of dressing - and with matching socks, dressing up these experiences with, and reactions to, "trivial" matters based on arcane or worse yet - boring information from the past? A kind of sad and desperate subconscious picture is drawn and in a fraction of a second reacted to that presents to the mind the notion of "bummer vibes?" 

The longer and more actively we engage our brains, the keener our intuition becomes. There are those who can take one look at a person, read a few words in a comment, or observe someone's manner and in an instant know precisely how that individual will react in certain circumstances. Dangerous, you say? Indeed, but only when used for ill. For there are those whose systems independent of their prowess of intellect adhere to higher grand principles from which to engage the world. Higher, not mindless and unexamined. 

Because the accumulation of facts is important to intuitive thinking, to the myriad of snap decisions and quick judgments one must make in order to go about one's day; trivia, in all its lack of glory, is part and parcel to our thinking experiments. 

We are almost compelled to conclude that Luxsa will be filled with Trivia and relatively unimportant matters or things, but these things can be another's essentiality. As we are not aware of the essentiality of others, those things by which they define their life purpose, we can only surmise - a few of us effectively - what those things might be based on their actions, words, complaints and celebrations ... for data examined is often illuminated. And fortunately for us, we have a light on our desk to see it. 

There will be some cramming in the head of information that one must merely suck up and learn, and by learn I mean not memorize. If there is a subject, rather than consult Google to see which posts rank highest and then take as proof of answer that which fits one's mindset; delve deeper, read scholarly journals on the subject, and "think" about the matter and allow your mind to openly wonder without bias and preconception. 

Travel along the neural framework you have carved for yourself with ceaseless thinking activities. If for any reason your neural framework is not functioning clean and clear of clutter, draw yourself a mind map of the 15 basic thinking paradigms by which your brain processes thought. Then delve down deeper into categories and subcategories and exceptions that belong to those areas of thought. Once you have mapped out where your thoughts reside, with a big picture view, you can now make the necessary adjustments to put your brain on your desired track. If you prefer to remain in the mire of twisting and turning and churning in your stomach over trivial matters, enjoy. If you consider that an unpleasant experience, retreat to a safe harbor, examine your mind's map, and adjust accordingly. 

One final thought, if you come to this activity with good cheer and sufficient rest for your mind, your enjoyment will be increased tenfold. In other words, you'll have more fun. If this latter comment on amusement was charming and nostalgic rather than illuminating, welcome to Luxsa. 

Match Wits with Luxsa

  1. Describe how your perfectionism developed. 
  2. Under which hierarchy has your critical perception and evaluation of values arisen? 
  3. What is frustration and why is this question important? 
  4. Define superiority and inferiority? 
  5. Does thinking cause you disquietude? 
  6. What is the value of agitation and anxiety? 
  7. Does surprise and shock exist? 
  8. How does one rectify moral failure (guilt)? 
  9. Which positive maladjustments have you adopted? 
  10. Does antagonism against social opinion and protest against the violation of intrinsic ethical principles make you feel better about yourself? 

Though uncomfortable, those who can answer these questions have the potential to fully realize and illuminate their mind map. 

In our next activity, we'll pull out our drawing utensils and make our very own mind map so that we might more easily keep our table lamp shining bright. 

          Dec 13, 2017: CIT Training: Interaction Design: 3-Day Course at Phillips Hall        

This course is for Designers, Engineers, Managers & Project Leaders. This is an intense, immersive, and engaging one-semester-equivalent starting course in the theory and practice of Human Computer Interaction. It is down-to-earth, understandable, and delivered by the speaker our attendees consistently report as our most entertaining. Upon completion, you'll be fully prepared to organize for and engage in real-world design. For more information, check out the full course outline under Course Attachments. If you are interested in following up with the UX Certification program, exam fees will be $80 per participant.

To register for this class, visit:

https://cornell.sabacloud.com/Saba/Web_spf/NA1PRD0089/common/ledetail/cours000000000005080

View on site | Email this event


          Janet Murray's response        
by Janet Murray
2004-04-02
Riposte to: Nick Montfort

Nick Montfort is a committed advocate of the genre that he calls Interactive Fiction (IF). He is helpful in clarifying its enduring charm and distinguishing attributes, and also in raising our awareness of the non-orthogonal nature of the many categories currently being brought to bear on storytelling and gaming in digital environments. Identifying genres and distinguishing interpretative categories is a crucial part of extending the coherence and expressiveness of the new medium, and Montfort makes a persuasive case for IF as its own digital genre with clearly identifiable attributes, with clear formal roots in the ancient story/game known as the riddle.

His identification of characteristics reinforces my own confidence in the helpfulness of looking at the digital medium as a single entity with many genres but with four chief and defining characteristics: the procedural, participatory, spatial, and encyclopedic affordances of the medium.

Montfort points out that it is the world-making quality of IF that distinguishes it from text in other media. World-creation is something that novelists and other storytellers do, and also something that some gamemakers do. Games derived from stories, such as Dungeons and Dragons, which is loosely derived from Tolkien’s fantasy universe, combine the instantiation of detailed, richly imagined verbal description with the pleasure of enacting behaviors that actively create belief in the world. The world becomes more present when we can act in it. On the computer, the world gains an even stronger presence because it becomes instantiated in an artifact that has behavior. We can think of this artifact as a machine, as Nick suggests, but we experience it as interactors as a performative world, a world that reveals itself not only in words but in actions. Of course, in IF the actions are embodied in words, but they are actions nonetheless. The behavior of these objects comes from the procedural power of the computer; their responsiveness from the participatory affordance; the detailing of the worlds that reinforces our belief builds upon the encyclopedic quality of the digital medium.

To the extent that the world of the interactive fiction is described as a geographical place or a physical space, we do not merely read about it - we navigate it. The power of the original IF games was explicitly centered in the navigation of caves and dungeons, and the pleasure of such navigation was extended into the shared networked spaces on multi-user domains (MUDs). As I point out in chapter three of Hamlet on the Holodeck, the spatial property of the digital medium is quite independent of its multimedia affordances, because it is derived from the procedural and participatory properties of the medium. The space of Zork is real to us even though there are no pictures, because it is consistently scripted and therefore navigable by command. This power of the computer to create navigable space is one of its most expressive affordances, and text-based environments have provided some of the most magical experiences of this representational power.

IF was one of the first and most energetic genres to be developed on the computer because of the fit between the enjoyment of a fantasy world and the development of the procedural, participatory, encyclopedic, and spatial qualities of a new medium. In one sense, the magical domain that IF explores is the inner space of the computer itself - the hackers that explored and elaborated upon the spell-rich dungeons also referred to themselves as “wizards” in their “real life” work of programming.

But today’s magic is tomorrow’s old technology. World-creation, especially spatially concrete world creation, poses a serious design problem as IF moves into increasingly sophisticated computational environments. In the 1970s when text-based adventure games were invented, and well into the 1980s which saw the flourishing of the form in the Infocom games, the standard computer interface was the command line. Hypertext was still a gleam in Ted Nelson’s eye, word processing was in its infancy, and the graphical interface driven by Doug Engelbart’s mouse was languishing in the invention orphanage of Xerox PARC waiting for Steve Jobs and his design team to come find it and adopt it.

Navigation through digital space has changed a great deal since 1980. Interactors are used to surfing the Web by mouse-clicks, arriving at Web “sites” and departing from them. The change from file transfer protocol, which users thought of as the “uploading” and “downloading” - or shipping and receiving - of containers of bits, to hypertext transfer protocol, which users experience as personally traveling from one place to another is, among other things, a change in dramatic expectations. Montfort’s IF is different from hypertext, but in the online MOOs it often co-exists with hypertext. The navigation systems are then in conflict: do we go from room to room by typing “N” “S” “W” “E” as in Zork, or do we click on doorways in a picture of a room or on a compass rose? Does the screen change visually in answer to our commands or is place instantiated solely by text? Do we set up multiple channels of interaction, one for typing or clicking through directions, one for typing commands other than directions, one for conversing with virtual characters? Do we use the same single-line text box to type out these three very different modalities of interaction?

Ben Shneiderman, one of the founders of the discipline of human computer interaction, has articulated the core design principle of “direct manipulation,” which is widely accepted as a desirable goal for structuring interaction. Typing “N” in order to move through a virtual space is far less direct and less manipulative than clicking on an image of the space or even an image of a compass. Is there a reason why the design criteria of other digital environments should not apply to IF? Is there a pleasure in indirect manipulation, in refusing to concretize the space? After all, the existence of an affordance should not dictate design. We choose between the panorama and the close-up in photography, designing by exclusion as well as inclusion. Is there a valuable trade-off in not allowing the interactor to move by mouse-click, or to click and drag a picture of coal into a picture of a diamond-making machine? Why not structure the IF world as a resource allocation simulation with pointing and clicking and dragging of objects? Why would we choose to keep the world in words and to keep the interaction verbal? Why privilege typing words over pointing to things, the keyboard over the mouse? Is the category of “Interactive Fiction,” like the category of “literature,” intrinsically verbal?

Montfort appropriately speaks of a “natural language” interface as one of the hallmarks of the form, but the use of commands drawn from natural language runs the risk of raising the interactor’s expectations of what can be expressed, while severely limiting their actual expressiveness. The command-line, programming code interface can conflict with the literary aspirations of the author. In online MOOs it is common to see verbose descriptions of spaces, whose tone and length evoke bookishness if not literary merit, combined with the restricted code of the command line. These two very different modalities create a discord, which is further heightened if the interactor is engaged in conversation with a character within the story.

So IF has certain intrinsic design difficulties, a built-in awkwardness in the way it represents spatial navigation and the inconsistency with which it handles language. And yet it continues to draw devoted practitioners and interactors. It is, in Montfort’s view, a still vibrant tradition.

Why does IF work despite these design difficulties? Perhaps the answer lies in its structure as a riddle. Riddles, unlike puzzles, are always verbal and are based on a conversational exchange. They are intrinsically interactive, and have a formal syntax, a variant of call-and-response structure. A riddle is a word-puzzle, framed as a conversation. The riddler poses a question that has to be reasoned out. But usually there is a surprise or misdirection involved. “What’s black and white and red all over?” we asked in my childhood and children still ask. The answer: a newspaper. This riddle, like many others, only works in speech because we distinguish between the homonyms “red” and “read” in writing. In fact, it is a way of calling a child’s attention to the category of homonym, or of helping a child to express her discomfort with such anomalies. Riddles are most popular with children who are just learning to read, or just learning how to reason things out, to hold more than one idea in mind at the same time. Riddles are like puns in the pleasure they take in the expressiveness of language. They are popular in Shakespeare, as are puns, perhaps because he was writing at a moment when an entire society was adjusting to the spread of literacy, the result of the maturity of the printing press and the form of the book. A riddle is a kind of debugging of our cognitive apparatus, calling attention to possible mistakes in verbal or logical decoding. Sometimes, as in the riddle of the sphinx or the graveyard riddles in Hamlet, they explicitly point to a larger human conundrum: the riddle of our consciousness and our mortality that underlies all gaming and all storytelling. Riddles are about the power and the limits of representation and communication in themselves.

IF is a riddle most of all because it is a conversation. It is not a conversation with an imaginary character, a chatterbot like Eliza, though it may include characters. It is a conversation with the author of the imaginary world, who is challenging the interactor to solve the puzzle, to figure out what the author has in mind, to debug their own interactive processes, repeating the sequences until the desired ending is reached. In the early online games there was no way of saving one’s position or undoing moves. The space could be traversed at will, assuming there were not locked doors, but time was relentless and irreversible. As in a conversation with another person, you could not unring a bell; as in an obsessive or superstitious ritual, the only way to get it right was to do it in exactly the acceptable order, no matter how many repetitions it might take to get it right. An interactor learning an IF environment had to memorize the sequences (or record them on paper) and say them back in the right order to please the god of this magical world. Meanwhile, the author is taunting or encouraging the interactor, and in either case making clear his or her own cleverness. Like the poser of the riddle, the author of an interactive fiction exists only as a conversational partner. Like the person to whom a riddle is posed, the interactor is in a contest, drawn in by a desire to “match wits,” with the riddle-poser, to test the operation of their own cognitive processes against the trickery of the master.

For my own taste, I prefer the riddling of a murder mystery to a dungeons and dragons plot, and so the only interactive fiction that I have found appealing was Deadline, an old Infocom game.

Would I play it again in command line form? Probably not: I’d rather play a game like The Last Express, which sets up its puzzle in sound and images. I find the voice of the narrator somewhat irritating, as I now find the once delightful voice of Henry Fielding in Tom Jones. Like that eighteenth century novel, IF stands at the beginning of a narrative genre, and its emphasis on the narrator is perhaps in part a sign of its self-consciousness.

But would I read anything that Nick Montfort has to say about the qualities and pleasures of IF? Absolutely.

Brenda Laurel responds

Nick Montfort responds


          The Most Human Human: what artificial intelligence teaches us about being alive        
book summary from publisher: Each year, the artificial intelligence community convenes to administer the famous — and famously controversial — Turing test, pitting sophisticated software programs against humans to determine if a computer can “think.” The machine that most often fools the judges wins the Most Human Computer Award. But there is also a prize, [...]
          China Will Overtake the US in Computing…Maybe, Someday…        
[note: the following is a rough draft -- I appreciate comments as I work this into shape and add relevant links to further sources]

December 6, 2011

Abstract:
Today, The New York Times published an article by Barboza and Markoff titled “Power in Numbers: China Aims for High-Tech Primacy.” This article echoes frequently expressed alarmist opinions that China is poised to take over the world. I have lived in Beijing for the past 2.5 years as a visiting researcher at Microsoft Research Asia, I've taught Computer Science classes at Tsinghua University, and it is my opinion that China has major obstacles to overcome before becoming a high-tech powerhouse. The biggest of these is the way creativity is discouraged in Chinese classrooms. Chinese students who spend time at western universities do pick up these skills. Creativity and the inclination to challenge norms in disruptive rather than incremental ways are at the heart of computing innovations. These traits are all but absent from Chinese universities. A solution I pose is an initiative called World Lab. We need a place for people from various cultures, backgrounds, and countries to come together to take risks in designing new technologies and to train students to become global leaders.

Today's NY Times article by Barboza & Markoff, “Power in Numbers: China Aims for High-Tech Primacy,” would lead you to believe that the title of this blog entry (“China Will Overtake the US in Computing”) is almost a certainty. I could not help reading this somewhat alarmist article without cringing, as it follows a pattern of reporting on China that I've seen since before I moved to China in 2009 and that I have noticed more frequently over the past two years, now that I'm more sensitized to the realities of China's economic rise. This lack of subtlety and nuance on China is what I've come to expect from media outlets such as CNN, and I am more surprised to see it from seasoned journalists who are respected for their expertise, Barboza for reporting here in China and Markoff for reporting on computing.

As I prepare to leave next week to return to my position at the University of Washington, I am starting to reflect on what I have learned in my 2½ years in China. My own view is that there is incredible potential in the computing field in China – this is one of the many reasons I chose to pick up my family and move here. At the same time there are many important barriers to China’s eventual rise in computing and these barriers will not fall on their own without efforts at reforming both the educational system and government regulation, let alone certain Chinese cultural norms that are thousands of years old. That is why I’ve subtitled this blog entry “…Maybe, Someday…”. That is, I don’t believe China will rise above the US in computing anytime soon and if it is to do so, several important changes must first take place.

In the rest of this article I'll try to touch on 1) why I am qualified to even have an informed opinion on China's rise in computing, 2) what I see as the misconceptions or omissions in the Barboza & Markoff NY Times article, and 3) what I think China must do to reach its potential in computing, and why I think this is a good thing and not something the West should be worried about.

Who am I to Comment on Chinese Computing
As I read the NY Times article I was a bit surprised by some of the folks they had used to comment on the state of Chinese computing. I started to think, “who are the proper experts on this topic?” Later as I pondered this question, I began to think I’m as good an expert as anyone, at least from the academic computer science side, to comment on the rise of China in computing. Why is that? 

I have spent 2½ years living in China and in that time I have: worked at Microsoft Research Asia (MSRA), the top research organization in the country, taught at Tsinghua University, the top computer science department in the country, and organized several major technical research events in China. Before coming to China, I earned my PhD at Carnegie Mellon University (CMU), one of the top computer science departments in the world, earned tenure at Berkeley (another top department), founded a start-up, ran a ubiquitous computing research lab for Intel, and served as a professor at the University of Washington (another top computer science department). More detail on my background is here. I think this experience puts me in good position to make an informed assessment on computing in China. You be the judge. I’m sure I’m not right on everything and these are just my opinions, but after two years I’ve seen quite a bit, talked to many people, and I’m starting to have a good feel for what is going on here in China.


What is Wrong with the Rising View
I believe there is no question that China is quickly rising in all endeavors, whether it is in terms of China’s economics, infrastructure (think ports, highways, freight railway, and high speed rail), education, science, or technology. It is an amazing sight to see firsthand and the energy one feels living here during this important time in history is quite incredible (more than even in Silicon Valley during the 1st Internet boom of the mid to late 90s). Computing is no different from these other areas and China has made huge strides in 20 years, as reported in the NY Times article.

The key questions to ask are 1) where is China with respect to the US and the West in terms of computing today? and 2) where will China be in the future? The impression given by the NY Times article on both of these questions is where I most felt the article lets the reader down. Let's cover each of these in turn.

Where is China Computing Today
 Academic computer science has been the underlying basis for many of the major commercial strides in computing in the US (e.g., the Internet, the graphical web browser, compression for wireless communications, cloud computing, speech recognition, web indexing and search, gesture and touch-based user interfaces, location-aware computing, etc.).

China has made big strides in academic computer science over the past 20 years in terms of expansion of its programs and making a shift from mainly producing software for state-owned companies to undertaking leading edge computing research and education. In fact, China has passed major milestones in the past 5 years in terms of government support for research and in starting to publish in top computing journals and conferences.

Everything’s Big in China
Five to ten years ago, one would almost never see papers at the top academic computing conferences from China’s researchers, with the exception being papers from Microsoft Research Asia, which was started in Beijing back in 1998 by a group of Chinese and Taiwanese researchers who were trained in the US and worked in the US before returning to Asia. Today, there are many Chinese researchers who are publishing papers at top research venues. But, the number is still quite small given the large number of universities and researchers that are pursuing computing research in China. Computer Science & Technology is the largest undergraduate major in China and some estimates I’ve heard say there are over 1,000 computer science departments in China and over 1,000,000 computer science majors at a time across these departments. This is huge! The government is clearly making massive investments in computing.

Supercomputing isn't so Super?
One of the big accomplishments Chinese computer science has made, given these investments over the last 5-10 years, has been in supercomputing: the very large, high-speed machines often used for climate modeling, weapons simulation, etc. Last year China temporarily had the fastest machine in the world with the Tianhe-1A. This coveted spot on the TOP500 supercomputer list has traditionally been held by either US or Japanese supercomputers, though it changes all the time as new, faster computers come into service.

Although getting to the top of the list was a major accomplishment for China, the news of China’s conquest of supercomputing really didn’t seem to be big news for almost anyone I know in leading computer science departments. Why is that? I think most leading computer scientists believe that although supercomputers are useful for certain problems, this is a technology of the past that will simply improve incrementally with underlying processor improvements (in fact, most supercomputers today use conventional processors used in desktop computers rather than the special purpose processors used in the past).

The big innovations in supercomputing have been in the programming models, network interconnects, and most recently in cooling/power usage. But, people seem to see much more important innovation going on in the cloud computing clusters that literally combine thousands of commercial processors together in standard racks connected with traditional networks in huge data centers around the world. This is the technology that powers Google, Microsoft, Apple, Amazon, and the many other web computing giants of the world and is then resold inexpensively to every little web site or mobile phone application that needs to do computing in the cloud. This type of architecture supports a far wider range of applications than supercomputing. Cloud computing is a hot topic in both industrial and academic computer science research and American computer scientists are clearly far in the lead in this area of work.

Academic Publications
In my own subfield of Ubiquitous Computing (ubicomp) and Human Computer Interaction (HCI), China is still in its early stages. Ubicomp has been around since 1991 and in those 20 years China has had almost no presence in the field (for example there were no papers from China at the 2010 UbiComp conference). This year I co-organized the conference with my colleagues at Tsinghua University and we held UbiComp 2011 here in Beijing (link). There were over 300 papers submitted and only 50 were accepted for presentation at the conference (a highly competitive 17% acceptance rate). Although this year we saw 38 papers submitted from China (last year there were only 10), only 3 of these papers with primary Chinese authors were accepted (and all of those were from Microsoft Research Asia). There were many US universities that alone had as many or more papers than all of China (e.g., Carnegie Mellon had seven and UW had four!).

This trend is very similar at other top computing conferences: China had almost no representation 5 or 10 years ago and now there is a smattering of papers (e.g., 1-3 papers/year – out of a 30 paper program – the last couple of years at each of the top systems and networking conferences: SIGCOMM, NSDI, and SOSP). Again, the majority of these papers are coming out of Microsoft Research Asia, not the top Chinese universities.

So we see China starting to be represented at major computing conferences, but Chinese researchers are at this stage no more impactful than researchers from many smaller countries (e.g., France). Given the large number of universities and researchers pursuing computing in China, the interesting question is whether this is a straight line that is going to continue its meteoric rise of the last few years (similar to China's economic growth of ~10% per year for ten years) or whether China's impact in computing research is going to start to grow at a much more modest rate (similar to many predictions of its economy growing at still fast yet more modest rates).

Research Creativity: Students, Faculty, & Academic Structure
Creativity, innovation, and “design thinking” have been some of the most overused buzzwords bandied about in the US business press over the last 3-5 years, and their use has especially accelerated in the few months since the passing of Steve Jobs. In computing research, as well as in industry, creativity and innovation are also important topics. These hard-to-measure attributes are what we all believe lead to “impact”, which is also hard to measure, but is what we are all after! Counting papers at top conferences or patents does not measure impact, but people (including me, above) tend to fall back on this counting exercise, as it is easy to measure.

Having interacted with many top Chinese students while here in China, at both MSRA (the top place to have an internship for a computer scientist in China) and at Tsinghua (the top CS department), I’ve gotten a chance to observe the level of creativity and innovation in these top students. We’ve also attracted some of the top design students in China to our lab (in addition to hiring top designers from the US and Europe). I’ve also been lucky to interact with the top Chinese research computer scientists (i.e., folks who already have their PhDs) at MSRA and at the universities.

The simple fact is, the level of innovation and creativity in this cohort is much lower than in similar cohorts in the US. And in fact, the ones that are the best on the “creativity” scale almost invariably are folks who received their PhDs in the US/Europe or worked in the US/Europe. This is not to say those who haven’t left China for their education aren’t doing good work – as I mentioned above MSRA is one of the top places in the world for CS research and the researchers there are publishing at the top venues, but many of the most successful of these researchers have spent years under the tutelage of computer scientists who were trained in the West – almost going through a 2nd PhD while working at MSRA.

Put simply, if you are educated in the Chinese system, from primary school through university, you have far less chance to practice being “creative” than if you were educated elsewhere. This is not a genetic trait (as many Chinese educated in the West have clearly shown), but a trait of the Chinese educational system, which is based on over a thousand years of Chinese culture.

There are many articles (link) on how the cultural underpinnings of the Chinese educational system do a good job with the basics (e.g., the students in Shanghai beat the entire world on the PISA test a year ago), but many here in China question whether the pervasive emphasis on memorization and test taking, and a cultural imperative that almost requires copying the teacher (link art article) and the past “masters”, leads to a population that cannot think “outside of the box” (link).

Again, this lack of creativity is cultural, and obviously there are folks who don’t fit the system and are creative and innovative (the art scene in China is growing by leaps and bounds). For many years, the top students in China have left the Chinese system for graduate school in the US. Although some of these students arrive in America brilliant and hard working, many do not show much creativity when they start. They have learned not to question the professor, or others in positions of authority, and they are used to being told what to do rather than coming up with ideas on their own. But many soon rise above this after a few years of practice and have turned into some of the top stars in the field (e.g., my own classmates at Carnegie Mellon, Harry Shum and Qi Lu, are now two of the top executives at Microsoft (links)).

I have personally advised students like this who have gone on to great computing careers, relying on their innovation and creative skills every day. But this was only after 5-6 years in the “American” higher education system. My colleagues have often told me of similar examples. Many Chinese are now also aware of this key difference in our educational systems. The latest trend among middle class and wealthy Chinese is to send their kids to the US for their undergraduate degrees or even their high school education (some 200,000? were studying in the US this year alone link).

Now this trend by itself would cause one to believe that China will overtake the US in computing as this massive cohort of students returns to China after earning their degrees. Although the “sea turtle” trend of returning to China after several years of working in the US continues, it doesn’t appear as common as some would believe. Many Chinese students become very accustomed to what is still an easier life in US cities and often choose to remain in America. A more important “glue” for these students might be the far more streamlined US corporate life (many Chinese companies are still fairly byzantine in their politics and structure, and corruption is still a major problem). Indeed, recent reports show that most wealthy Chinese are starting to secure homes and passports in the West, often for the educational opportunities outlined above, but also to avoid environmental degradation and corruption and to gain access to healthcare (link report).

Last spring I attended a major National Science Foundation workshop on computer science research collaboration with China (http://current.cs.ucsb.edu/nsf-uschina11/). Of the 80 attendees, over half were Chinese who are now professors at American universities. In computing research, many Chinese with US PhDs may be staying in the US for the prospect of working at a better university and with better graduate students than they could in China. Will this change soon?

One of the major differences I’ve noted between Chinese universities (and in fact Chinese organizations in general) and American universities is the power structure embodied in the academic hierarchy. American universities are hierarchical in that Full Professors make decisions about Associate and Assistant Professors, and Associate Professors in turn also make decisions (e.g., tenure) about Assistant Professors. But I’ve also noticed, in the top departments I’ve been in, that the more “senior” faculty understand that a lot of the innovation and best work occurs in the groups led by the “young” Assistant Professors, and we in fact “protect” them so as to allow them to better develop and get this great work accomplished (e.g., we do not give them a lot of tedious committee work, and we encourage them to teach advanced courses in their specialized areas rather than large, general undergraduate courses).

In Chinese universities, far more power and money are concentrated in the hands of the senior faculty. In many universities the Assistant Professors are just that – they assist a senior faculty member and have no true independent agenda of their own. In a fast-moving field like computer science, I believe this structure is bound to fail, as it cannot keep up with changes in technology that occur so rapidly – certainly more rapidly than the 10 years or more it will take a hotshot young faculty member to rise to the top of that hierarchy.

Today’s computing technology is nothing like it was 10 years ago! I believe this structural impediment makes it hard to name a computer science researcher at a Chinese university who would be called one of the top in the world in their subfield (other than the few famous names, e.g., Andy Yao – a Turing Award winner – who have been “imported” to Chinese universities).

This means that unless the Chinese universities change this system, it will take many years (15-20) before their CS departments even have a chance of being stocked from top to bottom with world-class computer scientists. And that assumes they start producing top scientists here in China (which hasn’t happened yet) or start importing them from abroad (only a few have come so far). Again, this is not to say there aren’t good people here already – there are plenty of good people working in Chinese universities. For example, Prof. Yuanchun Shi, my co-chair for UbiComp 2011 from Tsinghua, is doing lots of great research in her group. But these folks are spread thin, and not a single Chinese computer science department has the strength of even a top 25, or maybe even a top 50, computer science department in the United States. This will be hard to change anytime soon without a massive change in hiring practices, as well as in how those people are treated when they come on board.

Startups
Although academic computer science research in China isn’t yet all it can be and has some major impediments to its continued improvement, I believe the start-up scene is a bit healthier. I am not an expert on this, but I try to keep up by following the top China tech blogs and writers on Twitter (cite niubi, wolfegroupasia, tricia, kaiserkuo, affinitiy china, china law) and by paying attention to what is going on at the key start-up events (e.g., TechCrunch Beijing was the most recent such activity).

I’ve also spent time chatting with, and reading the work of, folks who do study the start-up scene closely, such as Vivek Wadhwa (@wadhwa), a professor at Duke and Stanford who studies high-tech entrepreneurship in Silicon Valley and around the world. Professor Wadhwa has commented recently on the healthy start-up scene he encountered while traveling in China (link), noticing that this culture is starting to come to terms with the need to try, fail, and start over again, which has fueled the amazing rise of Silicon Valley’s companies.

The conclusion I’ve come to from watching the Chinese start-up scene is that 1) it is vibrant, 2) some major early movers, especially on the Internet (e.g., Baidu, Alibaba, Sina), have already amassed fairly dominant positions in their niches, as happened in the US (though, as Yahoo has shown most recently, these positions can be lost easily), and 3) the amount of venture funding and the number of startups are both increasing rapidly.

In addition to these traditional spaces for innovation, there are other cool things happening in China that are an outgrowth of its manufacturing innovation. In particular, the entire Shanzhai market (link), which started with fake name-brand goods, including phones and purses, has quickly moved into making novel products. Again, these tend to be useful tweaks (e.g., multiple-SIM-card phones, new shapes, etc.) rather than major innovations. This might be where many of China’s creative engineers end up, as these types of folks may not have conformed to the rigid educational system well enough to get into the elite schools.

There is innovation in the Chinese computing startup world, but the type of innovation that happens in start-ups and in industry tends not to be the kind that will pay off for the entire computing field in 10 years (e.g., the invention of the Internet and many of the other computing advances I noted in the introduction to this article). Start-ups tend to take ideas that have already been floating around for a while and repurpose them for a new problem or incrementally improve on them. China’s start-ups are especially known for this incremental improvement strategy. As noted tech environmental crusader Peggy Liu (@shanghaipeggy) wrote today on Twitter, “China is not good at radical innovation, but it's great at tweakovation.” This quote exactly captures the type of activity happening most often in China’s startup scene.

This criticism of copying and tweaking rather than innovating is probably overblown, but it continues to be made in and about the Chinese computing industry. One of the biggest names in China tech funding, Kai-Fu Lee – founder of Innovation Works, former Google China head, former Microsoft Research Asia head, and all-around Chinese high-tech success story (from Taiwan) – now has the nickname in China of “Start-Copy Li” (check for proper translation) for the propensity of companies in his venture portfolio to simply copy a popular Western web site and give it some minor Chinese characteristics. For instance, there were hundreds of Groupon clones in China just a few months ago.

So although start-ups in China might be healthy, if a little less innovative than in the West, I do not think this is a fundamental problem for Chinese computing. The bigger question is whether they can really make the type of fundamental advances that in the past led the US computing industry to its dominance – and whether the Chinese can make those advances if they are not first taking place in academic research. I do not believe they can, and I therefore encourage the Chinese to keep upgrading their educational system and infrastructure – but with more than just increased funding. I believe the structure needs to change (see below).

Patents
One argument for China’s future dominance in the fundamental underlying technologies of computing is the large Chinese patent portfolio. The NY Times article pointed out how China has overtaken Europe in the number of patents filed and is catching up to the US and Japan. What the article fails to mention is that many, many people believe a large share of these Chinese patents are bogus (link Vivek, China Law blog) and come out of 1) a quota system that requires organizations to produce a certain number of patent filings per year, regardless of whether they are actually any good, and 2) a tendency to copy foreign patents, make minor changes to them, and then use these as trade barriers against Western companies trying to do business in China (link China Law blog). Leaving this type of information out of the NY Times article really distorts the patent story. When paired with the lack of strong intellectual property rights protection in China, the patent story leads one to doubt that China will be able to innovate in the future.

How China Can Reach its Computing Potential
My analysis above might leave you with the impression that I think China’s computing field is going nowhere fast. That is far from the truth. I think China will continue to improve in computing for two major reasons. First, computing in China will improve simply due to China's massive size: (1) among 1.3B people there are going to be a lot of great ones, no matter what barriers you put in their way, and (2) the domestic market by itself will be huge and thus a great opportunity! Second, the large investment in technology research funding coming from the government (growth on the order of 10%/year for 10 years) will allow a lot of researchers to carry out many ambitious projects. But I believe that instead of fearing China, we should see that China reaching its potential in computing could change the world in a very positive way, and that it is something we should try to help with.

China is Part of the Solution
Why do we want Chinese computing to succeed? Because the major problems that the US faces are also faced by the rest of the world, and especially by China. China is key to helping solve these problems, and by helping China’s research and education system in computing, we have a better chance of creatively solving them together. These are problems in:
  • Sustainability: maintaining the environment, and stopping global warming in particular;
  • Education: improving education for all, in both the basics and in creativity and innovation;
  • Healthcare: creating a healthcare system that will care for an aging population (North America, Europe, and China all suffer from this) as well as one that will serve all citizens at a reasonable price.
All three of these problem areas will have solutions that involve government, policy, and pricing. Yet they also are problems where major technology innovations, especially computing technology innovations, can make a major positive impact. By working together with China on these problems we can help improve the world.

World Lab
In light of this view, I’ve been working the last few months on trying to create a new, multidisciplinary research institute that is jointly housed at a major Chinese university and an American university. This World Lab will become known as the place for risk taking, breaking the mold, inventing the future, and tackling the major problems facing the world. We will apply a new methodology I term “Global Design” to find a balance between design and technology, between human-centered and technology-centered approaches, between academia and industry, and between Eastern and Western culture. The World Lab will push the boundaries of what is possible and invent the future today. This institute will help train the students and leaders of tomorrow’s universities and companies to be free thinkers who can create the solutions that society will need to solve these challenging problems.

I believe China’s rise in computing is remarkable, but its future is not assured. As a computer scientist I support helping China improve in computing because I believe it will help the world as well as the population of China. The problems are complex and success is not guaranteed, but together I think we can create a better world.


Disclaimer: The opinions set out in this article are those of James Landay and do not represent the opinions of the University of Washington, Microsoft Corp., Intel Corp, or anyone else (unless they decide to say so – which I’d appreciate).

Acknowledgements: Thanks to Ben Zhao from UCSB (@ravenben) for some of the data on top networking and systems conferences. Thanks to Frank Chen (@frankc), Lydia Chilton, Aaron Quigley (@aquigley), Robert Walker, and Sarita Yardi (@yardi) for helpful comments on this essay.


My Background


Unlike other computing academics who have commented on Chinese computing, I’ve not just dropped into China for a week or two here or there and developed an impression. I’ve actually been living here full time for 2½ years. In that time I’ve helped build a new research group at Microsoft Research Asia (link), taught a course at Tsinghua University (link), co-organized a major international computing conference (link), started a major computing lecture series/symposium on new uses of computing (link), traveled to many different universities to speak, visit, and meet the students and faculty, and attended several meetings of the top computing faculty in China (a few of which were also attended by their US counterparts; link: http://www.nsfc.gov.cn/Portal0/InfoModule_479/30695.htm).

I’ve also thrown myself into reading much of the press and blogs on innovation and start-ups in China, and I’ve tried to go to events here in Beijing on these topics when I could, chatting with others about them whenever I get a chance. As an expat you can easily meet some of the movers and shakers in this circle, even when living in a city of 20M+.

In addition to my time in China, I think I’ve also been lucky to have been at the center of some of the top places in computing over the last 20 years. I obtained my PhD in Computer Science at Carnegie Mellon University (link). CMU is ranked by most as one of the top departments in the world. I was a faculty member and received tenure in CS at UC Berkeley (link), another one of the world’s top departments. Until coming to China, I was a faculty member in Computer Science at the University of Washington (link), another top department. At UW we’ve built one of the top programs in the world in Human-Computer Interaction and Design (link), which is a field that is at the forefront of envisioning and building the future of computing technology.

I also have industrial experience. In addition to the last 2½ years at Microsoft Research Asia, unquestionably the best computing research organization in all of Asia, I was the co-founder and CTO of a Silicon Valley-based start-up (NetRaker) while on the faculty at Berkeley, and I ran a ubiquitous computing research lab for Intel in Seattle for 3 years (link). The researchers at the Intel lab invented many leading-edge technologies in that time, including the city-scale, beacon-based location capabilities that were originally found on the iPhone and every single smart phone since (link); activity inference technology that uses sensors to tell what physical activities you are doing in the real world (e.g., running, walking, biking, taking stairs, etc.), which is just starting to show up in products in its most basic form (e.g., the FitBit (link)); and other very cool technologies that you will hopefully see in products some day in the future.

So, I think I’ve got a pretty significant amount of experience in computing research at top academic institutions, industrial experience through my time at Intel and Microsoft, and start-up experience through NetRaker. Combined with my time and study in China, this puts me in a fairly strong position to comment on where China is in computing and where it might be going.


Human Computer Interaction Consortium (HCIC) 2009
I went to the HCIC '09 Workshop in Fraser, Colorado last week. It was a really great experience. UW's dub institute was recently admitted to this member-only organization, and the keys to this workshop are the small size (~75 attendees), the top researchers attending (about half are top/senior folks in the field and the other half are the top graduate students their departments have chosen to send), and sessions that run 90 minutes to cover ONE paper (yes, one! -- see Leysia Palen of the U. of Colorado ponder The Future of HCI at left). It also includes a lot of time for informal discussion while walking, taking a break, or skiing. I hadn't been since graduate school and had forgotten how great a venue it is. I had great chats with lots of folks, including my own former PhD advisor (Brad Myers -- on left below).

Part of the excitement of the meeting for me was seeing so many of my former graduate students taking an active part in the organization (Scott Klemmer), the talks (Jason Hong), and the discussion (Mark Newman, Jeff Heer, Scott, and Jason). It was also great to see one of my current students (Jon Froehlich) take it all in and see how he might be just like these former students soon. I felt like a proud father seeing his son ski down a hill for the first time (which I did indeed experience with both of my sons in a major way on this trip -- nothing like a 3-year-old skiing and a 7-year-old challenging himself on intermediate runs!).

It was great to present my talk on Activity Based Design to this group of strong researchers. I doubt the work would have had as good an audience had I presented it at CHI or another major conference, due to the parallel tracks.

Some comments and questions about HCIC: if you made the workshop more open to others, you'd lose some of the benefits of the small size. If you added more talks so that more of the folks there could participate, you would lose these great 90-minute sessions that you simply don't get at conferences. I guess we shouldn't muck with it. Any ideas?
          Digital Lives: Report of Interviews With the Creators of Personal Digital Collections        

Pete Williams, Ian Rowlands, Katrina Dean and Jeremy Leighton John describe initial findings of the AHRC-funded Digital Lives Research Project studying personal digital collections and their relationship with research repositories such as the British Library.

Personal collections such as those kept in the British Library have long documented diverse careers and lives, and include a wide variety of document (and artefact) types, formats and relationships. In recent years these collections have become ever more 'digital'. Not surprisingly, given the inexorable march of technological innovation, individuals are capturing and storing an ever-increasing amount of digital information about or for themselves, including documents, articles, portfolios of work, digital images, and audio and video recordings [1]. People can now correspond by email, have personal Web pages, blogs, and electronic diaries.

Many issues arise from this increasingly empowered landscape of personal collection, dissemination, and digital memory, which will have major future impacts on librarianship and archival practice as our lives are increasingly recorded digitally rather than on paper. Not only the media and formats but, as we discovered in our research into digital collections, also the contents of works created by individuals are changing, as creators exploit the possibilities afforded them by the various software applications available. We need to understand and address these issues now if future historians, biographers and curators are to be able to make sense of life in the early twenty-first century. There is a real danger otherwise that we will lose whole swathes of personal, family and cultural memory.

Various aspects of the subject of these personal digital archives have been studied, usually as aspects of 'personal information management' or PIM, such as work on the process of finding documents that have been acquired [2]. As Jones [3] says in his comprehensive literature review on the subject, 'much of the research relating to ... PIM is fragmented by application and device ...'. Important studies have focused on:

  • Email [4] [5];
  • The Web or Internet [6];
  • Paper or electronic retrieval [7].

The research reported in this article, which forms part of a longer-term study 'Digital Lives: research collections for the 21st century', takes a wider look at personal digital document acquisition and creation, organisation, retrieval, disposal or archiving, considering all applications and formats. 'Digital Lives' is a research project focusing on personal digital collections and their relationship with research repositories. It brings together expert curators and practitioners in digital preservation, digital manuscripts, literary collections, Web archiving, history of science, and oral history from the British Library with researchers in the School of Library, Archive and Information Studies at University College London, and the Centre for Information Technology and Law at the University of Bristol.

Aims

The Digital Lives research project aims to be a groundbreaking study addressing these major gaps in research and thinking on personal digital collections. The full study is considering not only how collections currently being deposited are changing, but also the fate of the research collections of the future being created now, and the implications for collection development and practice. We are seeking to clarify our understanding of an enormously complex and changing environment, engage with major issues, and evaluate radical new practices and tools that could assist curators in the future. Within this broad remit, this article focuses on the first stage of the digital archive process - individuals' own digital behaviour and their build-up of a digital collection. We wanted to find out:

  • How and why people use computers and other ICT (Information and Communications Technologies);
  • How this usage is creating a digital collection;
  • Within the narrative of their use of ICT, how they learned to use various software (system and applications) and hardware, and what relevant training they have had;
  • How they acquire, store, access and generally manage their digital archive;
  • What the relationship is between digital and non-digital components of hybrid personal archives.

Methodology

Methods

We used in-depth interviews to explore the views, practices and experiences of a number of eminent individuals in the fields of politics, the arts and the sciences, plus an equal number of young or mid-career professional practitioners. Questions covered the subjects listed above, with the first question addressing the history of the interviewee's experience with computers and ICT. The narration of this account often touched on topics such as training, manipulating files, backing up and transfer, and collaborative work, and thus offered information contextualised in more general experiences, attitudes and perceptions.

Sample

For this qualitative phase of the research, a wide spectrum of respondents, in terms of ages, backgrounds, professional expertise, and type and extent of computer usage, were interviewed. This was to elicit a diverse range of experiences and behaviours. The 25 interviewees included respondents who were:

  • "Established" (BL recruited), in that their works would already be of interest to institutional repositories: an architect, authors (including a Web author), a playwright, a Web designer, a molecular biologist, a geophysicist, a crystallographer, a politician, a photographer, and a knowledge management expert;
  • "Emerging", in that their works may develop in a way that may be of interest to such repositories: a digital artist, a theatre director, lecturers in cultural studies and education, a music publisher, and lecturers in participatory media, along with postdoctoral researchers and PhD students in the fields of English literature, human-computer interfaces, psychology, archaeology, information science and cultural studies.

Findings

During the course of the research a fascinating variety of experiences, behaviours and approaches were uncovered, ranging from the digitisation of scientific records, to forwarding emails to oneself so that the subject line could be changed, to filming a theatre production and then projecting it onto the surrounding environment. Overall, the breadth of disciplines, backgrounds, ages and experiences of the individuals interviewed gave such contrasting and varied accounts that it is almost impossible to generalise findings at this preliminary stage. However, the narratives do provide excellent descriptions of a whole range of 'digital' behaviours that will be very useful in drawing up a questionnaire survey to be undertaken in the next phase of the research.

Context of Use

Despite 'home' usually being where an individual's use of computers first developed, later usage seems to be dominated by work. Nearly all respondents have a collection of digital photographs, and a minority have a blog or page on a social networking Web site, but overwhelmingly the documents and other 'electronic artefacts' produced are work-related, and the work environment was the one thought of first when answering the question. An important point which has implications for archiving is that, for some people, much work is undertaken remotely – directly from a server. This includes:

  • blog writing (where no local copy is kept);
  • online questionnaire construction and use (e.g. Survey Monkey);
  • using software applications remotely, for example for statistical analysis, where only the resulting dataset is kept;
  • server-side email accounts.

There was a surprising enthusiasm for updating technology (although, of course, the sample was biased towards those who have a 'digital collection'). Only one respondent showed any reluctance – an interviewee who has been using C120 standard cassette tapes to record a diary for 40 years (he played one to us!) and intends to continue doing so. Equally, he said he did not wish to digitise his accumulated collection. One unexpected finding was that, unless specifically asked to discuss 'non-work' digital artefacts, respondents did not readily include discussion of them in their accounts. In fact, there was far less convergence between professional and work-related items on the one hand, and non-work and leisure items on the other, than had been expected. There was little evidence of a division of computers between home and work use (though there was such a division among family members – with each person generally having their own log-in and/or separate directories). The organisation of documents seemed always to reflect the separation of work and leisure.

IT Skills, Training and Support

Several key points arose. First, the norm is to be self-taught, even where people's jobs involve very sophisticated application of computer software. This is mainly because of early fascination with computers or the influence of parents or older siblings, but also out of necessity. Often a PhD topic would require the use of a particular application, which students would take upon themselves to learn for the sake of their studies. This ranged from citation-referencing software such as EndNote to high-level computer programming.

Second, where respondents were not (or only partially) self-taught, the training given was often informal, sporadic or ad hoc. For example, ongoing help was provided for one of the BL interviewees by a variety of family and friends. In another case, a digital collection was managed by one of the interviewee's sons. In other cases a relationship was built up between the computer user and a particular individual (such as 'Reg, the computer man') who was called as required, and in whom all trust was placed. Despite many respondents working in an environment in which IT support was readily available, such support did not spring to mind when respondents spoke of their needs in this area. When it was mentioned, comments were generally negative: 'they don't know anything' or '[they are] generally unhelpful'. A leading information manager called IT support workers 'an interesting breed', explaining that 'they don't understand that non-IT people have skills. IT and librarians will always clash, because IT people are always concerned with security and librarians with information-sharing'. However, one respondent – perhaps significantly, also an IT expert – found them 'very good', having worked directly with them in his job.

Finally, in the cases where informal help was provided, many misconceptions were apparent, although whether these were due to the quality of advice or teaching given is not known. To give one example, a Web author did not realise that her novel, written as a serial blog, and her emails were stored remotely from her own computer, and that without an Internet connection she would therefore be unable to access them. She has never been unable to do so, as her broadband Internet connection is on all the time. She had no idea of managing email messages and did not know that these were retained after being opened.
In this respect, the current study supported the findings of Marshall and colleagues [8], whose study of how people acquire, keep, and access their digital 'belongings' showed 'a scattered and contradictory understanding of computers in the abstract' (p.26).

Filing, Storage, Transferring and Deleting Files

Filing and Storage

No obvious pattern has emerged so far. The main points are, firstly, that documents are stored in folders that reflect either chronological creation or topic, depending on appropriateness. The variety of material and general work determined this decision. For example, where documents are related to one theme – such as repeated experiments – they are more likely to be filed by date, or in folders by date range. The system's automatic allocation of 'date modified' was considered not to be of use in files where a date is important, because often the important date is the one on which the experiment or event took place, not the date on which the document was modified. Many other people also tend to put the date manually as part of the file name, even though the system automatically records and displays the date when a document was last modified. This is for four main reasons:

  • Some interviewees do not trust the automatic date allocation provided by the system;
  • Others tend to make tiny modifications to a document not classified in the writer's mind as a modification. One interviewee, for example, declared that occasionally he would read a file which had been completed and might find a small error or wish to add 'just one word'. This would not merit changing the date on the file name. So although the system would record the latest but minor document change, the document was effectively 'completed' earlier, since the original date was deemed to make more sense in the context of the document history;
  • Convenience – where the date is displayed as part of a file name it is not necessary to display 'details' in the system listing;
  • A document in progress can be saved under the same file name several times, with only the date in the name being changed. This is a way of saving individual drafts, both to keep a record of changes by date and as a kind of back-up in case the working version becomes unusable for any reason (a brief sketch of this convention follows the list).
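
Purely as an illustration of this last point – not a tool any interviewee described using – a minimal sketch of such date-stamped draft saving might look like the following in Python (the file and directory names are hypothetical):

    from datetime import date
    from pathlib import Path
    import shutil

    def save_dated_draft(working_copy, drafts_dir="drafts"):
        # Copy the working file to a name embedding today's date,
        # e.g. report.doc -> drafts/report_2008-05-14.doc, keeping each
        # day's draft as both a record of changes and an informal back-up.
        src = Path(working_copy)
        out_dir = Path(drafts_dir)
        out_dir.mkdir(exist_ok=True)
        dest = out_dir / f"{src.stem}_{date.today().isoformat()}{src.suffix}"
        shutil.copy2(src, dest)  # copy2 also preserves the file's timestamps
        return dest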

Another broad finding was that collections appear to grow organically – instead of moving completed files and folders to a less visible position on a computer, other folders are simply created next to existing resources. However, a minority of respondents do delete files once they are backed up elsewhere, or create folders into which they relocate folders of completed documents, so that there are not too many folders at the top level. The study also found similar results to Jones et al. [9] in that people replicate file structures where similar filing is required for different projects. An obvious example of this is a lecturer who has a folder for each course, within which are subfolders containing, respectively, PowerPoint slides, lecture notes, current student work, etc.

A theme that emerged with regard to email in particular was that documents and other digital artefacts accumulate unintentionally. There were examples of email archives containing literally thousands of out-of-date messages, kept only because it was less effort to retain than to delete them. According to Marshall [10], 'most personal information in the digital world is not collected intentionally and thus does not form a coherent collection; instead heterogeneous materials accumulate invisibly over time' (p5). Whilst the present study's results would not suggest that 'most' information is collected inadvertently, this may be true for emails and attendant attachments.

Another 'theme' or 'generality' that came out of the interviews was that a change of computer is often the motivating force and the main way in which files are removed from an 'active' location. In other words, the act of transferring files from one computer to another includes that of weeding files, whereby those that are no longer active are either discarded with the old machine or, where the hardware is retained, kept in long-term storage.

Finally, there were few examples of documents not being organised into folders and directory hierarchies, but retrieved or accessed by keyword only – a finding that has echoes in the work of Jones and colleagues [9] (p. 1505), who found that all but one of their sample of 14 professionals and academics refused to pilot a search-based system for retrieving information without organising it hierarchically. Our research found that folder hierarchies represented 'information in their own right' and that 'Folders … represented key components of a project and thus constituted an emerging understanding of the associated information items and their various relationships to one another'. This supports the approach that has been adopted by the Digital Manuscripts Project at the British Library, for example, where the value of contextual information beyond simply the digital files has been emphasised [11]. In the present study, each folder and subfolder system formed discrete units of work themed around time periods or different tasks, such as teaching, research, etc. An aspect of folders not mentioned by Jones was the facility to browse. Lecturers said that they sometimes needed material given in a particular talk for something else. Using a traditional folder system they could browse filenames both to find specific files and for inspiration – there were occasions where, in at least one case, a file existed that the interviewee had forgotten and was only remembered on seeing the file name.
Ironically, of those who did not use folders and hierarchies, one was a computer expert, adept at information retrieval, and the other a novice. The latter did not appear to know about folders or hierarchies, and had much help from one of his sons in indexing his files, which were integrated into a bigger collection of cassette tapes, hard copy articles, letters, etc.

Transferring or Deleting Files

Almost all respondents said that they deleted fewer files 'these days' because there were not the electronic storage problems that marked earlier computer usage, with some exceptions such as institutional file storage limits (see email below). For many it was actually less effort to simply keep a file than to delete it. However, people were aware that they were creating possible problems in the future in the form of 'document overload' – in other words, having too many files and directories to navigate easily to active documents. As mentioned above, there were ways of obviating this problem.

With regard to the transferring of files, many respondents generally only remember their most recent computer transfer, or their behaviour in this respect over the last three or four years. When asked about periods before this, they usually prefix answers with the words 'I must have ...' and sometimes puzzle about what happened to 'all those floppy disks'. Many documents appear to have been lost when changing from an old to a new computer, where the former was then sold on or discarded. There were cases, though, where old computers were kept simply for the files they contain, although subsequently retrieving the files and migrating them to newer computers was problematic, as in some cases the only way to remove a file was on a floppy disk, which in turn could not be read by a newer computer. There were cases of this within the 'eminent' sample, whose collections may be deposited at the BL or a similar institution, where the interviewees were told of the capacity of the BL both to extract files and to access corrupted or deleted material through computer forensics.

Backing-up

Back-up policies appear to relate to three major factors:

  • The value placed on a document or other digital artefact: This aspect is discussed more with regard to archiving, below;
  • Confidence in hardware and software: Interviewees most familiar with the technology seemed to be those who were more diligent about backing-up and more mindful of the possibility of document loss. However, almost all respondents backed up to a certain extent, even if in some cases this was only done at infrequent and irregular intervals;
  • Knowledge about methods for backing-up: A number of interviewees simply backed up their documents by saving them in a different directory on the same computer, thus not protecting themselves against a computer 'crash' or theft. Others did back up to an additional hard drive or to removable media, but these were often stored in the vicinity or same room as the computer, and so would be destroyed in the unlikely event of a fire or flood – or even house theft.

In a minority of advanced-user cases, synchronous or automatic back-up is undertaken, and two interviewees have the new Mac system with its 'Time Machine' function, enabling users to restore files and folders to their status and 'position' on any given date. Nevertheless, even this method on its own is vulnerable if there is no backing up to an additional, separate store. Other back-up methods included storage media such as external drives, floppy disks, CDs or DVDs, and alternative computers. However, email was also used quite extensively. This is discussed more below.

Archiving

As Jones [3] points out, 'decisions concerning whether and how to keep … information are an essential part of personal information management'. Although Jones was not necessarily talking about information to be kept in the long term, clearly archiving files – and deciding whether or not to keep particular documents – is also a critical aspect of PIM. Our study found, as with earlier work by Ravasio et al. [12], that saving work that was completed (even if not actually using the word 'archiving') was an important part of working with digital documents. The main points that emerged are that the decision to archive appears to depend on both affective and utilitarian factors. These were:

  • The value placed on the digital artefact professionally, academically (as with, say, a major dissertation), or emotionally (say, in the context of correspondence with someone which, though retained, is no longer read; for example, one of the interviewees reported having notes from student days which survived as obsolete WordPerfect files but which were effectively unreadable for the owner);
  • The possibility that a document might be useful in the future;
  • As a back-up for a hard copy archive.

The Value of a Digital Artefact

Regarding the first of these points, the time and/or emotional investment in producing a document appeared to be a major factor in its retention, regardless of whether it would ever be needed again. Marshall et al. [8] (p30) suggest that 'value' can be calculated using five factors:

  • 'demonstrated worth (e.g. how often an asset has been replicated);
  • creative effort (e.g. the asset's genre and mode of creation);
  • labour [sic] (e.g. time spent in creation);
  • reconstituteability (...source, stability, and ... cost); and
  • emotional impact (a factor which may be influenced by with whom items have been shared)'.

For the current project, creative time and effort appeared to be interlinked to the extent that they formed one factor. In many cases the 'emotional impact' was also inextricably linked with the time and effort expended in creating the artefact, although this was also influenced by the contextual factors surrounding its creation and history. The work and emotional effort going into a project defined it as an important statement of achievement, and thus heightened its value and guaranteed its continued existence. Sometimes, in the case of key artefacts, there would be back-up copies (for example, on CD) just to ensure survival. Indeed, some respondents look at their archive as a reflection of their life's work, and keep items of no further practical value.

Logically, one might assume that it would be hard copy or 'physical' artefacts that would be retained as such a testimony. Much hard copy material is indeed kept because of the time or effort invested in its creation, or for representing important points in the creators' lives. However, the reluctance to dispose of electronic files indicated that they too constituted an important part of a professional or academic portfolio. Of course, in the case of many (increasingly prevalent) digital files such as animated images, audio and video, it is not meaningful to print out a hard copy.

It has long been known from conventional (neither digital nor hybrid) archives that people retain items for their sentimental value and as biographical records or pointers to a person's individual or family life story. Thus, while people differ in how many items are retained for this kind of reason, the importance of at least a modest degree of personal archiving is widely and strongly felt. Etherton [13] has noted in an illuminating account of the role of archives: 'Families very often keep personal records of people and events such as family photograph albums and baby albums which record the growth of and achievements of a child's early years', and argues convincingly that such things play an important role in a person's sound psychological well-being, helping to provide individuals with a sense of belonging and a sense of place. So much so that social work and medical professionals working with children in care and with terminally ill young mothers ensure that life story books, memory boxes and oral history recordings are prepared in order to provide 'fundamental information on the birth family and on the early details of the child'. It is perhaps not surprising that this general need for personal memory is (to varying degrees) also felt by academics and professionals in respect of their careers as well as of their home life. Moreover, such a need has begun to embrace digital objects as well as non-digital ones, as is borne out by some of the comments of interviewees.

On several occasions interviewees showed researchers either hard copy or electronic files which strongly reminded them of the contextual aspects surrounding the creation or acquisition of those files. Examples of such contexts were working with friends, undertaking specific activities, or being constrained by the technology. For example, a geophysicist showed us printouts of his attempts to model an ice block melting during his PhD research. This evoked memories of limited computing power and memory (the model could not be visualised on-screen or stored locally), alongside nervous anticipation about the results of his early research efforts.
Programs had to be run overnight from departmental computers and the results retrieved from a remote printer in the morning. Many examples were encountered where creative effort and labour lent considerable value to documents, even where they were now, to all intents and purposes, without any practical worth. For example, WordPerfect files were not deleted even though they may have been unreadable to their creator. It needs to be borne in mind, however, that the retention of obsolete files is not necessarily emotional or irrational. Once a file is destroyed it is gone, but in the case of an obsolete file there remains some hope of recovery. In science, for example, datasets and records of analyses need to be kept in order to allow colleagues the possibility of re-analysis. A scientist who deletes a file (obsolete or not) might expose himself or herself to the criticism that he or she has actively denied other researchers the possibility of re-analysis: a gradual obsolescence might be deemed more acceptable. With storage capacity so much larger now, it is easier to retain documents. In fact, as mentioned above, many interviewees simply did not bother to delete documents that they no longer needed. Thus the long-term existence of a document no longer necessarily implies that it has been invested with considerable value.

Archiving for Future Use

Another reason for archiving is that documents may be required in the future. Much of the literature on this [3] discusses acquired documents in this context. The research reported here, however, shows that created documents are also archived for their potential later usage in different contexts. Not surprisingly, lecturers kept Word and PowerPoint files for later reference. They included material for courses no longer taught or current, just in case it was needed at a later stage – even to illustrate historical points (where it is the contemporary nature of the discipline that matters). Even student essays were kept (partly) for this reason. One interviewee said that his undergraduate essays contained 'decent' reviews of past literature that could be 'plundered' for later use. Academic writers also found that they could 'recycle' parts of old articles. For example, a conclusion to one article could be used in a section outlining the author's own prior research in a further paper. Of course, quite often interviewees could not specify exactly how a document would be used later, but still felt that, as long as there was a possibility that it might be of use, it was worth keeping. Email messages that were never, or no longer, 'actionable' were also stored.

Hard Copy Documents

Hard copy is also an option for some interviewees, with the hard copies often being generated by others on their behalf (e.g. by receiving hard copies of journal articles). Final versions of articles are particularly likely to be printed out. One scientist interviewed has his journal articles in electronic and hard copy form, and binds the latter every time he reaches another 30 publications. In some cases, the hard copy is actually more extensive than the digital back-up. For example, a playwright whom we interviewed prints his work every day, reads through it and makes any changes by hand. The next day he makes the adjustments on his computer, saving the document under the same name and thus overwriting his previous version. He keeps and files all his printouts and thus has a comprehensive record of all stages of his work.
In addition, every 30 days or so, he does create a new electronic version (i.e. by changing the filename).

Communication and Social Networking

The exploration of the use of email proved to be one of the most fruitful and interesting areas of study. Usage has gone far beyond the original purpose of email (i.e. communicating with others), and email is being appropriated in various innovative ways by respondents. These include:

  • Sending reminder messages to oneself;
  • Using email as a file storage system, whereby individuals attach a document to a message and then send it to themselves as a form of back-up (or send it to themselves using a different account which may have a bigger storage capacity) – see the sketch after this list;
  • Using email as an appointments diary (where the presence of messages in an inbox acts as a reminder);
  • Forwarding received messages to oneself, at the same account, as a way of changing the subject line (it is not generally possible to edit this as an inbox entry without forwarding it) to reflect more accurately the message content or to highlight the content of any attachment;
  • As a record of work or contacts.
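
As a purely illustrative sketch of the file-storage practice in the second bullet above (not a description of any interviewee's actual set-up), mailing a file to oneself as an informal off-machine back-up might look like this; the address and SMTP host are placeholder assumptions, and a real server would normally also require authentication and TLS:

    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    def mail_file_to_self(path, address, smtp_host):
        # Build a self-addressed message with the file attached, so a copy
        # of the document ends up stored on the mail server as well.
        msg = EmailMessage()
        msg["From"] = address
        msg["To"] = address
        msg["Subject"] = f"Back-up: {Path(path).name}"
        msg.set_content("Self-addressed copy kept as an informal back-up.")
        msg.add_attachment(Path(path).read_bytes(),
                           maintype="application", subtype="octet-stream",
                           filename=Path(path).name)
        with smtplib.SMTP(smtp_host) as server:  # real use: add login/STARTTLS
            server.send_message(msg)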

Within the topic of email, the creation and usage of different accounts was particularly instructive:

  • Many respondents have several accounts, used respectively for online shopping, social mail, work, etc. One respondent and his close colleagues had even set up Google accounts for the sole use of their collaborative group;
  • Some 'defunct' accounts are maintained to keep in touch with people who may not know new contact details, and so are occasionally checked;
  • Some respondents are also prepared to sacrifice the kudos that an academic address gives them for the convenience of having a better system with which to work;
  • Some (especially younger) respondents do not like using work or university email accounts for social messages, and opened Web-mail accounts to avoid this;
  • One respondent even prefers his Gmail account to his university one for formal use. He begins any new formal communication with his university account, to confirm his professional credentials. However, once people know who he is, he is happy to relinquish the 'status' that his university's account accords him and switches to his Googlemail account;
  • Saving is often a function of system limits (e.g. maximum inbox size); and several respondents reported that free Web-mail services (Gmail, Yahoo etc.) offered far more 'space' than work mail (younger respondents especially tended towards this view).

Curatorial Issues

The research and the nature of the personal collections and digital behaviour described above clearly have significant implications for large institutional repositories such as the British Library. Large, hybrid collections of contemporary papers, partly generated using computers, including eMANUSCRIPTS and eARCHIVES, have resulted in personal collections of substantial quantity and complexity in terms of version control of documents, archival appraisal and selection. Our findings so far indicate that these issues will remain pertinent in dealing with personal hybrid and digital collections, although they may need to be tackled in different ways. For example, more of the management, appraisal and selection of archival material from personal digital collections may need to be carried out by creators in partnership with repositories in their lifetimes, rather than retrospectively. Moreover, repositories may need to deal with a greater variety of digital formats as part of a continuous decision-making process and workflow, rather than parcelling out different aspects of personal collections to format specialists.

From a different perspective, curators have also often dealt with intermediaries authorised by creators to control their archives (often posthumously). However, with institutions and commercial service providers offering the creators of personal digital collections services in their lifetime such as email, social networking and file storage, the decision to pass control of potential archive material to intermediaries is sometimes taken less advisedly, and may lead to further complexities. Digital Lives research will be examining issues concerning rights and storage services in more detail.

Finally, there is one aspect that has not been mentioned in this report but constituted part of our qualitative research among creators of personal digital collections: that of attitudes to rights issues, including privacy, personal control and misuse of information, and copyright. Again, these are issues traditionally encountered by repositories, which have in the past balanced concerns about privacy, protection of sensitive information and intellectual property on the part of archive creators with that of access for researchers. Our interviewees among the creators of personal digital collections seemed relaxed on the whole about these issues, or otherwise to have given them little thought. In a context of legal compliance, it may also be appropriate to consider issues of cultural change when thinking about how rights issues are to be handled by archive creators and repositories in the future. Again, this is a discrete area of research that is receiving more attention in the Digital Lives Project.

Conclusion

Issues of acquiring, creating, manipulating, storing, archiving and managing personal digital archives are extremely complex, and few patterns emerge from the interviews described. This may be because there are many distinct styles of conducting digital lives, or because the scope of what is meant by digital lives lacks adequate definition at present. The sample was made up of people with widely differing backgrounds who used computers in a great variety of ways. Our research found significant differences in:

  • Methods and places of storage;
  • Familiarity and expertise with hardware and software;
  • Understanding of the meaning of a 'personal digital collection' (respondents' own views of this concept formed part of the project, and so it was not incumbent on the researchers to provide more than a general explanation);
  • Individual perceptions of what and especially how much is worth keeping (as is the case with conventional archives too);
  • Relative values attached to digital and non-digital items.

While not yet yielding any general conclusions, the study has already highlighted for the researchers some of the issues relating to the deposit of personal digital collections with which, increasingly, repositories will be faced. With further analysis and dissemination, the project findings will greatly inform the British Library and other repositories. One such issue, for example, is the blurring of the distinction (at least, in the interviewees' views) between what is created or stored online and off-line, and a certain misunderstanding about this issue. This was particularly true of email, where some respondents did not know whether their messages were stored on their own computer or remotely, and, indeed, had never given it any thought. A certain ambiguity was also revealed regarding 'back-up', 'storage' and 'archive'. In part, this was just a question of terminology, but vague areas were revealed where, for example, back-ups for active documents – often several draft versions – were retained permanently because this was easier than deleting them, even though such dormant and somewhat repetitive documents were not considered part of an archive. Indeed, many interviewees did not regard even their long-term retained documents as an 'archive' of enduring value.

The interplay between digital and non-digital artefacts, and individual artefacts having both digital and hard copy elements, is becoming a big issue for repositories. Our research showed that hard copy and digital versions of works were not always the same (e.g. in some cases a printout preceded further modifications, which remained in electronic form only), mirroring observations made by the Digital Manuscripts Project at the British Library. There were also examples of major drafts being written only in hard copy, with later or final drafts being committed to computer.

This article – and the research to date – has elicited and highlighted some of the major issues. In the next phase of the research, we will attempt to quantify some of the behaviours outlined here, and to explore in more depth the personal digital collection practices of various specific groups by means of a large-scale online survey. This will help to delineate commonalities and differences, to elucidate how they came about, and to articulate implications for library and – in particular – archival professionals. Two related aspects that we are keen to begin to explore and characterise are the questions of:

  • what people want and expect to happen to their digital collections at the end of their lives;
  • what motivates people to share digital files during their lives, which types of files individuals share (or retain without sharing), and in what circumstances.

Finally, a provisional curatorial response to the tentative conclusions of this paper may include the following points:

  • There is unlikely to be a 'one-size-fits-all' approach to personal digital collections;
  • It is important for curators and archivists to be able to deal with and advise on multiple storage media and file formats, including retrospectively;
  • Creators may benefit from guidance regarding appraisal and selection;
  • Recordkeeping tools may be helpful, but they need to be flexible to support individual requirements and to maintain the character of personal digital collections;
  • Creators may benefit from advice to help determine where elements of their personal digital collections are located and who controls them, and from access to secure and accessible storage, in order to retain control of their personal digital collections;
  • Creators should be advised to migrate information to fresh media to avoid having their digital content marooned on obsolete media, but at the same time encouraged to retain even obsolete media since, increasingly, new capture and recovery techniques are enjoying at least some success.

Acknowledgement

The 'Digital Lives' research project is being generously funded by the Arts and Humanities Research Council (Grant number BLRC 8669). Special thanks are due to Neil Beagrie, who conceived of the idea for the project and was its Principal Investigator until leaving the British Library on 7 December 2007. Members of the research team who arranged and attended interviews (in addition to the authors) were: Jamie Andrews, Alison Hill, Rob Perks and Lynn Young, all from the British Library. The authors also wish to thank the interviewees themselves for their valuable contributions to the project.

References

  1. Beagrie, N. (2005). "Plenty of room at the bottom? Personal digital libraries and collections", D-Lib Magazine, Vol. 11 No. 6. http://www.dlib.org/dlib/june05/beagrie/06beagrie.html (accessed 8 April 2008).
  2. Bruce, H., Jones, W. and Dumais, S. (2004). "Information behaviour that keeps found things found", Information Research, Vol. 10 No. 1. http://InformationR.net/ir/10-1/paper207.html (accessed 8 April 2008).
  3. Jones, W. (2004). "Finders, keepers? The present and future perfect in support of personal information management", First Monday, Vol. 9 No. 3. http://www.firstmonday.org/issues/issue9_3/jones/ (accessed 3 March 2008).
  4. Bellotti, V., Ducheneaut, N., Howard, M. A. and Smith, I. E. (2003). "Taking email to task: the design and evaluation of a task management centered email tool", ACM Conference on Human Factors in Computing Systems (CHI 2003), April 5-10, Fort Lauderdale, pp. 345-352.
  5. Whittaker, S. (2005). "Collaborative task management in email", Human Computer Interaction, Vol. 20 No. 1 & 2, pp. 49-88.
  6. Tauscher, L. and Greenberg, S. (1997). "Revisitation patterns in World Wide Web navigation", Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, pp. 399-406. New York: ACM Press.
  7. Whittaker, S. and Hirschberg, J. (2001). "The character, value, and management of personal paper archives", ACM Transactions on Computer-Human Interaction, Vol. 8 No. 2, pp. 150-170.
  8. Marshall, C., Bly, S. and Brun-Cottan, F. (2006). "The long term fate of our digital belongings: toward a service model for personal archives", Proceedings of IS&T Archiving 2006 (Ottawa, Canada, May 23-26, 2006), Society for Imaging Science and Technology, Springfield, VA, pp. 25-30.
  9. Jones, W., Phuwanartnurak, A. J., Gill, R. and Bruce, H. (2005). "Don't take my folders away!: organizing personal information to get things done", CHI '05 extended abstracts on Human Factors in Computing Systems, pp. 1505-1508. New York, NY, USA: ACM Press.
  10. Marshall, C. C. (2006). Maintaining Personal Information: Issues Associated with Long-Term Storage, Preservation and Access. http://www.csdl.tamu.edu/~marshall/PIM%20Chapter-Marshall.pdf (accessed 29 March 2008).
  11. John, J. L. (2006). Digital Manuscripts: Capture and Context. http://www.dcc.ac.uk/events/ec-2006/EC_Digital_Manuscripts_Jeremy_John.pdf (accessed 14 April 2008).
  12. Ravasio, P., Schär, S. G. and Krueger, H. (2004). "In pursuit of desktop evolution: user problems and practices with modern desktop systems", ACM Transactions on Computer-Human Interaction, Vol. 11 No. 2, pp. 156-180. http://portal.acm.org/ft_gateway.cfm?id=1005363&type=pdf (accessed 29 March 2008).
  13. Etherton, J. (2006). "The role of archives in the perception of self", Journal of the Society of Archivists, Vol. 27 No. 2, pp. 227-246.

Author Details

Peter Williams
Research Fellow
School of Library, Archive and Information Studies, University College London
Email: peter.williams@ucl.ac.uk
Web site: http://www.ucl.ac.uk/slais/research/ciber/people/williams/

Katrina Dean
Curator of history of science
The British Library
Email: Katrina.dean@bl.uk

Ian Rowlands
Senior lecturer
School of Library, Archive and Information Studies, University College London
Email: i.rowlands@ucl.ac.uk
Web site: http://www.publishing.ucl.ac.uk/staff-Ian_Rowlands.html

Jeremy Leighton John
Principal Investigator of the Digital Lives Research Project
Curator of e-manuscripts
The British Library
Email: Jeremy.john@bl.uk

Article Title: "Digital Lives: Report of Interviews with the Creators of Personal Digital Collections"
Authors: Pete Williams, Katrina Dean, Ian Rowlands and Jeremy Leighton John
Publication Date: 30 April 2008
Publication: Ariadne Issue 55
Originating URL: http://www.ariadne.ac.uk/issue55/williams-et-al/




          Book: Ways of Knowing in HCI (Kellogg and Olson, editors) Chapter on SNA and HCI        
A new book edited by Wendy Kellogg and Judy Olson is now available.  Ways of Knowing in HCI is a collection of chapters on the subject of methods and theories that frame Human Computer Interaction studies. I co-authored a chapter in the book with Professor Derek Hansen from Brigham Young University on the role social network...
          If you like Hidden Figures by Margot Lee Shetterly        
Hidden Figures by Margot Lee Shetterly

This readalike is in response to a customer's book-match request. If you would like personalized reading recommendations, fill out the book-match form and a librarian will email suggested titles to you. Available for adults, teens, and kids.  You can browse the book matches here.

Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race by Margot Lee Shetterly
Before John Glenn orbited the earth or Neil Armstrong walked on the moon, a group of dedicated female mathematicians known as "human computers" used pencils, slide rules and adding machines to calculate the numbers that would launch rockets, and astronauts, into space. Among these problem-solvers were a group of exceptionally talented African American women, some of the brightest minds of their generation. Originally relegated to teaching math in the South's segregated public schools, they were called into service during the labor shortages of World War II, when America's aeronautics industry was in dire need of anyone who had the right stuff. Suddenly, these overlooked math whizzes had a shot at jobs worthy of their skills, and they answered Uncle Sam's call, moving to Hampton Virginia and the fascinating, high-energy world of the Langley Memorial Aeronautical Laboratory. Even as Virginia's Jim Crow laws required them to be segregated from their white counterparts, the women of Langley's all-black "West Computing" group helped America achieve one of the things it desired most: a decisive victory over the Soviet Union in the Cold War, and complete domination of the heavens. (catalog summary)
 

Have you read our Rappahannock Read, Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race by Margot Lee Shetterly? If you have and you're looking for more titles like Hidden Figures, check these out! These selections include: history of the Space Race and women's achievements in science and other fields of STEM.


 

The Astronaut Wives Club: A True Story by Lily Koppel
As America's Mercury Seven astronauts were launched on death-defying missions, television cameras focused on the brave smiles of their young wives. Overnight, these women were transformed from military spouses into American royalty. They had tea with Jackie Kennedy, appeared on the cover of Life magazine, and quickly grew into fashion icons. Annie Glenn, with her picture-perfect marriage, was the envy of the other wives; platinum-blonde Rene Carpenter was proclaimed JFK's favorite; and licensed pilot Trudy Cooper arrived on base with a secret. Together with the other wives they formed the Astronaut Wives Club, meeting regularly to provide support and friendship. Many became next-door neighbors and helped to raise each other's children by day, while going to glam parties at night. As their celebrity rose—and as divorce and tragic death began to touch their lives—they continued to rally together, and the wives have now been friends for more than fifty years. (catalog summary)
 



          Stanford GN Project: A Place Among The Stars        
I've long been fascinated by the Stanford Graphic Novel Project. As far as I can tell, it's a unique endeavor for a non-art school in that it asks a group of students to create an entire, full-length book in just one year. There are further restrictions: it has to be based on real events and have some kind of social justice component. The results have frankly been all over the place, with 2010's Pika-Don (about a man who survived the bomb dropping at both Hiroshima AND Nagasaki) by far the most successful and the best-looking. The time constraints, the fact that many of the students are raw beginners, the distribution of labor (which inevitably is not equal) and many other factors have led to the books being more of a curiosity than something worth reading on their own.

The latest book I've received, 2014's A Place Among The Stars, is right up there with Pika-Don in terms of overall success. Interestingly, it predated the success of the book (and later smash hit film) Hidden Figures, which was about another little-known aspect of NASA, its "human computers" who were women and largely African-American. The book from Stanford is about the Mercury 13: thirteen female pilots who were given an opportunity to train and be evaluated for the possibility of going out into space in the early 1960s. It's an incredibly compelling narrative, and it's unsurprising and unfortunate that it's not common knowledge--especially given the rampant sexism in society and the quasi-military culture of NASA. Another interesting thing about the book is just how little the artists behind the project had to go on with regard to deep knowledge of the program. There were a few reference books, but this is something that really demands an oral history to really get at its deep roots.

That said, given the relative paucity of reference material, the artists did a remarkable job in creating a compelling, fluid narrative by focusing on several key characters and filling in some blanks. Fortunately for them, the key characters were memorable individuals indeed. Jerrie Cobb was a young and accomplished pilot who held a number of world records. Janey Hart was an accomplished pilot who was married to a senator. Randy Lovelace, as part of NASA, came up with the idea of training female astronauts, thinking they might be better suited to the rigors of space and were smaller than male astronauts. And Jackie Cochran was the most famous female pilot in America, but a bit past her prime. That simple mix produced a compelling narrative that didn't feel at all dumbed-down, as every character was given shading and nuance.

Reading the end notes, the instructor team of Dan Archer (CCS grad and cartoonist), Scott Hutchins & Shimon Tanaka (writers) made one key change. Instead of having three teams on the book (writers, thumbnailers, artists), they instead made every writer a thumbnailer. Thumbnailing doesn't require drawing skill, but it does require an understanding of cartooning and storytelling. Doing this made it an easier process to translate their initial ideas into a form that was easy for the artists to translate. The actual drawing in the book is frequently shaky, especially with regard to anatomy. However, the cartooning is fine. The characters stayed on model on page after page despite having a number of different pencilers, their characters in relation to space were consistent and body language was well-expressed.

In terms of the writing, the authors did a great job setting up the main characters and their feats as pilots, the excitement of potentially going into space, and the many hurdles they had to face as women. Jealous, alcoholic husbands. Jobs that fired them for taking time off. Taking care of children with no one willing to help. Sexist and flip attitudes from men of all stripes, especially journalists. Indifference and scorn from male astronauts. Being told they weren't qualified because they hadn't flown jets, but being denied that opportunity because it was restricted to the military--which they couldn't join. An interesting twist in the story was that it was Randy Lovelace's idea to begin with, and that a lot of opposition came from a jealous Jackie Cochran, who wanted to be the first woman in space despite not qualifying physically for the opportunity. It all came to a head in a Congressional hearing where Lovelace refused to appear and Cochran stabbed the other pilots in the back. It wasn't just sexism that sank the program, but glory-hogging and grandstanding as well.

Wisely, the authors made sure to include an epilogue that not only followed what the pilots wound up doing after their program was permanently discontinued, but also how the US space program changed to eventually include women. The overall result was a pleasant, page-turning book that was painstakingly researched, nicely-colored in tones that were chosen to match the era. I could easily see a more polished version of this book being published by First Second or Scholastic as part of a historical or science-related YA line. Archer really hit on something by forcing everyone to do at least something that was visual by making the writers thumbnail, and the result was a pleasantly cohesive book that still upheld all of the values of the program.
          Balancing Developers and Testers        
Joshua Allen has an interesting post on the changing balance between development and testing resources, at least at Microsoft. While the general claim that the advent of managed code has made developers so much more productive that the testers are now overwhelmed is pretty significant, the really interesting quote is in the third paragraph: "there is always the possibility that the ever-increasing test expenditures will not coincide with a reduction in the number or severity of high-profile security and quality incidents". To me, that suggests that the skill set for testers is changing. Now I understand that it's typical for MS to hire more development-oriented types as testers, but in every company I've worked for, testers tend not to have significant programming experience and are on the whole less technical than the programmers. So testers tend to concentrate on UI and HCI types of tasks and not so much on security analysis. I think that testing for security would require a technical and likely a programming background, so I would expect that companies that are concerned about security would not simply reduce programming headcount, but would end up moving programming resources from development to testing. In the end, assuming that companies manage their people efficiently, I expect that the increased programmer productivity would be a wash, rather than an opportunity to reduce headcount.
          ISIE second day        
A very interesting second day, with a surprise finale in which all the posters were presented in three minutes each as part of the so-called poster madness - now we know what that name refers to. After the usual sumptuous breakfast in the Great Hall, today everything started at 9:45. After a brief introduction by Richard Harper and Tom Rodden, Donald Norman took the stage as scheduled. The old man delivered his talk very well and, in perfect Norman style, recounted a whole series of accidents, small and large, caused by the bad design of objects. His underlying thesis is that intelligent environments cannot exist because an essential condition, which Norman calls Common Ground, is missing: a group of artefacts can have a common ground, just as a group of people can, but a mixed group lacks this fundamental characteristic. One interesting question (posed by Lucia Terrenghi - a smart girl) challenged this sharp distinction, pointing out that artefacts have been part of the human Common Ground Norman speaks of since time immemorial. I saw the guru waver a little, although he then explained that he was talking about intelligent artefacts, not artefacts in general. Don Norman's speech (he is a very laid-back guy who stayed around all day answering anyone's questions) was followed by a session entitled Concepts in human computer interaction. It was very theoretical, but the last of the three talks, given by Alex Taylor of Microsoft Research, was noteworthy; it aimed to highlight the importance of context for the idea of intelligence. What is most interesting is the way this researcher demonstrated his thesis: by showing a series of clips from a strongly ethnographic background study, which consisted of installing multiple cameras in cars to film what happens during the journeys of everyday life. Very well done and, above all, very visual sociology. From the second panel on smart homes, entitled Technological and social infrastructure for the home, I would single out the final talk, again by a group of Microsoft Research researchers, who also conducted an extensive ethnographic study of behaviour within a household, with specific reference to communication and the placement of objects. Two prototypes came out of it. The first is a kind of screen with handwriting support (equipped with a GSM connection) to which SMS messages can be sent, and on which both notes and replies to messages can be written with a pen. Unlike most of the stuff on show here, the prototype has actually been trialled in the lives of a number of families. Very interesting is the possibility of analysing these communications to better understand the dynamics of communication and identity construction within a family. The second prototype shown was a sort of bowl (a large container for odds and ends) in which a whole series of technological devices could be placed (as one normally does with random objects). The bowl shown is able to capture the files (photos and images) on these devices (mobile phones, cameras, etc.) and display them on the inner surface of the bowl itself. The photos can also be manipulated as if the surface of the bowl were a touch screen.
It was one of the few things seen here that can make you say wow from a technological point of view. Then came the poster madness, where the valiant Luca defended our poster in the best possible way, considering the time available (3 minutes), an audience not exactly used to hearing about the environment of social systems (or even about social systems), and, not least, the fact that he learned the rules of the thing 10 minutes beforehand. I shot a short video of the event. The audio is dreadful and so is the video, but I am publishing it anyway to prove that Luca earned his keep. The last speech of the day was by the Japanese researcher Ryohei Nakatsu, who talked about the role of robots in an intelligent environment. He seemed a bit punch-drunk, but the talk was at times hilarious (seeing little robots doing tai chi is not an everyday sight). UPDATE: The video of Luca presenting our poster is now online, four metres from Donald Norman.
           Exploring the total customer experience: usability evaluations of (B2C) e-commerce environments         
Minocha, Shailey and Dawson, Liisa (2003). Exploring the total customer experience: usability evaluations of (B2C) e-commerce environments. In: Human Computer Interaction - INTERACT'03, 1-5 September 2003, Zurich, Switzerland.
           Remote Web Usability Testing: a Proxy Approach         
Baravalle, Andres and Lanfranchi, Vitaveska (2003). Remote Web Usability Testing: a Proxy Approach. In: 10th International Conference on Human Computer Interaction, 22-27 June 2003, Crete, Greece.
           Integrating customer relationship management strategies in (B2C) e-commerce environments         
Minocha, Shailey; Millard, Nicola and Dawson, Liisa (2003). Integrating customer relationship management strategies in (B2C) e-commerce environments. In: Human Computer Interaction - INTERACT'03, 1-5 September 2003, Zurich, Switzerland.
          British scientist embeds wireless chip in his hand; uses it to infect computer hardware        

If you don’t mind injecting a chip into your body, you can now be a host for a computer virus that infects hardware around you. A British scientist says he made himself into the first human computer virus, according to the BBC. Mark Gasson of the University of Reading in […]


          A Computer Science Article        

Computer Articles: A Computer Science Article

The computer article discussed this time is this Computer Science Article, which gives a general overview of Computer Science.

Computer Science studies what programs can and cannot do (computability and artificial intelligence), how programs should evaluate a result (algorithms), how programs should store and retrieve particular bits of information (data structures), and how programs and users communicate (user interfaces and programming languages).
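As a concrete illustration of the interplay between algorithms and data structures, consider binary search, which can retrieve an item quickly only because its data structure (a list) is kept sorted. This small Python sketch is added purely as an illustration and does not come from the original article:

# Illustration of "algorithms + data structures": binary search
# works only because the underlying list is kept sorted.
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # prints 3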

Humans make use of computers by studying the sciences related to them. As Computer Science has developed, many researchers today have attempted to study and define it. In any case, the foundations of Computer Science are mathematics and engineering. Mathematics contributes methods of analysis, and engineering contributes methods of design, to this field.

Some other, more abstract definitions are:

Computer Science is the science that studies knowledge representation and its implementation. Alternatively, Computer Science is the science that studies abstraction and how to manage the complexity of a computer.

Peter J. Denning defined Computer Science in his well-known paper on the discipline of computing. That paper is the final report of the project and task force on the Core of Computer Science formed by the two largest scientific societies in computing, the ACM (http://acm.org) and the IEEE Computer Society (http://computer.org). He defines it as follows:

Computer Science is the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application.

Denning also classified the field of computer science into 12 subfields (an earlier version had 9), namely:
  • Algorithms and Data Structures
  • Architecture
  • Software Engineering
  • Artificial Intelligence and Robotics
  • Human Computer Interaction
  • Organizational Informatics
  • Programming Languages
  • Operating Systems and Networks
  • Database and Information Retrieval Systems
  • Computer Graphics
  • Computational Sciences
  • BioInformatics

Computer Articles

That concludes this Computer Science Article; hopefully it has given all of us some additional insight into Computer Science.

          Using a Smartphone to Predict the Onset of Depression        
Mon, 11/21/2016

In early October 2016, rapper Kid Cudi made the headlines after writing on Facebook that he entered rehab for treatment for depression and suicidal thoughts. While he’s not as famous as other musicians, his heartfelt message garnered a lot attention and support from celebrities and mental health professionals.

“Yesterday I checked myself into rehab for depression and suicidal urges. I am not at peace. I haven't been since you've known me. If I didn't come here, I would’ve done something to myself,” he wrote.

Kid Cudi’s story highlights what mental health professionals and physicians have long known; it is hard to detect depression, and it is even harder to provide treatment to people experiencing it. His message indicated that he struggled with it a long time before he finally realized he needed treatment.

Kid Cudi is just one of the estimated 16 million people in America who are suffering from depression. While the rapper recognized it and sought treatment, many never get to that point. 

Leveraging the Power of our Smartphones

While the problem of identifying and treating depression stymies experts, Jason Hong and John Zimmerman wondered if the solution to detecting it was in our pockets.

“Depression is a very common kind of issue; it is the leading mental health issue in the world and a leading cause of sick days and disabilities,” says Hong, associate professor and member of the Human Computer Interaction Institute. “Our smartphones are with us almost all the time. Can we use them to predict the onset or exit of depression?”

Hong talked with physicians and mental health professionals about depression—what signs and symptoms best predict whether someone is experiencing it—and used what he found to design an app. Experts agreed that three categories of symptoms, social, physical, and sleep, could accurately determine whether someone is experiencing depression. All Hong needed to do was harness the power of the smartphone to record data about each of these symptoms.

He started with sleep. 

“How much you sleep in terms of quality and quantity is a good predictor of depression,” Hong says.

Looking at the phone, he wondered what he could use on it to understand sleep habits. He decided the accelerometer and microphone could help. The accelerometer measured motion, the phone could detect whether the lights were on, and the microphone recorded how loud the surroundings were. If there was too much noise, movement, or light, it indicated a person was tossing and turning rather than sleeping, for example. That information could be sent to doctors so they could see a picture of sleep habits.

“[Doctors] just need to see the trends over time … as evidence,” he says.
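To make the sleep idea concrete, here is a minimal sketch of how nightly sensor samples might be turned into a rough sleep estimate. This is not the team's actual pipeline; the thresholds, field names and sampling scheme are illustrative assumptions only:

# Illustrative sketch: infer "asleep" minutes from phone sensors.
# Thresholds and data layout are assumptions, not the study's method.
from dataclasses import dataclass

@dataclass
class Sample:
    minute: int       # minutes since bedtime
    motion: float     # accelerometer magnitude (arbitrary units)
    loudness: float   # microphone level (assumed dB scale)
    light: float      # ambient light (assumed lux)

def estimated_sleep_hours(samples, motion_max=0.1, loud_max=35.0, lux_max=5.0):
    """Count minutes that look like sleep: still, quiet and dark."""
    quiet = sum(1 for s in samples
                if s.motion < motion_max and s.loudness < loud_max and s.light < lux_max)
    return quiet / 60

# Fake night: a restless first hour, then settled until morning.
night = [Sample(m,
                0.5 if m < 60 else 0.02,
                50.0 if m < 60 else 30.0,
                80.0 if m < 60 else 1.0)
         for m in range(480)]
print(estimated_sleep_hours(night), "hours of estimated sleep")  # -> 7.0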

Next, they tackled the social aspect.

He wondered whether GPS could provide an accurate estimate of when people left their homes. If it is less than normal, that could be a sign of a potential problem. And checking phone and SMS logs could show how often people connect with social supports.

“Using those factors we can create a predictor pretty well,” he says.

While the phone provides a good estimate of how social someone is, it does present an incomplete picture. While GPS might track when people leave the house, it can’t tell if people are being social when they are out.

“Face-to-face, it turns out, that is really hard to do,” he says.

Finally, they examined how physical activity might indicate depression.

Loads of researchers looked at how smartphones can capture that data so they spent the least amount of time on that. But they knew that if a person’s activity level dropped suddenly that could be a sign of a problem.
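Putting the three symptom categories together, a simple screening model might combine weekly features into a single risk score. The sketch below uses a hand-set logistic model purely for illustration; the real study would learn its weights from data, and every number here is an assumption:

import math

def risk_score(sleep_hours, hours_away_from_home, contacts_per_day, active_minutes):
    """Toy logistic score over the sleep, social and physical signals
    described above. Weights and baselines are invented for illustration."""
    # Less sleep, less time out, fewer contacts and less activity raise the score.
    z = (-0.6 * (sleep_hours - 7.0)
         - 0.3 * (hours_away_from_home - 4.0)
         - 0.4 * (contacts_per_day - 5.0)
         - 0.02 * (active_minutes - 30.0))
    return 1.0 / (1.0 + math.exp(-z))

print(round(risk_score(7.5, 5.0, 6.0, 40.0), 2))  # typical week -> low score
print(round(risk_score(4.0, 0.5, 1.0, 5.0), 2))   # withdrawn week -> high score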

But knowing how to capture the data only represented one part of the research. How could the data be used? Therapists and psychiatrists thought it might be good to receive regular updates about patients so that they had a better understanding of what patients did between visits.

But also, the app could feature a notification type element—a reminder of sorts to encourage someone to do something they enjoy or call a friend. Say a person likes walking in the park and the therapist recommends it. A person might forget but the phone could send a reminder. Something like “Frick Park is only five minutes away and it is known for its great walking paths” could be an indirect nudge encouraging people to follow treatment.
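A nudge like the Frick Park reminder could be driven by a simple proximity rule. The following sketch is only a guess at how such a trigger might look; the place list, thresholds and message template are hypothetical:

import math

# Hypothetical list of places a therapist and patient agreed on.
PLACES = [{"name": "Frick Park", "lat": 40.4340, "lon": -79.9030,
           "blurb": "known for its great walking paths"}]

def km_between(lat1, lon1, lat2, lon2):
    """Rough great-circle (haversine) distance in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nudge(lat, lon, walking_kmh=5.0, max_minutes=5.0):
    """Suggest a nearby agreed-on place if it is a short walk away."""
    for place in PLACES:
        minutes = km_between(lat, lon, place["lat"], place["lon"]) / walking_kmh * 60
        if minutes <= max_minutes:
            return "%s is only %d minutes away and is %s." % (
                place["name"], round(minutes), place["blurb"])
    return None

print(nudge(40.4325, -79.9045))  # near the park -> a gentle suggestion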

What's Next for our Phones?

While the research lost its funding because of the government sequestration and Hong could not develop the app, he used some of the lessons he learned from that project for a new one involving health coaches, professionals who help others transform unhealthy behaviors into healthy ones. Again, Hong and Zimmerman wondered if a smartphone could help people.

“If the smartphone is with you all the time can it get you to change behaviors?” Hong said.

The research is in its early phases, but Hong, Zimmerman and their colleagues are looking at how smartphone notification systems could provide incentives for healthy behaviors, much like the notification system he envisioned for the depression app.

Take someone who wants to lose weight. A lifestyle coach would create a plan for the person to follow. Notifications could use social pressure or turn it into a game. The app could alert people that a coworker walked nine miles this week and encourage the other person to try to beat the coworker. Or perhaps it could work like Pokemon Go and people could earn points for engaging in certain activities.

“This is where the smartphone, because it is with you all the time, you can do lots of things with it,” he says.

But the team will need to understand whether the smartphone incentives can truly lead to changed behavior, Hong says. If someone trying to quit smoking fails, her friends and family might be disappointed. This human element can contribute to greater adherence. Few people would care if their phones are disappointed in them. 

“Having that human element is really important. You still need a person,” he says.

As they think of the ways to make the smartphone work best for health coaches, Hong believes that an app could help them reach more people. Maybe instead of seeing 20 or 30 people a week they’ll be able to work with 60 or 70.

“Maybe they can double their reach without inducing stress.”

Read the full paper from Jason Hong or find more of the HCII's research in healthcare.

          New Titles        

          Welcome to Human Computer Interaction (soon)        
We all raved about the Apple iPhone and the cool technologies packed in it. It is designed to be an ultimate "one-device to rule everything else" package of very cool Apple products. What makes it "so cool" is the innovative multi-touch screen and its intuitiveness. I hate the menus, keyboard and utterly useless shortcuts in my new Motorola Razr. If you love your cellphone's UI, you haven't seen the best yet. Everyone is amazed at how Apple has redefined the cellphone experience.

Not only that, Apple has started and taken user experience to a whole new level.
If you thought "that's cool, but they can't take this technology much further", you are wrong, buddy. Don't be mistaken: Apple offers only two points of contact (POC) on its screens. With the iPhone, you can pinch and stretch objects on screen with two fingers. How about using all your fingers to move, stretch, tap and back-rub(!) windows on a screen, just like Tom Cruise in Minority Report? Is that possible? Your dreams have come true!

Enter
Perceptive Pixel, founded by Jeff Han, "a spinoff of the NYU Courant Institute of Mathematical Sciences to develop and market the most advanced multi-touch system in the world." See how big an impact he can make in computing. His demo at TED in February 2007 is just mouth-watering. Take a look






Apple has loads of patents filed under the multi-touch screen category in 2004. So, who invented this first? Steve Jobs thinks it's Apple. Is Jeff working for Jobs now?
Nope. Not yet. As of today, Jeff is not working for Apple.

Anyway, if I can get rid of my keyboards and "feel" the screen, that will be awesome. I hope either Jeff or Jobs or both make that dream come true.

           Drawn from Memory: Reminiscing, Narrative and the Visual Image         
Wright, Terence (2009). Drawn from Memory: Reminiscing, Narrative and the Visual Image. In: 23rd BCS Conference on Human Computer Interaction, Cambridge. [Conference contribution]
          Is Graphical User Interface an art or a science?        
As a theme creator it is very important for me (and other designers) to fully understand what we are doing when we design themes. Is it related more to art, or to science and engineering? First off, let's see what a User Interface (UI) is: "The user interface (also known as human computer interface or man-machine interface (MMI)) is ...
          (CA) Human Computer Interaction Designer. Torrance Jobs.        
          Inside the Virtual Reality Canvas        

Expression or application of human creative skills and imagination is how Art could be defined. Some of these expressions are typically in a visual form such as painting or sculpture, and the works produced are appreciated primarily for their beauty or emotional power. Utilizing digital technologies as media for artistic expression has been around for about 50 years, but the interfaces that allow artistic expression in such media have been limited and the pursuit of a tool or technology (i.e. human computer interface) that would allow for a more natural and expressive relationship with the medium has been ongoing.

With the dawn of commercial virtual reality (VR), we are finally getting closer to a more natural and expressive interface to work in the digital medium. And it is no accident that big technology players such as Facebook are at the forefront of enabling such a big leap in digital art creation. This new VR canvas allows for forms of artistic expressions that are not bound by reality or the laws of physics. Inside this VR canvas, artists' creations are not just appreciated, but they are experienced with a sense of presence and immersion that cannot be expressed in words or by looking at digital art on a web browser. You are placed literally in the art, becoming both an observer and a participant. VR is not just for tours of museum exhibits or playing video games anymore; it is becoming a real medium for artistic expression that can make use of multi-player gaming paradigms for the creation of collaborative art and collaborative shared experiences and human connections. In this way, gaming transforms into the platform and the medium, not the product.

As with all forms of digital media, distribution has benefited from the internet and it has enabled new artists to enter more traditional circles of art and the academy but through very unconventional ways. You could say that the internet has ‘democratized’ the distribution of art, but not without its challenges. Furthermore, the internet has given rise to two very distinct modes of ‘curating’ art—the public and the institution based—where reviews are given by two distinct group of critics and they both coexist for the benefit of the artist, up to the point of having different platforms for giving special distinctions (awards) for their work. Traditional academies should evolve to embrace this new medium as they have in the past with prior forms of digital technology.

Because of the versatility of the VR ecosystem available to consumers, it is not impossible to consider that VR may become its own artistic expressive platform. If embraced by the academy, it could become a new ‘renaissance’ for the arts— a new platform to help promote the cultural work of the arts community to broader audiences, a new platform to create art that is not bound by time, place or space. There is an opportunity to define this medium as it evolves akin to trying to create the paintbrush while creating the painting. VR’s ability to embody the viewer’s own aesthetic, artistic, and conceptual revelations makes it a powerful medium in any arena.


          Supporting Creativity in Networked Environments: The COINE Project        

Geoff Butters, Amanda Hulme and Peter Brophy describe an approach to enabling a wide range of users to create and share their own stories, thus contributing to the development of cultural heritage at the local level.


Cultural heritage has an important role to play in today's society. Not only does it help us to understand our past but it also has an impact on social development, the economy and education. Developments in Information and Communications Technologies (ICTs) have provided new opportunities for the manipulation of cultural heritage. Digitisation of cultural material has widened access beyond the boundaries of traditional memory institutions and has provided scope for adding value to collections. The involvement of non-experts in creating recordings of cultural heritage, in whatever medium, so as to capture the experience of 'ordinary citizens' in their own terms, could lead to richer and more illuminating collections as new insights and previously hidden information is revealed. This democratises the creation of cultural heritage, removing it from an elitist monopoly, and provides new perspectives on local, regional, national and international events.

Advantages of this approach to building collections include greater relevance to the lives of ordinary people, while individuals gain a sense of achievement from seeing their work published. Technology also opens up new possibilities for both creating and sharing cultural content both locally and globally. Rather than locking up the record of our heritage in institutional collections, it becomes possible for users to identify common interests with other people all over the world. The success of services like YouTube [1] and MySpace [2] testifies to the attractiveness of this concept. What such services lack, however, is authority, provenance and even short-term maintenance or preservation.

The creation and sharing of content by the individual was the central objective of a European Commission-funded project, COINE (Cultural Objects In Networked Environments), which was completed in 2005 and underwent detailed evaluation in 2005-6. The project aimed to empower ordinary citizens by providing them with the opportunity to produce and share their own cultural material. During the project a Web-based system was developed to provide the necessary tools to allow individuals to publish their cultural material online, and subsequently to share it on a local, national and international basis. A key theme of the project was accessibility for all and, as such, the system's interface and functionality were simplified to allow everyone to use it, regardless of their familiarity with computers. In doing so it provided the opportunity for many new users to become involved with ICTs and cultural heritage in a relatively easy way.

The European Information Society Technologies (IST) Programme: Heritage for All

Though the term 'cultural heritage' is becoming familiar, it is one that lacks clear definition. In the DigiCULT Report Technological Landscapes for Tomorrow's Cultural Economy Mulrenin and Geser [3] define 'culture' as a 'product of our everyday life', whilst UNESCO, on the World Heritage Web site, defines 'heritage' as 'our legacy from the past, what we live with today, and what we pass on to future generations.' [4]. Taken together, these perhaps provide an adequate working definition. Cultural heritage has become an important focus of governments worldwide, largely due to its economic potential, particularly in an information-driven society. This point is illustrated by the fact that in many advanced countries the cultural economy accounts for approximately five per cent of GDP [5].
To consider it in purely economic terms, however, would be short-sighted and indeed the importance of cultural heritage in addressing social issues is also widely acknowledged. In addition it intersects in a variety of ways with learning, both formal and informal, as is highlighted by the DigiCULT Report, where it is suggested that

'cultural heritage institutions are in a prime position to deliver unique learning resources that are needed at all educational levels.' [6]

This point of view is reflected in the Archives Task Force Report, Listening to the Past, Speaking to the Future, which notes the potential of using archives to

'enrich and enhance teaching and learning and contribute to raising standards in education.' [7]

Accordingly, funding and policy are directed by governments to ensure that their cultural heritage is exploited to its maximum potential. For memory institutions - libraries, museums and archives - recent developments in ICTs have provided the opportunity to digitise the artefacts and documents that represent our cultural heritage, and so have opened up possibilities both for their preservation and increased accessibility.

The COINE Project

One of the most important aspects of digital interactivity is the potential it creates for relating collections to personal experience. Trant [8] considers that consumers are more concerned with how cultural objects link to their own lives rather than with the records that link to those objects. The creation of content by the user requires a more complex level of interactivity than search, retrieval and use but it is considered an 'integral part of connecting cultural heritage resources to people's lives' [9], and, as noted in the report of the Archives Task Force:

'The growth of community archives . . . has in part stemmed from a desire by individuals and groups to record and share culturally diverse experiences and stories. This grassroots movement is an expression of the often strongly felt need to celebrate, record, and rebuild the sense of community in our lives today.' [10].

This was very much the vision of the COINE Project: to develop tools for ordinary people to create their own cultural heritage materials. This concept is not in itself new: its popularity is demonstrated by the proliferation of local history societies, and in family history research. In the past material produced by these groups has remained relatively isolated and inaccessible to anyone who might share an interest. Kelly, one of the project staff, noted that:

'None of these things [personal and local collections and societies] provide a simple systematic and efficient way of recording, searching and sharing heritage research and local "stories"' [11]

The COINE Project aimed to address this through the development of a Web-based service that allows the ordinary citizen, someone without expertise in cultural heritage or ICTs, to create, share and use cultural material. Through COINE, people could have the opportunity to write and publish their own stories electronically, thus exploiting the potential of personal experience and history. Images and other objects could be inserted anywhere in the text. Because it was essential that the COINE system be usable by everyone, even by those who had little or no previous computer skills, the system was developed with a very simple interface, hiding much of the functionality.

The COINE system was also designed to allow the cultural material being created to be shared among communities. Kelly had highlighted a lack of consistency and difficulties in interoperability between cultural heritage projects. In many cases such systems 'lack coherence, structure and interoperability' [12]. A lack of standards compliance lies behind these problems, but failure to use common terminologies also plays a big part. COINE attempted to overcome the latter problem by using a common set of high-level subject descriptors, also called topic areas, and a shared metadata schema. The latter enabled each user group to develop its own thesaurus, or use one available to it. This approach was seen to be particularly appropriate for a multi-lingual partnership, since the topic areas could be translated into any language very easily while the thesauri were domain-specific - essential when so much material related to local areas and events.

As a hosted service, the COINE system was designed to be attractive to small institutions without ready access to technical expertise. However, such institutions often have high levels of professional and sometimes domain expertise (especially in areas such as local history) and utilising this gives them a significant benefit over 'free for all' systems (as MySpace and YouTube have become), because the authority of the institution gives credence to the objects created, while mediation enables the worst excesses, such as blatant breach of copyright, to be avoided. Interestingly, it became apparent during the project that an unmediated system would have been totally unacceptable to many groups, including teachers working with children. A further advantage of basing services on libraries and other cultural institutions is that they are there for the long term, so that objects created should not simply disappear when the next innovation comes along. Of course, to achieve this the institution needs to be able to make arrangements for the maintenance of its own copy of the stories created by its users, not relying entirely on the hosted service provider.

Ultimately COINE intended to address limitations on access to collections and the involvement of individuals in cultural heritage and, as such, to initiate a shift in the role of memory institutions. The system was not, however, seen as a replacement for such collections and recognised the valuable potential of having them 'available side by side with the personal collections and histories of individuals and communities.' [13]

The COINE System

COINE was based on the concept that each library or other cultural institution would be responsible for one or more 'domains'. A domain was in essence a virtual database, access to which was restricted to people authorised by the domain administrator through a login name and password.
An institution could choose to operate more than one domain, each with its own user group, thesaurus and access regime. The COINE system comprised primarily a database server and a Web server. The database server hosted the stories, the embedded objects, associated metadata, the thesaurus, etc. for each domain, and information about its members. The Web server hosted the Web site for each domain, passing data to and from the database server. The two servers could have been anywhere: on the same physical machine, on separate machines in the same office, or indeed anywhere in the world, connected via the Internet. In practice, one technical partner, based in Limerick, Ireland, housed and managed the Web server, and the other technical partner, based in Sheffield, England, housed and managed the database server. This was largely for convenience during development but also demonstrated the feasibility of running the system with multiple servers at different locations.

All of the project partners (apart from the technical developers) used a number of test sites (typically small museums, libraries or schools), each having their own 'domain' in the COINE system. The administrator could operate each as a closed service allowing only authorised members to read stories, or could authorise limited access for anyone to search and read published stories. However, contributors of stories (i.e. those with write access) had to be registered and authorised as members by the administrator.

COINE aimed to have a simple, easy-to-use and intuitive interface which balanced a clean, uncomplicated appearance with a good level of instruction and guidance, designed to disguise sophisticated functionality. The input of users in the design of the interface was crucial in the development, and its terminology and appearance changed considerably during the project in response to issues raised by demonstration sites. For example, user instructions to 'create a narrative' and 'choose a thesaurus term' were changed to 'create a story' and 'choose a subject term'; 'Saved Searches' became 'My Ways of Finding Stories'; and 'Personalise' became 'Change My Look'.

Users entered the COINE system at a registration and login page which also gave brief information about the project. After login came a 'My COINE' page with a range of options including 'Search', 'Change My Look', 'My Ways of Finding Stories', and 'Stories'. The last of these enabled users to see their own previously written stories or to create a new story. A content creation wizard aided the process of creating a story (see Figure 1). Authors were first asked for a title - 'What are you going to call your story?' - then asked for a general subject area - 'What's it about?' - to be chosen from the 20 topic areas (see Figure 2). Next, keywords could be added, then story text in a simple box (see Figure 3). Crucially, anywhere within the story text, objects could be added by clicking a button. Objects could be anything that existed in a digital file: picture, video, speech, music, document, anything at all. The process of adding objects gave the opportunity to include a description and other information about the object, including usage rights (Figure 4). Objects would then appear as clickable thumbnails in the text of the story. It was quite possible for the story to be incidental and the objects the main focus. For example, a story could consist of a video with a text caption.
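Drawing together the domain, topic area, story and mediation concepts described above, the following sketch models how such records might fit together. All class and field names are invented for illustration; this is not the project's actual schema (the mediation step itself is described below):

# Illustrative model of COINE-style domains and mediated stories.
# Names and fields are assumptions, not the project's real schema.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    PENDING = "awaiting administrator approval"
    PUBLISHED_LOCAL = "visible in this domain only"
    PUBLISHED_GLOBAL = "visible across all COINE domains"
    REJECTED = "returned to the author with an explanation"

@dataclass
class EmbeddedObject:
    filename: str      # any digital file: picture, video, speech, music, document
    description: str
    usage_rights: str

@dataclass
class Story:
    title: str                      # "What are you going to call your story?"
    topic_area: str                 # one of the 20 shared high-level topic areas
    keywords: list                  # terms from the domain's own thesaurus
    text: str
    objects: list = field(default_factory=list)
    status: Status = Status.DRAFT

@dataclass
class Domain:
    institution: str                # e.g. a small museum, library or school
    members: set                    # only registered members may contribute
    open_reading: bool = False      # administrator may allow public reading

    def moderate(self, story, approve, publish_global=False):
        """Administrator mediation: authorise or reject a submitted story."""
        if approve:
            story.status = (Status.PUBLISHED_GLOBAL if publish_global
                            else Status.PUBLISHED_LOCAL)
        else:
            story.status = Status.REJECTED

story = Story("Memories of the mill", "Working life", ["textiles", "1950s"],
              "My grandmother worked at the mill...", status=Status.PENDING)
Domain("Local history museum", {"pat"}).moderate(story, approve=True)
print(story.status.value)  # -> visible in this domain only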
Another button allowed the addition of hyperlinks in the story without the writer having to understand how to format them, and shielded them from unintentional editing. (It is perhaps worth noting that this proved quite challenging, since most applications display URLs in a form where the unwary can easily alter them unintentionally.) Finally, administrative metadata could be added and then the full story, complete with embedded objects, saved for publication on the system. Publication was mediated by the domain administrator, who could authorise the story or reject it with a message of explanation being sent to the author.

Figure 1: Create a story
Figure 2: What's it about?
Figure 3: What's your story?
Figure 4: Add an object

Other screens allowed authors to modify their stories, to publish them (either 'globally' throughout all COINE domains, or 'locally' on only the domain being used) or to un-publish them if needed. Simple or advanced searching was available to registered users or non-registered casual users. An administration interface was provided to domain administrators to monitor user registrations, mediate the publishing of stories and take care of other maintenance details.

Evaluation

A study of the user experience of COINE was carried out after the end of the project using interviews and email questionnaires with project participants, analysis of the independent user testing carried out during the project (by people who were drawn into using COINE but who were not members of the consortium), and an evaluation of the usability of the system. The last of these analyses used heuristics testing, 'an expert evaluation method that uses a set of principles to assess if an interface is user friendly.' [14]. To achieve this a checklist was compiled using questions based on an adaptation of the Centre for Human Computer Interface (HCI) Design's 'Checklist of Heuristics Evaluations' [15]. The findings of these studies are described briefly below. Further detail will be found in the COINE evaluation study [16], from which comments reported below are taken.

At the outset, there was a lot of enthusiasm about the COINE concept 'both professionally and in the local community'. Schools were keen to be involved and a local oral history society was 'very excited about being able to digitise their resources and sharing their work'. The independent user testers were found to be very enthusiastic, and much interest in the COINE concept was shown by delegates at a museum sector conference. Some observers felt, however, that certain aspects of COINE would be more valuable than others, with the opportunity to create content being particularly important.

'a lot of people would very much like to be able to take part in creating things online who haven't got the expertise to do that because it was just too complicated and the idea of COINE was to take out that level of complex technology and provide something more simple.'

Several respondents felt that the sharing of stories, community resources and personal interests was the 'most valuable and influential aspect' of COINE. This was considered important as it allowed previously unknown memories, collections and stories to be revealed to a wider community and provided better access than unstructured Web pages would allow. Other respondents considered valuable aspects of the project to be the ability to capture and store local cultural heritage and preserve it for the long term. Yet others felt that the challenge came from creating metadata that was descriptive of resources from all over the world but at the same time was not off-putting for the 'non-expert'. This emphasis on non-expert users recurred in these comments, where key advantages were:

'To enable individuals [to] express their cultural heritage "in their own words". To be able to share their experiences and material in a more relaxed way than followed by a central museum or cultural centre'.

The aspect of sharing stories with others was also emphasised. One respondent suggested that it 'would be a really fun way to understand other people's cultures and other people's lives.'

The system had generally been found to be easy to use, especially by people with a reasonable level of computer literacy. One of the primary school sites reported that children seemed to cope quite well in testing the system. Some specific points to emerge were:

  • COINE adhered to the principle that a 'system should speak the users' language, with words, phrases and concepts familiar to the user'. The examples cited above illustrated that the system had used 'real-world' rather than technical language. It was noted that user input had greatly influenced this aspect of the design. However, in the evaluation it was found that the attempt to be easy to understand had not always worked. One instance of this was where the project had chosen to use the words 'Change my look' instead of the more usual term 'Personalise'; most users didn't understand what the former term meant!
  • Metadata creation had its problems: many users did not understand what the term 'keywords' meant and were unclear as to why they had to add keywords to their story. They simply wanted to tell it! The option of making the addition of descriptive metadata a task for the domain administrator was considered but rejected as it would make the system unattractively burdensome to the institutions.
  • Visual design was 'simple' and 'uncluttered'. Deliberately there were no visual images used within the user interface, apart from the first access screen which was illustrated by a photograph. However, this was found to be a mixed blessing and it may be that the developers went too far in attempting to create a clean interface: some users felt that the interface lacked visual appeal.
  • COINE allowed the user some control and freedom in terms of exiting a workflow or changing their minds in the middle of the process. However, there was no facility to save a story mid-way through its creation, and many users criticised this limitation.
  • Nielsen notes that it is advantageous 'if a system is usable without "help" documentation' [17]. Although 40 out of 72 users surveyed for part of the evaluation claimed that they did not use the help facilities, a totally intuitive user interface was not achieved and the help facilities were the only means of understanding some of the processes and terminology. This suggests that further work might have produced a more intuitive interface.
  • At times the system was unstable, which, although understandable in a product under development, led to considerable frustration for users. This was the most common negative observation by users.

Overall the evaluation concluded that COINE had been successful in proving 'the viability and attractiveness of the concept of offering ordinary citizens the tools and opportunities to tell their own stories in a digital environment' [18].

Conclusion

As Reid [19] has pointed out, local communities have a great deal of knowledge and expertise to offer. Individuals have a perspective on culture and on society which may be very different from that of professionals and experts, yet is equally valid. ICTs offer the opportunity to surface and expose such stories in ways never before possible. COINE successfully addressed this issue and provided an infrastructure which was broadly welcomed. Recently there has been a vast increase in services which encourage publication of individuals' stories. However, the unfettered publication of everything and anything simply obscures the valuable among a tidal wave of the trivial. Local cultural institutions, by providing training, monitoring content (but not censoring it) and providing a stable base, have a great deal to offer their communities. While global systems, with their enormous resources and huge size, may attract those seeking their '15 minutes of fame', the long tail of localised services may yet prove to be of greater lasting value. It is to be hoped that more experiments like COINE will enable local institutions to define a new role for themselves, focused on the creativity of ordinary people, in the networked environment.

Appendix 1: Related Projects

As part of the COINE Project a number of initiatives were identified which appeared to be, or had the potential to become, 'competitors' to a COINE system. Some of those from the USA, Australia and the UK were:

  • Moving Here [20] - a service from the National Archives and many other partners in England, which explored migration to England over the last 200 years and gave people the opportunity to publish their own experience of migration.
  • BBC's WW2 People's War [21] - allowed memories and stories about World War 2 to be published online and shared as part of a commemorative archive.
  • The Statue of Liberty Ellis Island Foundation [22] - allows users to search for immigrants who entered the USA through Ellis Island and to create Family Scrapbooks: illustrated family history stories with photographs and audio recordings.
  • Capture Wales Digital Storytelling [23] - 'mini-movies' created by the people of Wales to tell their personal stories and to show the richness of life in Wales; they can then be accessed through the Web site.
  • MyFamily.com [24] - allows users to create their own family album by uploading photos and entering details about them to share with friends and family.
  • Forever LifeStories [25] - users can create their Life Story and a family tree, and add photos, home video clips or text messages, building their Forever LifeStory over time.
  • StoryLink [26] - an online community for digital stories allowing members to store, search for, and save digital stories.
  • The City Stories Project [27] - a city-based storytelling project with a network of city-based personal storytelling sites.
  • The Migration Heritage Centre [28] - aims to recognise the value of cultural diversity, and provides opportunities for people to tell of their achievements, their struggles for belonging, and of cultural change, traditions and adaptation.
  • The Montana Heritage Project [29] - young people tell the story of their community: its place in national and world events and its cultural heritage as expressed in traditions and celebrations, literatures and arts.

Appendix 2: The COINE Project Partners

The main consortium consisted of the following organisations:

1. Co-ordinator: The Centre for Research in Library & Information Management (CERLIM) at Manchester Metropolitan University, Manchester, UK.

2. Technical partners: Fretwell-Downing Informatics Ltd (FDI), Sheffield, UK; The National Microelectronics Applications Centre Ltd (MAC), Limerick, Ireland.

3. Demonstration partners:

  • The Armitt Museum, Ambleside, Cumbria, UK, which worked with: Ambleside Oral History Group; Ambleside Primary School; Cumbria Amenity Trust Mining History Society; Ambleside Art Society; and individual volunteers.
  • eircom Ennis Information Age Town, Ennis, Ireland. Their COINE users were: Gaelscoil Mhicil Chisiog (Irish-speaking primary school); East Clare Heritage Centre (local history organisation); Clare County Museum; and Ennis Online Gallery (an online photo gallery of Ennis through the years).
  • Universitat Oberta de Catalunya (UoC), Barcelona, Spain, with test sites at: Biblioteca Pública de Tarragona; Biblioteca Antoni Pladevall i Font de Taradell; and Museu d'Història de la Immigració de Catalunya.
  • University of Macedonia Economic and Social Sciences (UM), Thessaloniki, Greece. COINE demonstration users were: Theatrical Organization of 'Aktis Aeliou'; Theatrical Organization of 'Nees Morfes'; Theatrical Organization of 'Piramatiki Skini'; Thessaloniki Design Museum; Photography Museum of Thessaloniki (photo archive); Artistic House of Florina; Nikos Koronaios (graphic artist); Dimitris Penidis (student); Anna Theofilaktou; Marina Polyhronidou; Athena Kesidou; the History Center of Thessaloniki; and the 16th Gymnasium of Thessaloniki.
  • The Jagiellonian University (UJAG) Institute of Librarianship and Information Science, Krakow, Poland. UJAG used the COINE demonstrator with: the National Museum in Krakow; the National Archive in Krakow; and Szkola Podstawowa Nr 114 in Krakow.

References

  1. YouTube http://www.youtube.com
  2. MySpace http://www.myspace.com
  3. Mulrenin, A. and Geser, G. (2002). Technological landscapes for tomorrow's cultural economy: unlocking the value of cultural heritage. The DigiCULT report. Luxembourg: Office for Official Publications of the European Communities http://www.digicult.info/downloads/html/6/6.html
  4. UNESCO http://whc.unesco.org/en/about/
  5. Throsby cited in Mulrenin and Geser (2002) p. 69. (See reference 3.)
  6. Mulrenin and Geser (2002) p. 8 (See reference 3.)
  7. Archives Task Force (2004) Listening to the past, speaking to the future. London: Museums, Libraries and Archives Council http://www.mla.gov.uk/webdav/harmonise?Page/@id=73&Document/@id=18405&Section[@stateId_eq_left_hand_root]/@id=4332
  8. Trant cited in Mulrenin and Geser (2002). (See reference 3.)
  9. Mulrenin and Geser (2002) p. 204. (See reference 3.)
  10. Archives Task Force (2004), p. 43 (See reference 7.)
  11. Kelly, M. (2003). The COINE Project: sharing collections and personal histories. Proceedings of the MDA conference, 2003. http://www.mda.org.uk/conference2003/paper04.htm
  12. Brophy, P.(2002). Cultural Objects In Networked Environments - COINE. Cultivate Interactive. http://www.cultivate-int.org/issue7/coine/
  13. Kelly (2003). (See reference 11.)
  14. Centre for HCI Design, City University (2004). Usability studies: JISC services and information environments. London: JISC. p. 61. http://www.jisc.ac.uk/uploaded_documents/JISC-Usability-Studies-Final.doc
  15. Centre for HCI Design, City University (2004), p. 135.
  16. Hulme, A. L. (2006) The COINE Project in Use: a dissertation submitted in part-fulfilment of the requirements for the degree of Master of Arts in Library and Information Management. Manchester: Manchester Metropolitan University.
  17. Nielsen, J. Ten usability heuristics http://www.useit.com/papers/heuristic/heuristic_list.html
  18. Butters, G. (2005). COINE final report. Manchester: CERLIM http://www.uoc.edu/in3/coine/eng/deliverables/final_report.pdf
  19. Reid, G. (2000). The digitisation of heritage material: arguing for an interpretative approach based on the experience of the Powys Digital History Project. Program, 34(2), 143-158.
  20. Moving Here http://www.movinghere.org.uk/
  21. BBC's WW2 People's War http://www.bbc.co.uk/ww2peopleswar/
  22. The Statue of Liberty Ellis Island Foundation http://www.ellisisland.org
  23. Capture Wales Digital Storytelling - http://www.bbc.co.uk/wales/capturewales/
  24. MyFamily.com http://www.myfamily.com/
  25. Forever LifeStories http://www.forevernetwork.com/lifestories/
  26. StoryLink http://www.storylink.com/
  27. The City Stories Project http://www.citystories.com
  28. The Migration Heritage Centre http://www.migrationheritage.nsw.gov.au/
  29. The Montana Heritage Project: What We Once Were, and What We Could Be. Ashley Ball, May 2003. http://www.edutopia.org/php/article.php?id=Art_1048

Author Details

Geoff Butters
Research Associate
Centre for Research in Library & Information Management (CERLIM)
Manchester Metropolitan University
Email: g.butters@mmu.ac.uk
Web site: http://www.hlss.mmu.ac.uk/infocomms/staff-contact-details/profile.php?id=311

Amanda Hulme
Former postgraduate student
Department of Information and Communications
Manchester Metropolitan University
Email: amanda_hulme@yahoo.co.uk

Peter Brophy
Director
Centre for Research in Library & Information Management (CERLIM)
Manchester Metropolitan University
Email: p.brophy@mmu.ac.uk
Web site: http://www.hlss.mmu.ac.uk/infocomms/staff-contact-details/profile.php?id=190

Article Title: "Supporting Creativity in Networked Environments: the COINE Project"
Author: Geoff Butters, Amanda Hulme and Peter Brophy
Publication Date: 30-April-2007
Publication: Ariadne Issue 51
Originating URL: http://www.ariadne.ac.uk/issue51/brophy-et-al/




          Annie Easley        
From a young age, Annie Easley’s mother had told her that she could be anything she wanted, but she would have to work for it. She became a human computer and then computer programmer.
          Dorothy Vaughan        
Today for #BossDay we’re celebrating the first African-American manager at NASA, pioneering mathematician and human computer Dorothy Vaughan.
          Virginia Tucker        
Virginia Tucker was one of five women hired into the “computer pool” at the NACA, before leading the human computers at NASA.
          INSPEC        

On ENGINEERING VILLAGE platform, covers the world-wide literature (mainly journal articles and conference proceedings papers) in astronomy, physics, electronics and electrical engineering, computers and control, and information technology.

Access: 
Subscription
Mobile Version: 
No mobile friendly interface available.
Icons: 
Authorized UM users (+ guests in UM Libraries)
Coverage: 
1896 to date. INSPEC grows by about 330,000 records per year.
Type: 
Article Index
Vendor: 
EiVillage/Elsevier
Other Titles: 
INSPEC [Engineering Village 2]

          Natalie Jeremijenko responds in turn        
by
Natalie Jeremijenko
2004-04-01

There are at least two different questions here. The first question: what could we say to things (or them to us)? Or in Lucy Suchman’s terms, what new sociomaterial connections can we invent, that don’t reiterate old subject/object divides? The second question is: what do voice chips say (what do things say to us, and us to them)?

To explore the answer to the first question, I refer the reader to a series of speech recognition interfaces that I built for an exhibition - Neologues - at the Storefront for Art and Architecture (1999), a design project that motivated my fascination with voice chips.

This exhibition was a series of functioning voice chip and speech recognition-based devices. These included a light switch actuated with your voice rather than your finger. In order to toggle the switch, you had to put your hands on your temples and say the words “mind power,” parodying the ambitions of the Human Computer Interaction field. The light switch would toggle on. However, the light nearby would not go on. In order to operate that, you had to say the word “click” brightly. (Crisp plosives are easier to recognize.) As a human, this speech recognition chip made you perform like a switch. Observing one’s own performance in this simple role was entertaining to most participants, or at least self-consciously silly.

There was an elevator plate you operated by saying “up” or “down.” The only trick was that you had to say these in Spanish, which left most viewers going nowhere, as they swallowed the normative function of the speech recognition chip. There was a speech recognition interface for dogs, whose bark the device would translate to a human voice that said, “I am your loyal servant” - challenging the human-computer interface and its privileging of human cognition with a dog-computer interface that dogs seem to be able to use without a user manual. Speech recognition works well on barks.

There was a functioning prototype of an adapted handgun, the safety latch on which unlatched only when it recognized the word “Diallo” (the young African immigrant who was shot at 41 times for pulling out his wallet). Not just a one-liner, this device explored how the particular history of a device might be embedded using a voice chip, and has been proposed to the NYPD.

Another was a bomb that would detonate with the word “Bang,” although - warned the note beside it - the operator would be held liable for any damage or injuries caused. The possibility of operating this not-quite-so-friendly user interface with such childlike ease dramatized the peculiar structure of participation that we take for granted. The entire interaction can be neatly scripted by corporations who stand to profit from this, in the same way that I scripted the interaction with the bomb. But it is the obsequious obedient user, who is behaving in exactly the way intended, who is held responsible for pulling the trigger, liable for the entire sociotechnical system. Such is the fetish of agency.

I elaborate these examples because, in exploring other ways to script interactions, they are the “sociomaterial connections” of the first question. These alternative designs and prototypes exploit the generative aspects of analysis. However, the project of this essay was to try to answer the second question, to make sense of the material culture in which we are currently immersed. Now with regard to this mess, Suchman is right. (Suchman is always right, it seems!) It is her analysis of interaction as the “contingent co-production of the sociomaterial world” that is right (even though I omitted the more structured interaction of interview that I thought more appropriate to apply to voice chip interaction). As soon as we read her careful work, we are struck with a deep recognition drawn from our experience with interaction. Yet the voice chip implementers and patent filers do not seem to know her work - they have it wrong! The model of interaction that is embedded in the voice chip is a parody of our own interaction.

Yet why do they persist? Why do they still appear, reappear, what cultural expectation recharges them, what reinspires designers over and over to deploy them? These voice chips treat voice as simply words that require no identity, judgment or intentionality, no responsiveness to the sequence of exchanges that situate meaning; these voice chips treat interaction as triggered action/reaction that can be implemented with a sensor or two; these voice chips use pre-scripted voices and triggering systems, and do deploy them as human stand-ins. Although this is wrong (are you ever struck by the rightness of an utterance from a voice chip?), it is so obviously wrong that it is funny. But the point is not whether they are right or wrong. The point is: they are there, they persist, and they keep appearing.

So when I claim that the voice chips are direct evidence of interaction, I mean that they are in the sense that a caricature is direct evidence: recognizable, descriptive, reductive and absurd - but not correct, and certainly not comprehensive. The voices in the whole array of products reviewed in the essay, I think, are very effective caricatures of what we expect from information technologies. And what they represent is exactly the idea that there are discrete components (e.g. voices) that assemble into “interactivity,” or compose intelligence. They are exactly an embodiment of what Simon Penny refers to as the AI fallacy, and moreover they make it look silly. They parody the idea of pre-scripted interactivity precisely because they perform it; they parody linguistic theory because they fulfill the categories; they mock us all, incessantly. The voice chips have none of the glamour or scientific seriousness associated with sensor effector apparatuses, Artificial Intelligence or User Interface Design. They provide a rare opportunity to examine the technology we produce in which it actually looks patently ridiculous and somewhat stupid. There are few technologies under analysis that have such a general aura of the unnecessary, of excess marginality, and have such a peripheral relationship to “function.” In general we prefer to think about sociomaterial technology through the lens of heroic, gleaming nuclear warhead missiles, or complex neural nets, or simply important work technologies, rather than silly voice chips.

So the reductive move that both Suchman and Penny protest is in fact very deliberate and even the raison d’etre of the work. The singling out of voice chips from other things with interactive properties, like texts and graphics (Suchman), or my efforts to delineate them from technologies with much richer cultural contexts and histories, like the broadcast recording industry, is, as Penny correctly points out, not watertight - because this singling out explicitly provides the opportunity to play a game. The game is called: let’s pretend voice chips are interactive, let’s take them at their face value, let’s take them seriously, let’s pretend that they are interesting to listen to, let’s put aside our well-developed coping skills for tuning out elevators that incessantly state the obvious, and escalators telling us to mind our step - as if they care. Let’s instead play this game and seriously listen to voice chips - as if they were voices with histories and futures and stakes and agency, as if they were the voice of our collective investment in technological material culture, the mirror of our desires.

Okay, now walk into a shopping mall, or go about your daily activity and actively listen to these amputated voices. We start to realize that these voices are an alarm sounding, we start hearing other things in them… we listen for character and we hear a command control logic, we hear the control we have relinquished in trivial but crucial ways (when we think of the mass), we can hear the simplification of interaction that the designers intend, we can hear the voice of (from) the other side. Then the experience of voice chips actually does become enriched, because in the interactive co-production of conversation, we make up for the errors they enact - we compensate just as Lucy Suchman suggests. If we keep playing, perhaps we can question the very future of our technologies, without the glare, glamour and glimmer of complex systems.

Penny and Suchman are two of the most coherent and cogent theorists of the mess of technosocial interaction, and voice chips ratify their work. Voice chips also demonstrate their own limited repertoire of interaction scripts, and if they were to emerge as a genre of interaction there must be, or should be, alternative structures of participation.

======

Neologues: Lightswitch Interface Instructions

To operate this light switch, place hands on temple and clearly say, “mind power.” This will activate the switch (i.e. it will toggle), but does not turn on the light. Other uses of “mind power,” such as computer control through EEGs, also have this concrete command functionality, without the capacity for nuanced verbal control.

Neologues: Light Interface Instructions

To operate this light say, “click” brightly. Configures/scripts the user to perform as if he or she were a switch, like many “interactive” technologies.

Neologues: Elevator Interface Instructions

This elevator recognizes “up” and “down” in Spanish. There is no English language override, leaving many people stuck.

Neologues: Dog Translator Instructions

The appropriate dog growl is translated into human speech that says, “I am your loyal servant.” Addresses abstract reasoning capacities in dogs and in so doing defies human-centric views of interaction.

Neologues: Bang Interface

Tele-operation of a bomb scripts user interaction as if he or she is responsible. Although he or she did not design the interaction, nor place the bomb, and can only obediently follow instructions, it is the user who is considered liable. This is similar to the problematic technocorporate “the person who pulls the trigger” logic. While corporations profit from and script the interactions for obedient users, the user is made responsible for choices that are not entirely theirs.

Bone Transducer Interface

A “located information interface” for delivering information on office hours and availability. The interface requires physical contact between the head (the resonating chambers therein) and the 1” diameter plate, coupling high-fidelity sound that cannot otherwise be overheard. This transduction technology, elsewhere used in sound-compromised environments (e.g. the bite interface in scuba diving), is adapted to provide a private audio environment in a semi-public context. In this case, it is embedded in the wall and positioned at kneeling height, to frame the act of actually receiving information.

Dumb PowerMeter

A domestic power consumption meter with speech recognition. The meter displays nothing until the person guesses the first significant digit. This interaction depends on the user having an idea, being able to make an educated guess, and caring enough to know, rather than delegating the smart appliance to knowing/displaying the power consumption, all the time, to no one.

back to Beyond Chat introduction


          Hidden Figures        

Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race by Margot Lee Shetterly
In the early days of the space program, complex calculations were completed by a pool of “human computers,” research mathematicians who did the math by hand. Few people realize that those jobs were filled by women, including a small group of pioneering African American women who worked at Langley Memorial Aeronautical Laboratory. There were few opportunities available for these brilliant women, and their accomplishments have remained largely overlooked until now. Shetterly uses information gathered from archival documents, correspondence, and interviews to bring this little-known piece of history to life as a compelling narrative. Buzz is already building for the movie adaptation (https://www.youtube.com/watch?v=RK8xHq6dfAo) starring Taraji P. Henson, Octavia Spencer, and Janelle Monáe, which will be in theaters in January 2017. Highly recommended.  From Ingram Library Services (Beth Reinker, MSLS, Collection Development Librarian)


          How to use emotion AI for all the right reasons        

As artificial intelligence (AI) grows, its ability to understand and respond to your emotions is key. If machines, robots, and technology are to make better, more contextual judgments of human behaviors, the next step is ultimately Emotion AI. While emotion AI enhances the human computer interaction, enables brands to gain emotional insight in real-time and...

The post How to use emotion AI for all the right reasons appeared first on ReadWrite.


           “Privacy-shake”: a haptic interface for managing privacy settings in mobile location sharing applications         
Jedrzejczyk, Lukasz; Price, Blaine A.; Bandara, Arosha and Nuseibeh, Bashar (2010). “Privacy-shake”: a haptic interface for managing privacy settings in mobile location sharing applications. In: MobileHCI '10: Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, 7-10 September 2010, Lisbon, Portugal, ACM, pp. 411–412.
          A Review of 5 Journals Relating Psychology and Computers        

Human Computer Interaction (HCI) is a field that studies the interactive relationship between humans, computers and tasks. Its core principle is understanding how humans and computers can complete tasks interactively, and how such interactive systems can be built. HCI is a blend of disciplines drawn from science, engineering and art. Most important of all is understanding how computers can influence and change the tasks that people perform. Human factors is the study of how humans, with their behaviour, use machines and tools and create other technologies to get work done.
HCI focuses on the design, construction and support of computer systems with the human (in this case the user) in mind. This requires a clear understanding of how well a design fits the needs of users and their tasks. Matching a design to user needs and tasks involves analytical and research methods from both psychology and computer science.
Psychological theory makes a large contribution to the understanding of HCI. Psychology concerns itself with understanding, modelling, predicting and explaining what is, taken as a whole, one of the most complex phenomena there is: human behaviour. Psychology approaches the study of human behaviour by attempting to identify mental structures and the way they process information. Its methods include observation, surveys, laboratory experiments, case studies, simulations and other forms of research into many different aspects of human behaviour. Psychological theory covers broader topics still: motivation, emotion and consciousness; social, biological and organisational aspects; human development from birth to death; and both normal and abnormal behaviour. It is very difficult to say which areas of psychology are relevant to HCI, because every aspect of human behaviour influences how humans interact with computers, and computers influence human behaviour in every way. Designers and builders of computer systems have to make decisions based on assumptions about users' knowledge. Internet technology also strongly influences human behaviour, since it can connect one computer to another in a different part of the world.
One study found that for individuals in early adulthood, whose developmental task is to form intimate relationships with others, intimacy is a core element of relationship satisfaction; an individual who fails to develop such intimacy will experience isolation and feel loneliness (Erikson, in Tuapattimaja & Rahayu). The internet is now regarded as one way of reducing loneliness, since it has connected computers across the world. However, when a lonely individual spends a great deal of time alone in front of a computer, whether at the office or at home, that person sets aside less time for face-to-face relationships in the real world and reduces their opportunities to interact with other people.
Internet addiction is described by Young (in Tuapattimaja & Rahayu) as a syndrome marked by spending large amounts of time on the internet and being unable to control one's use when online; people showing this syndrome feel anxious, depressed or empty when not online, and its victims begin to hide the extent of their dependence on the internet.
This is reinforced by the research of Hardie & Tee, whose journal article notes that using the internet may be beneficial when kept at a "normal" level, but that high levels of internet use which disrupt daily life have been linked to a range of problems, including declines in psychosocial well-being, breakdowns in relationships, and the neglect of domestic, academic and work responsibilities. A recent epidemiological study by Stanford University medical researchers suggests that problematic internet use is a real concern: their telephone survey of 2,513 households revealed that one in eight Americans showed potential markers of excessive internet use.
Neuroticism and support from online social networks were significant predictors of excessive internet use. Excessive users were found to be younger and less experienced with computers than average or addicted users.
Excessive internet use reached 52%, far higher than internet addiction at only 8%. Although internet addiction accounts for only a small percentage, the 52% figure for excessive use deserves closer attention, because internet addiction begins with the pleasure of spending long periods online: gradually we become anxious when not using the internet, and over time we become internet addicts who find it hard to break away, with adverse effects on psychological aspects (neuroticism, extraversion, social anxiety, emotional loneliness, social loneliness, social support and internet social support).
The growth of the internet also serves up many attractive offers: instead of using the internet to finish school assignments or work, in reality many turn to online games. Adolescents need academic self-efficacy and social skills to meet their developmental responsibilities for achievement and positive social relationships, and the widespread online game addiction among adolescents today is suspected to be linked to a lack of these competencies. This is apparent in the negative impact of online game addiction in one Indonesian city, where four teenagers resorted to theft because they were addicted to the online game Point Blank. The problems that may arise in real life include low self-confidence, a poor self-image, a weak ability to control one's life, feelings of uselessness and difficulty maintaining relationships. Such problems put pressure on a person, and these difficulties become the motivation for adolescents to spend their time on, and bind themselves to, online games that allow players to interact, increasing the individual's chances of building relationships through a virtual world.
The hypotheses of this research were: that there is a negative relationship between academic self-efficacy and social skills taken together and online game addiction; a negative relationship between academic self-efficacy and online game addiction; and a negative relationship between social skills and online game addiction.
Computers and internet use do not, however, have only negative impacts; there are also positive ones when use is kept at a normal level. This can be seen in one study on introducing computers to children at an early age: children with more intensive computer interaction showed IQ gains considerably above the standard, from which it was concluded that technology, and computers in particular, influences children's psychological development.
Sources:
1. Pratiwi, P.C., Andayani, T.R. & Karyanta, N.A. (…). Perilaku adiksi game online ditinjau dari efikasi diri akademik dan keterampilan sosial pada remaja di Surakarta. Journal. Surakarta: Universitas Sebelas Maret. Accessed 26 November 2012. http://www.google.co.id/url?url=http://candrajiwa.psikologi.fk.uns.ac.id/index.php/candrajiwa/article/download/27/17&rct=j&sa=U&ei=ZUyyUITTOMfWrQeQm4CwAg&ved=0CB8QFjAF&sig2=7w5opI0ZghqrALZWw3SaDQ&q=abstrak+jurnal+kecanduan+internet&usg=AFQjCNHBwuOup8h8-iM3gFG2FvuLewBezA
2. Journal on loneliness. Accessed 26 November 2012. http://isjd.pdii.lipi.go.id/admin/jurnal/42094954.pdf
3. Setiawan, M.A., Widyastuti, A. & Nurhuda, A. (2005). Pengaruh pengenalan komputer pada perkembangan psikologi anak: studi kasus taman balita Salman Al Farisi. Journal. Yogyakarta: Universitas Islam Indonesia. Accessed 26 November 2012. journal.uii.ac.id/index.php/Snati/article/viewFile/1308/1067
4. Hardie & Tee (2007). Excessive Internet Use: The Role of Personality, Loneliness and Social Support Networks in Internet Addiction, Vol 5. Journal. Australia: Swinburne University of Technology. Accessed 26 November 2012. http://www.swinburne.edu.au/hosting/ijets/journal/V5N1/pdf/Article3_Hardie.pdf
5. Agusinta, D.R. & Pratiwi, D. Mengenal Interaksi Manusia dan Komputer. Journal: Universitas Gunadarma. http://www.scribd.com/doc/76914403/Jurnal-IMK-Dan-Psikologi. Accessed 26 November 2012.





          Advances in Usability and User Experience        

Advances in Usability and User Experience

Advances in Usability and User Experience: Proceedings of the AHFE 2017 International Conference on Usability and User Experience, July 17-21, 2017, The Westin Bonaventure Hotel, Los Angeles, California, USA. By Tareq Ahram and Christianne Falcao
English | PDF | 2018 | 718 Pages | ISBN : 3319604910 | 64.84 MB
This book focuses on emerging issues in usability, interface design, human computer interaction and user experience, with a special emphasis on research aimed at understanding human interaction and usability issues with products, services and systems for improved experience. It covers modeling as well as innovative design concepts, with a special emphasis on user-centered design, and design for special populations, particularly the elderly. Virtual reality, digital environments, heuristic evaluation and feedback of devices' interfaces (visual and haptic) are also among the topics covered in this book.


          Motion Feature Augmented Recurrent Neural Network for Skeleton-based Dynamic Hand Gesture Recognition. (arXiv:1708.03278v1 [cs.CV])        

Authors: Xinghao Chen, Hengkai Guo, Guijin Wang, Li Zhang

Dynamic hand gesture recognition has attracted increasing interest because of its importance for human computer interaction. In this paper, we propose a new motion feature augmented recurrent neural network for skeleton-based dynamic hand gesture recognition. Finger motion features are extracted to describe finger movements and global motion features are utilized to represent the global movement of the hand skeleton. These motion features are then fed into a bidirectional recurrent neural network (RNN) along with the skeleton sequence, which can augment the motion features for the RNN and improve the classification performance. Experiments demonstrate that our proposed method is effective and outperforms state-of-the-art methods.
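As a rough sketch of the architecture the abstract describes (not the authors' code: the feature dimensions, choice of LSTM cells, mean-pooling and class count below are all assumptions), the per-frame skeleton coordinates and the two kinds of motion features can be concatenated and fed to a bidirectional recurrent layer:

import torch
import torch.nn as nn

class MotionAugmentedRNN(nn.Module):
    """Bidirectional RNN over skeleton frames augmented with motion features."""
    def __init__(self, skeleton_dim=66, finger_dim=20, global_dim=6,
                 hidden=128, num_classes=14):
        super().__init__()
        in_dim = skeleton_dim + finger_dim + global_dim
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, skeleton, finger_motion, global_motion):
        # Each input is (batch, time, features); concatenate per frame.
        x = torch.cat([skeleton, finger_motion, global_motion], dim=-1)
        out, _ = self.rnn(x)            # (batch, time, 2 * hidden)
        pooled = out.mean(dim=1)        # average over the time dimension
        return self.classifier(pooled)  # gesture class scores

# Hypothetical usage: a batch of 4 sequences, 32 frames each.
model = MotionAugmentedRNN()
logits = model(torch.randn(4, 32, 66), torch.randn(4, 32, 20), torch.randn(4, 32, 6))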


          Hidden Figures        

In 1943, Virginia’s Langley Memorial Aeronautical Laboratory had a problem: It needed computers to help engineer better airplanes to guarantee American success over the aerial battlefields of World War II. The computers required were not the electronic devices we use today; instead, they were women with comprehensive mathematics backgrounds. Women who have largely been forgotten by history despite their role in shaping it.

 

And a core group of these "hidden figures" were black.

 

Using research and interviews, Margot Lee Shetterly highlights the lives of three “human computers” in particular — Dorothy Vaughan, Mary Jackson and Katherine Johnson — who worked at Langley during the war and, once it was established, the National Aeronautics and Space Administration. In doing so, she returns these women and their fellow “computers” to their proper place in the tale of one of mankind’s greatest achievements: space travel. The intertwined stories of each woman provide a deeper insight into the ingenuity, hard work and determination from all involved — male or female, black or white — that took us from airplanes to space shuttles.

 

Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race isn’t just about a group of mathematicians and engineers whose efforts helped break the sound barrier and put a man on the moon. Shetterly also delves into how the environment these women worked in was impacted by the racial and sexual politics and tensions of the 1940s, ’50s and ’60s and what it meant for each woman to gain the position she did. She celebrates these women and what they achieved despite the discrimination they faced due to their skin color and gender.

 

When you’re finished with the book, you can check out the movie, starring Taraji P. Henson, Octavia Spencer and Janelle Monáe, in theatres January 5, 2017. Also, readers wanting more information on the contributions of African Americans and women to the space race should check out We Could Not Fail by Steven Moss and Rocket Girls by Nathalia Holt.



           Exploring the motivations involved in context aware services         
ROAST, Christopher and ZHANG, Xiaohui (2012). Exploring the motivations involved in context aware services. In: HCI 2012 : 26th BCS Conference on Human Computer Interaction, Birmingham, 12-14 September 2012. Birmingham UK, BISL, 274-279.
          Emotiv EPOC and the Event Store – Streams of Consciousness        
I’ve found a way to combine two of my interests: the Emotiv EPOC headset and the Event Store. The EPOC is “a revolutionary personal interface for human computer interaction”. Couple this with “The awesome, rock-solid, super-fast persistence engine for event-sourced applications” that is the Event Store and you have: Streams of Consciousness The project is […]
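As a sketch of what appending headset readings to the Event Store might look like (the endpoint and headers follow the Event Store's older HTTP API on its default port 2113, but the stream name, event type and payload fields are all invented for illustration; a real reading would come from the Emotiv SDK rather than a literal):

import json
import uuid
import requests  # third-party HTTP client

# Hypothetical EPOC reading; real values would come from the Emotiv SDK.
reading = {"engagement": 0.62, "frustration": 0.18,
           "timestamp": "2014-01-01T12:00:00Z"}

# Append the reading as an event to a stream over the Event Store HTTP API.
resp = requests.post(
    "http://127.0.0.1:2113/streams/consciousness",
    data=json.dumps(reading),
    headers={
        "Content-Type": "application/json",
        "ES-EventType": "EpocReadingTaken",
        "ES-EventId": str(uuid.uuid4()),
    },
)
resp.raise_for_status()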
          Canine-Centered Computing        
I thought it might be time to give an update on the FIDO (Facilitating Interactions for Dogs with Occupations) project as recently I helped author a survey paper on the subject for Foundations and Trends in Human Computer Interaction called Canine-Centered Computing. The survey includes my work to understand how dogs might best use touchscreen […]
          Vampires poke around social media in exclusive 'Shadows' deleted scene        

By Brian Truitt

What Spinal Tap did for dim heavy-metal rockers, What We Do in the Shadows (on Blu-ray and DVD Tuesday) does for the bloodsucking contingent. Written and directed by stars Taika Waititi and Jemaine Clement (of Flight of the Conchords fame), the mockumentary follows vampire flatmates in New Zealand and how they get by in the world — in the case of Clement’s ornery Vladislav, that means 862 years of rockin’ it fang style. Like everyone, they have to get up to speed for modern times, and Vlad learns about social media from their human computer-programming guy Stu (Stu Rutherford) in this exclusive deleted scene from the horror comedy. There’s a woman named Caroline who is Vlad’s friend on Facebook but not really his friend per se — we’ve all been there, Vladster! — and he decides to “poke” her, though he’s a little irked at what she might do in return.



http://entertainthis.usatoday.com/2015/07/18/vampires-poke-around-social-media-in-exclusive-shadows-deleted-scene/
          Why Apple’s HomePod won’t just collect dust on your shelf        
Apple brought its smart speaker into the world with a shoddy name and an unconventional pitch, but anyone brash enough to cast the device to the side so easily will surely pay the price. Apple, unlike Amazon and Google, understands that selling glorified intelligence-in-a-box as a method of human computer interaction lacks foresight — people want a product, not a technology. Launching…
          Mitsuku wins Loebner Prize 2013        
Steve Worswick, botmaster of Mitsuku, was awarded the bronze medal and $4000 cash prize for creating the world's "most human computer" in the Loebner Prize Contest 2013, an annual Turing Test.  The contest this year was held at the Ulster University, Magee Campus, Londonderry/Derry, Northern Ireland.  Steve Worswick is a native of Yorkshire, UK, and has worked on Mitsuku for 9 years.   Mitsuku is based on AIML and hosted at Pandorabots.com.

This year the Loebner Prize Contest attracted 15 entries from around the world. Pandorabots submitted six of those entries, based on the results of an internal Divabot contest to select the best, most unique AIML bots hosted by Pandorabots. Of these six, three were selected for the Loebner contest finals. In fact, three of the four finalists were Pandorabots.

Each of the four finalists was interrogated by four judges and ranked on a scale of 1 to 4, from most human to least human. The judges, selected for their expertise in artificial intelligence, simultaneously interrogated a bot and a human confederate, and were asked to decide which entity was human and which was a robot. None of the programs fooled any of the judges into thinking that the bots were human, so the real contest became which bot ranked highest. The final results of the competition were:

1. Mitsuku (Steve Worswick - AIML and Pandorabots)
2. Tutor (Ron C. Lee - AIML and Pandorabots)
3. Rose (Bruce Wilcox - ChatScript)
4. Izar (Brian Rigsby - AIML and Pandorabots)

The contest day this year also featured, for the second time, a Junior Loebner Contest with teenagers serving as judges and human confederates.  In the junior contest, the results were:

1. Tie for first place (Mitsuku and Tutor)
2. Tie for second place (Rose and Izar)

The AIML bots all ran on a version of the open source Program AB, the reference interpreter for AIML 2.0, modified for the Loebner Prize contest. Specifically, the contest program implements the Loebner Prize Protocol, an obscure character-mode communications protocol specific to the contest. But because the bots were developed on a Pandorabots server running AIML 1.1, none of the finalists used any new AIML 2.0 features. Mitsuku, however, has some clever implementations of knowledge bases and deductive reasoning using AIML 1.1 alone.

We are pleased that another AIML bot besides ALICE has won the Loebner Prize.  This result shows the strength of the underlying technology for creating award-winning bots.  AIML is an excellent tool for designing high-quality, content-rich AI chat bots.  The finalists in this year's Loebner Prize contest, and its winner Mitsuku, demonstrate the quality of bots that can be written in AIML.
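To make the category idea concrete: an AIML bot is, at its core, a large set of pattern/template pairs. The sketch below is a toy Python illustration of that idea only — it is not Program AB or the Pandorabots engine, and the sample categories (including the Mitsuku-style name reply) are invented for the example.

    import re

    # Toy AIML-inspired matcher: categories pair an uppercase pattern (with "*"
    # wildcards) with a template that may reference the capture via <star/>.
    # Real AIML interpreters match via a Graphmaster with its own priority
    # rules; this sketch simply tries patterns in list order.
    categories = [
        ("HELLO", "Hi there!"),
        ("MY NAME IS *", "Nice to meet you, <star/>."),
        ("WHAT IS YOUR NAME", "My name is Mitsuku."),  # invented sample content
        ("*", "I am not sure I understand."),          # catch-all category
    ]

    def respond(user_input):
        # Normalise: strip punctuation and uppercase, as AIML preprocessing does.
        text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
        for pattern, template in categories:
            # "*" matches one or more words; everything else matches literally.
            regex = "^" + re.escape(pattern).replace("\\*", "(.+)") + "$"
            match = re.match(regex, text)
            if match:
                star = match.group(1).title() if match.groups() else ""
                return template.replace("<star/>", star)
        return ""  # unreachable while the "*" catch-all is present

    print(respond("My name is Steve"))  # -> Nice to meet you, Steve.

What separates a contest-winning bot from this sketch is scale and the surrounding machinery: tens of thousands of categories, recursive rewriting with <srai>, and state such as topics and predicates — the kind of knowledge-base techniques the winning entry built even on AIML 1.1.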

          History of computer        
History of computer hardware
[Image caption: The Jacquard loom was one of the first programmable devices.]

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.

The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.

Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.

This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.


          Web Accessibility Revealed: The Museums, Libraries and Archives Council Audit        

Marcus Weisen, Helen Petrie, Neil King and Fraser Hamilton describe a comprehensive Web accessibility audit involving extensive user testing as well as automatic testing of Web sites.


In 2004, the Museums, Libraries and Archives Council (MLA) commissioned a Web accessibility audit from City University London. MLA is the national development agency working for and on behalf of museums, libraries and archives in England and advising government on policy and priorities for the sector. The audit was inspired by a study conducted by City University London in 2003/2004 on the accessibility of 1,000 general Web sites for the Disability Rights Commission (DRC) [1]. This was the largest and most comprehensive Web accessibility audit undertaken, and unusual in prominently involving extensive user testing as well as automatic testing of Web sites. MLA wanted a similar methodology for the audit of museum, library and archive Web sites, thus contributing to the creation of baseline data of unprecedented scope and breadth.

At the Back of the Client's Mind

Why did MLA commission this audit of 325 museum, library and archive Web sites? In the Higher Education sector, where disability legislation has had a profound impact on the development of equality of opportunity between disabled and non-disabled students, UKOLN has undertaken Web accessibility audits. But a Web accessibility survey of this scope within the cultural sector has not yet been undertaken in the UK or overseas. The motivation springs from MLA's mission. Museums, libraries and archives connect people to knowledge and information, creativity and inspiration. MLA's mission is to lead the drive to unlock this wealth for everyone. MLA has also developed a widely respected transformational framework for museums, libraries and archives as learning organisations accessible to all [2]. The 'Inspiring Learning for All' framework and tool emphasise social inclusion and access for disabled people. An accessible Web site is an integral part of an accessible museum, library or archive. MLA thus needed to find out how accessible museums, libraries and archives currently are.

The policy context is provided by the Disability Discrimination Act (1995), in which the provision of goods and services covers Web sites, although these are only mentioned specifically in the relevant DRC Code of Practice [3]. It is also now widely known that e-government policies [4] require that public sector Web sites meet the World Wide Web Consortium's Web Content Accessibility Guidelines (WCAG) Level AA [5]. The findings of the audit should also allow MLA to consolidate its existing commitment to making ICT and ICT services in museums, libraries and archives accessible to disabled people. For example, 72% of the 4,000 public libraries taking part in the People's Network, an MLA-led project, have installed assistive technology. A couple of years ago, MLA advised the New Opportunities Fund to require NOF-digitise/EnrichUK projects to meet WCAG Level AA. We also produced basic guidance for developers of NOF Digitisation fund Web sites, which looks at how online cultural content can be made accessible to disabled people - as this is clearly beyond the scope of the Web Content Accessibility Guidelines and an area in which the cultural and educational sector can make a unique contribution [6]. MLA is a member of the EU-funded Minerva Consortium, a network of European organisations whose aim is to discuss, correlate and harmonise activities in the digitisation of cultural and scientific content. The Minerva Consortium has developed a Quality Framework for museum, library and archive Web sites that emphasises Web accessibility [7].
We expected that the findings would provide us with evidence on the basis of which future MLA action to support Web accessibility in the sector could be planned. This would complement the wealth of MLA's guidance for the sector on developing services which are inclusive of disabled people.

Methodology for the Audit

Data were collected from two samples of Web sites: 300 Web sites from museums, libraries and archives in England, and an international comparison sample of 25 Web sites from national museums around the world. The 300 MLA Web sites covered a variety of categories in each of the three main sectors, as shown in Table 1.

Table 1: MLA categories in each sector included in the audit

  Museum:  Academic, Local authority, Independent, National
  Library: Academic, Public, Specialist
  Archive: Academic, Business, Local authority, National, Specialist

Selection of the samples was undertaken by City University following criteria set out by MLA. 100 Web sites were chosen from each of the three sectors. They reflected the different types of institutions within each sector, the geographical distribution of these institutions and the size of their Web sites (i.e. number of pages).

Automated Testing of Web Site Home Pages

The home pages of the 325 Web sites were assessed against those WCAG checkpoints that can be automatically tested. It should be noted that only some of the WCAG checkpoints can be automatically tested. A number of tools are available to conduct such testing [8]. For this audit, the accessibility module of WebXM [9] was used. Following this initial audit, a representative sample of 20 English museum, library and archive Web sites was selected for in-depth automated and user testing. The selection criteria for the 20 sites were based upon the sub-categories of each sector, the varying popularity of the sites, whether they were embedded into a host site and the results of the automated testing. For these 20 Web sites, up to 700 pages from each site (or the whole site if smaller) were tested with the WebXM accessibility module.

User and Expert Testing of Web Sites

A User Panel of 15 disabled people was established, composed of equal numbers of people from three disability groups: blind, partially sighted and dyslexic. Previous research conducted into Web site accessibility by City University has shown that these three groups are currently the most disenfranchised users of the Web [10]. The Panel members reflected, as much as possible, the diversity of English people with disabilities in terms of a range of relevant factors: age, sex, technology/computing/Internet experience, and assistive technologies used. Each panel member assessed four Web sites, undertaking two representative tasks with each site. The representative tasks were selected by MLA and City University experts and were representative of what users might typically attempt when visiting the site, such as establishing the opening times for an institution. Evaluations were run individually at City University. Panel members were provided with any assistive technologies they would normally use, such as JAWS (a screenreader which converts text on a Web page into synthetic speech for blind users [11]), ZoomText (software which allows partially sighted people to enlarge information on a Web page and change parameters such as text and background colour [12]) or ReadPlease (software which allows dyslexic people to make a range of adaptations to information on a Web page [13]). All 20 sites were evaluated three times - once by a member of each of the three disability groups, in a randomised order. After undertaking the tasks, Panel members were asked a range of questions to gauge their views as to the accessibility of the site, such as how easy it was to perform the tasks.
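As a flavour of what the automated side of such an audit does, here is a minimal Python sketch of one machine-testable WCAG 1.0 checkpoint - text equivalents for images (checkpoint 1.1). It is only an illustration built on the standard library, not the WebXM module the study used, and the sample markup is invented:

    from html.parser import HTMLParser

    # Minimal checker for one automatable WCAG 1.0 checkpoint: every <img>
    # should carry a text equivalent in its alt attribute (checkpoint 1.1).
    class AltTextChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.flagged = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                alt = attrs.get("alt")
                # A missing alt is a violation; an empty alt may be fine for a
                # purely decorative image, which is exactly the sort of case
                # that needs a manual check rather than an automated verdict.
                if alt is None or not alt.strip():
                    self.flagged.append(attrs.get("src", "<unknown image>"))

    page = '<img src="logo.gif"><img src="map.png" alt="Floor plan of the museum">'
    checker = AltTextChecker()
    checker.feed(page)
    for src in checker.flagged:
        print("Missing or empty alt text:", src)  # -> logo.gif

The empty-alt case illustrates why the audit reports both hard violations and manual 'warnings': a tool can spot that something might be wrong, but a human still has to decide whether it actually is.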
Results of the Automated Testing

The 14 WCAG guidelines comprise 65 checkpoints, and each checkpoint has a priority level (1, 2 or 3) assigned to it based on the checkpoint's perceived impact on accessibility. Violations of Priority 1 checkpoints are thus thought to have the largest impact on a Web site's accessibility, while violations of Priority 3 checkpoints are thought to have less impact. If a Web site has no Priority 1 violations, it is said to be Level A-conformant; if it has no Priority 1 or 2 violations, it is said to be Level AA-conformant; and if it has no Priority 1, 2 or 3 violations, it is said to be Level AAA-conformant.

Priority 1 Conformance (Level A)

Of the 300 MLA home pages tested, 125 (41.6%) had no WCAG Priority 1 checkpoint violations that automated testing could detect. However, all of these 125 home pages did possess at least two WCAG Priority 1 manual 'warnings' (that is, the automatic testing tool suggests a manual check because it has detected something that might be a violation of a checkpoint). For pages to be WAI Level A-conformant they must also pass these manual checks, and it is almost certain that some of these home pages would have failed some of them. The 100 Web sites from the archive sector achieved the best results, with 51 home pages satisfying automated Level A conformance, compared to 34 in the museum sector and 40 in the library sector.

Priority 1 and 2 Conformance (Level AA)

A total of 10 home pages (3.0%) of the 300 Web sites audited had no detectable Priority 1 or Priority 2 checkpoint violations, so were automated Level AA-conformant. Once again the archive sector was the strongest, with 6 sites recording no automated AA violations, compared to 1 museum and 3 library sites. However, these sites did carry a minimum of 19 Priority 1 and 2 manual 'warnings', so may not have been AA-conformant.

Priority 1, 2 and 3 Conformance (Level AAA)

Only one Web site of the 300 MLA sites tested achieved AAA conformance, having no automated Priority 1, 2 or 3 checkpoint violations, though it must be noted that the site generated 32 manual 'warnings'. The Web site was from the archive sector.
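The three conformance levels reduce to a simple rule over the priority counts. A minimal sketch of that rule follows (the counts in the example are invented, and real conformance additionally requires passing the manual checks):

    def wcag1_conformance(p1, p2, p3):
        """Map counts of Priority 1/2/3 checkpoint violations to a WCAG 1.0 level."""
        if p1 > 0:
            return "None"  # any Priority 1 violation blocks even Level A
        if p2 > 0:
            return "A"     # no Priority 1 violations -> Level A
        if p3 > 0:
            return "AA"    # no Priority 1 or 2 violations -> Level AA
        return "AAA"       # no violations at any priority -> Level AAA

    # A home page with no automated P1 hits but 19 P2 and 3 P3 hits stops at A:
    print(wcag1_conformance(0, 19, 3))  # -> A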
Frequency of Violations

The average number of different WCAG checkpoints violated per page, and the total frequency of violations per page, are shown in Table 2.

Table 2: Average number of checkpoints violated and total frequency of violations per Web page

  Automated violations:  5.9 different checkpoints, 56.9 instances per page
  Manual 'warnings':     34.3 different checkpoints, 159.0 instances per page
  Total:                 40.2 different checkpoints, 215.9 instances per page

The average MLA home page thus has nearly 216 instances of potential stumbling blocks for users. This is particularly worrying when we consider - as the user and expert testing results below reveal - that many of the problems users actually encounter are covered by the checkpoints that do indeed require manual checking. Across the three individual sectors, library Web sites had the most Priority 1, 2 and 3 automated and manual checkpoint violations; archive sites had the fewest Priority 1, 2 and 3 automated violations and instances; and museum sites had the fewest Priority 1, 2 and 3 manual 'warning' violations and instances (see Table 3).

Table 3: Number of checkpoints violated and frequency of violations for the different sectors

  Sector                 Checkpoints violated (automated)   Instances (automated)   Checkpoint warnings   Frequency of warnings
  Museum                 5.8                                49.4                    32.4                  123.8
  Library                6.2                                66.8                    36.5                  200.6
  Archive                5.6                                54.5                    34.1                  152.2
  Total                  5.9                                56.9                    34.3                  159.0
  International museum   6.9                                67.9                    35.2                  171.5

The sub-categories within each of the three sectors also revealed some clear patterns:

  • Museums - National museum Web sites had the largest average number of Priority 1, 2 and 3 automated and manual checkpoint violations (44.0 per page). Academic (36.9), local authority (35.9) and independent (37.7) museums fared better.
  • Libraries - Academic libraries had fewer violations, and substantially fewer instances of Priority 1, 2 and 3 manual checkpoint violations (111.4), than public (241.8) and specialist (220.9) libraries.
  • Archives - No substantial differences between sub-categories.

The 25 international museum Web sites were also evaluated using the accessibility module of WebXM. A large number of violations (42.1) was recorded, comparable with the English national museum findings; hence both sub-categories showed a similarly poor level of conformance with the guidelines. Overall, the results of the automated testing show that MLA Web sites are not highly conformant to the appropriate accessibility guidelines, with slightly less than half (41.6%) passing the basic accessibility level (Level A) and very few (3%) passing the government target of Level AA. These results are very similar to those found in a survey of UK university Web sites undertaken in 2002 [14], in which 43.2% of home pages achieved Level A and 2.5% achieved Level AA. However, it must be noted immediately that these figures compare very well with those from the DRC study, in which only 19% of general Web site home pages achieved Level A and 0.6% achieved Level AA.

User Testing

The 15 members of the Panel were asked to complete a total of 120 tasks with the 20 Web sites selected for in-depth testing (20 Web sites x 2 tasks per Web site x 3 evaluators per Web site = 120). Of these, 119 (99%) were logged and analysed. Each evaluation was observed by experts at City University, who recorded whether a task was successfully completed, any problems that occurred and the participants' responses to a set of questions. The Panel members succeeded in 75.6% of the attempted tasks and failed in 24.4% of them. Blind participants experienced the most difficulty, with a success rate of only 66.7%, compared to a combined average of 80.0% for the other two user groups. Failure to complete tasks was not attributable to a minority of the participants, but to a broad cross-section of the User Panel. Between the three MLA sectors there was also a notable difference in success/failure rates, with archive sites resulting in the most task failures (30.6%). This failure rate is almost 9% higher than the combined average of the other two sectors (21.7%).

Table 4: Task success rates for the different user groups

  Blind:             66.7%
  Dyslexic:          82.5%
  Partially sighted: 77.5%

The members of the Panel were also asked to rate the ease of navigation when attempting a task. The mean for all groups was 4.6. No significant effects were noted between the different user groups, but more than half of the Panel members did feel 'lost' on at least one occasion when exploring the Web sites, especially in relation to library and archive sites (60% of the panel felt lost at least once when using sites in these sectors). The Panel members, when asked about the extent to which their impairments were taken into account, gave a mean rating of 3.4 on a scale of 1 to 7. This is not a ringing endorsement of MLA organisations' attention to accessibility. At best we might conclude that the User Panel was 'non-plussed' with the Web sites they used in terms of the extent to which they thought the sites took their impairments into account. The problems observed by the experts at City University and the problems reported by the Panel members were collated and categorised. Overall, 189 instances of problems were identified during the user testing evaluations; 147 (78%) related directly to checkpoints in the WAI guidelines, and 42 (22%) were not covered. Table 5, below, outlines the most common problems that users encountered. These problems undoubtedly explain the failure rates summarised earlier.
Table 5: Key problems experienced by the User Panel (all disabilities combined), with number of instances and whether the problem is covered by the WAI guidelines

  1. Target of links not clearly identified - 30 instances (in WAI: yes)
  2. Information presented in dense blocks with no clear headings to identify informational content - 17 (yes)
  3. Inappropriate use of colours and poor contrast between content and background - 14 (yes)
  4. Navigation mechanisms used in an inconsistent manner - 13 (yes)
  5. Links not logically grouped, no facility to skip navigation - 10 (yes)
  6. Text and images do not increase in scale when browser option selected - 7 (yes)
  7. External information and navigation on page, not associated with page content - 6 (no)
  8. Important information not located at top of list, page etc. - 6 (yes)
  9. ALT tags on images non-existent or unhelpful - 6 (yes)
  10. Graphics and text size too small - 5 (no)
  11. Distraction and annoyance caused by spawned and pop-up windows - 5 (yes)
  12. Labels not associated with their controls - 5 (yes)
  13. Images and graphical text used instead of plain text - 5 (yes)

The 13 problems listed in Table 5 constitute 68% of the total number of problems uncovered during the user testing. It is also worth noting that over half of these problems relate to orientation and navigation (problems 1, 2, 3, 4, 7, 8 and 12). In fact, of the five most frequent problems - which alone account for 44% of the total number of instances - four are orientation and navigation problems. The Panel members identified many of the same problems, and these were also concentrated around orientation and navigation issues.

Five Most Frequent Problems

Poor Page Design

Poor page design (in terms of layout) led to a recurrent orientation problem for all the user groups involved in the evaluations. Both the experts at City University and the members of the User Panel considered many sites to have overly complex and 'cluttered' pages with dense blocks of text. No clear indication of main headings, secondary headings and so on was a recurring problem throughout the museum, library and archive domains. While sighted users could infer some of this logic from text sizes, colour coding, etc., blind users did not have access to this and so pages were deemed 'illogical', meaning they lacked a logical structure.

Ambiguously Named Links

Ambiguously named links that led to unexpected content were responsible for many of the navigation problems users encountered, i.e. opening times were often found under 'Contact Us'. As one dyslexic user commented: "... important information like opening times and disabled access should not be hidden under other obscure titles ... why can't they just put a link saying 'Opening Times'?" The Panel members also uncovered issues that were specific to their individual impairments; for example, blind users identified that ALT tags for images, pictures and graphical text were often non-existent or unhelpful. One site used graphical text for its 'Accessible Site' link but failed to provide any form of ALT tag for it, therefore blind users were unaware that this option even existed.

Colour Scheme and Contrast

The colour scheme and contrast used for page designs accounted for many of the complaints from the dyslexic and partially sighted members of the User Panel. While some of these complaints were of a purely subjective nature, the colour scheme often affected these users' ability to perform tasks, particularly when the contrast between the text and the background was inadequate. Pale text on pale backgrounds was a common problem. Moreover, different users benefit from different colour schemes. For example, while many partially sighted users appear to benefit from a very strong contrast such as yellow text on a black background, one dyslexic user found this 'too glaring' and preferred black text on a pastel blue background. Although colour schemes can be changed by users (e.g. by attaching their own style sheets to their browser), very few users seemed to be aware of this.
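Contrast is one of the few reported problems with an exact metric. This audit predates WCAG 2.0, but the contrast-ratio formula later standardised there quantifies what the panel was describing; a small Python sketch of that published formula follows (the colour values are invented examples):

    def relative_luminance(rgb):
        """WCAG 2.0 relative luminance of an sRGB colour, channels 0-255."""
        def linearise(c):
            c = c / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearise(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        """Ratio of lighter to darker luminance; WCAG 2.0 AA asks for 4.5:1."""
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    # "Pale text on pale backgrounds": light grey on white fails comfortably.
    print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))  # ~2.32

Black text on a white background, for comparison, scores the maximum 21:1 under the same formula.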
No 'Skip Navigation' Link

The absence of a 'skip navigation' link at the top of pages, enabling blind users to jump to the main content of a page (by-passing the page's top navigation), was a specific problem for Panel members who used screen readers. When such links were missing, blind participants were compelled to listen to the navigation elements that commonly appeared at the top of pages: repetitive information they often described as audio 'clutter'. It was obvious that the users found moving through this clutter very frustrating and exhausting. While JAWS, the most common screen-reading software the Panel members used, does have some support for skipping over this clutter relatively efficiently, very few users were seen to use this function.

External Navigational Links

In respect of external navigational links, the pre-evaluation research conducted by the experts at City University identified that numerous academic and local authority museum, archive and library sites are integrated (relatively) into a host institution's external site. This specific issue was addressed in the user testing evaluations, where it was noted as causing confusion to all user groups. Users were commonly unaware that the external navigational links did not directly relate to the main content of the page: "keeps giving me information about other things... information about Civic Centre. Think I must keep wandering off" (comment by a partially sighted participant).

Positive Aspects

In addition to the specific problems they encountered, the Panel members were also asked to report what they particularly liked about the sites they evaluated. Perhaps unsurprisingly, many of the positive aspects were the opposite of the problems outlined above. For example, partially sighted participants appreciated "good use of colours to highlight visited links". Blind users enjoyed logically structured pages; as one user put it, "proper links, labelled individually and properly mean no trawling is necessary." The other user groups appeared to share these sentiments, with users liking sites that had clear navigation mechanisms, logical page layouts, clear contrast, reasonably sized text and straightforward language.

MLA's Initial Response to the Findings

City University presented MLA with a set of recommendations, which can be summarised as follows:

  1. Museums, libraries and archives should make Web accessibility an integral part of the Web development process, audit current accessibility, develop policies and plans, make Web accessibility a criterion in the Web design brief and involve disabled people
  2. Promote guidance and good practice
  3. Give consideration to user groups whose requirements are not documented in the WAI guidelines
  4. Harness the unique contribution of museums, libraries and archives and their presentation and interpretation of their collections in accessible ways to specific groups of disabled people

MLA endorses all these recommendations.
A planned approach to change is part of the 'Inspiring Learning for All' transformational vision for the sector (Recommendation 1). MLA has coordinated the Jodi Mattes Web Accessibility Awards 2005 [15] for museums, libraries and archives, working in partnership with the Museums Computer Group [16] and the Department of Museum Studies at the University of Leicester [17]. The aim of the awards is to promote good practice on accessibility in the sector (Recommendation 2). The idea that deaf people should be able to access information in British Sign Language (BSL) has been neglected for too long, probably because we still take a tick-box attitude to Web accessibility and limit its meaning to meeting (or not meeting) guidelines such as WCAG (Recommendation 3). The Milestones Museum [18] Web site, one of the first to provide visitor information systematically in BSL, won a Commendation for Innovation at the Jodi Mattes Awards. It demonstrates what should become commonplace in the future (BSL was recognised as an official language of the UK in March 2003 [19]). Recommendation 4 deserves everyone's attention in the cultural and educational sectors. An accessible Web site is but the gateway to the enjoyment of accessible online collections and learning resources. These make our sectors' Web sites different from any other Web sites. There is no reason why disabled people, including blind and partially sighted people, should be excluded from the enjoyment of online collections and interpretation. Visual descriptions of online exhibits can be provided. High-contrast images and illustrations can be provided for partially sighted people, and tactile representations of many kinds can be provided for blind people. For some this may still sound like science fiction, but this is precisely what the highly innovative i-Map Web site of the Tate Modern has already done [20]. In the first month of its opening, some 3,000 images suitable for reproducing in tactile format for blind people were downloaded.

In conclusion, the MLA sector does comparatively well at Web accessibility - better than the Web as a whole, though not quite as well as the Higher Education sector. However, what stands out is the scale of the task that lies ahead, as well as the exciting promise for outstanding educational and creative applications. We need a thousand i-Maps and Milestones museums.

References

  1. Disability Rights Commission. (2004). The Web: Access and Inclusion for Disabled People. London: TSO. Available at: http://www.drc-gb.org/publicationsandreports/report.asp
  2. Inspiring Learning for All http://www.inspiringlearningforall.gov.uk
  3. The Disability Rights Commission - Codes of Practice http://www.drc-gb.org/thelaw/practice.asp
  4. Illustrated Handbook for Web Management Teams (html) http://www.cabinetoffice.gov.uk/e-government/resources/handbook/html/htmlindex.asp
  5. Web Content Accessibility Guidelines 1.0 http://www.w3.org/TR/WAI-WEBCONTENT/
  6. Good Practice Guide for Developers of Cultural Heritage Web Services http://www.ukoln.ac.uk/interop-focus/gpg/
  7. Minerva. Ten Quality Principles http://www.minervaeurope.org/publications/tenqualityprinciples.htm
  8. A list of accessibility testing tools can be found at: Evaluation, Repair, and Transformation Tools for Web Content Accessibility http://www.w3.org/WAI/ER/existingtools.html
  9. Watchfire http://www.watchfire.com/
  10. Disability Rights Commission. (2004). The Web: Access and Inclusion for Disabled People. London: TSO. Available at: http://www.drc-gb.org/publicationsandreports/report.asp
  11. Freedom Scientific http://www.freedomscientific.com/
  12. Ai Squared Home Page http://www.aisquared.com/
  13. ReadPlease http://www.readplease.com/
  14. An accessibility analysis of UK university entry points. Brian Kelly, Ariadne, issue 33, September 2002 http://www.ariadne.ac.uk/issue33/web-watch/
  15. MLA - Disability http://www.mla.gov.uk/action/learnacc/00access_03.asp
  16. Home Page of museums computer group http://www.museumscomputergroup.org.uk/
  17. University of Leicester - Department of Museum Studies http://www.le.ac.uk/museumstudies/
  18. Milestones Museum Home Page http://www.milestones-museum.com/
  19. ePolitix.com Forum Brief: Compensation culture http://www.epolitix.com/EN/ForumBriefs/200303/
  20. i-Map http://www.tate.org.uk/imap/

Author Details

Marcus Weisen
Museums, Libraries and Archives Council
Web site: http://www.mla.gov.uk/

Helen Petrie
Centre for Human Computer Interaction Design, City University London
Web site: http://hcid.soi.city.ac.uk/

Neil King
Centre for Human Computer Interaction Design, City University London
Web site: http://hcid.soi.city.ac.uk/

Fraser Hamilton
Centre for Human Computer Interaction Design, City University London
Web site: http://hcid.soi.city.ac.uk/

Article Title: "Web Accessibility Revealed: The Museums, Libraries and Archives Council Audit"
Author: Marcus Weisen, Helen Petrie, Neil King and Fraser Hamilton
Publication Date: 30-July-2005
Publication: Ariadne Issue 44
Originating URL: http://www.ariadne.ac.uk/issue44/petrie-weisen/intro.html




          Focus on African American History        

Central Rappahannock Regional Library’s Rappahannock Reads runs throughout the month of February and is an opportunity for everyone in the community to read and discuss the same book. CRRL’s 2017 Rappahannock Reads title is Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race, by Margot Lee Shetterly, which tells the true story of the African American female mathematicians who went to work as “human computers” at the National Advisory Committee for Aeronautics (NACA) in Hampton, Virginia, during World War II.


          Book Club        

Our Book Club returns at 6:30 p.m. on Tuesday, September 19.  The first book to read is "Hidden Figures" by Margot Lee Shetterly.

This is the riveting true story of four exceptionally talented African American women who were called into service during the labor shortages of World War II.  These dedicated female mathematicians, known as "human computers," used pencils, slide rules, and adding machines to calculate the numbers that would launch rockets, and astronauts, into space.  Their careers spanned nearly three decades and changed our country's future.

Stop by the library soon to pick up your copy of "Hidden Figures."

Date: Tue, 09/19/2017, 6:30pm to 7:30pm
          BWW Feature: Strong Women Get Their Due on Stage in August        

If you were captivated by the film Hidden Figures and its story of the women known as "human computers" who saved John Glenn's NASA space mission, get ready for Lauren Gunderson's play SILENT SKY, opening this month at the Long Beach Performing Arts Center.

Jennifer Cannon stars as Henrietta Swan Leavitt (1868-1921), an astronomer and "human computer" hired by the Harvard Observatory to study the variable luminosity of the stars. She discovered more than 2,400 variable stars, about half of the known total at the time, and her intense observation of the Cepheids, in particular, would later allow Edwin Hubble to determine the presence of galaxies beyond the Milky Way. She also developed a standard of photographic measurements that was accepted by the International Committee on Photographic Magnitudes, called the "Harvard Standard." All this while never being allowed to handle a telescope (it was truly a man's world).

"In the play, the very real mathematical relationship discovered by Leavitt is explained not with numbers, but with notes," says Gunderson. "Henrietta's sister, Margaret, is a pianist and just when Henrietta can't stare at the tables of measurements describing her Cepheid variable stars any longer, she listens...then looks up...then sees/hears what she's been searching for: a pattern. That moment is what made me write this play, because it could only work in a play. It's theatrical, it's musical, it's not a moment of dialog but a moment of overwhelm, everything changes in this moment."

The International City Theatre production of SILENT SKY runs August 23 - Sept 10 (opening night 8/25 includes a post-show reception with the actors). InternationalCityTheatre.org

Coincidentally, another one of Lauren Gunderson's works - ÉMILIE: LA MARQUISE DU CHATELET DEFENDS HER LIFE TONIGHT - opens the same weekend at the Greenway Court Theatre on Fairfax, produced by Coeurage Theatre Company. In this play, Gunderson examines the life of the 18th century French physicist and mathematician, whose commentary and translation of Newton's Principia on the basic laws of physics is still used today.

Émilie, played by Sammi Smith, returns for one night to defend her legacy and finish the groundbreaking work for which she was denounced until after her death. The witty, sexy, and passionate exploration recounts her intellectual and romantic entanglements, including an affair with Voltaire.

Director Julianne Donelle says, "To be a woman directing a play about a woman that's written by a woman is a dream. There are so many incredible female stories to be told, and yet, so few are brought to life. As a female director, I want to tell stories I can connect to. I may not have anything in common with a physicist from the 18th century but, at its heart, it is a tale of humanity, of one woman's struggle to make strides in a male-dominated society tied in with all the trials of life. It's a story of legacy, loss, love, and heartbreak that transcends time and space, and it feels so relevant and timely even though Émilie lived almost 300 years ago."

ÉMILIE: LA MARQUISE DU CHATELET DEFENDS HER LIFE TONIGHT runs August 26-September 17. Coeurage Theatre Company's productions are all Pay What You Want and fulfill their mission of making impassioned theatre accessible for everyone. Coeurage.org

At Laguna Playhouse, Kelly McIntyre is Janis Joplin in the Tony-nominated rock musical A NIGHT WITH JANIS JOPLIN, based on the life of the explosive singer. McIntyre joined the first national tour in 2016 and went on to headline three more productions at Capital Repertory Theatre, Barter Theatre and A.C.T.

"Through these past two years playing Janis Joplin," says McIntyre, "I have learned endless lessons about myself, my career, my relationships, and most importantly, about love. The balance between all of those facets is something I'm still learning how to master but being the medium for Janis has given me the chance to look deep inside myself at who I really am, the way Janis so proudly did for herself. As a young performer, it would be easier for me to just take on the traits that the business wants to see, but I have, in Janis' footsteps, actively chosen to stand up here and tell you the truth. Be myself. It's an endlessly fun and educational journey."

Artistic Director Ann E. Wareham adds, "The roof is about to be blown off the Playhouse with this spectacular show!" which runs August 16 - September 10 (opening 8/20). If songs like "Me and Bobby McGee," "Piece of My Heart," "Mercedes Benz," and "Cry Baby" were part of your journey to adulthood, this musical has your name all over it. The show celebrates Janis and her biggest musical influences from Aretha Franklin to Etta James, Odetta to Nina Simone, and the great Bessie Smith. LagunaPlayhouse.com [Pictured: Kelly McIntyre as Janis Joplin. Photo by Randy Johnson]

And finally, Echo Theatre Company celebrates its 20th anniversary with an evening of five world-premiere short plays by female writers who tackle misogyny and the treatment of women in today's political climate in NEVERTHELESS, SHE PERSISTED, August 23 - September 4 (opening night 8/26). Plays featured include YAJU (written and directed by Mary Laws), SHERRY AND VINCE (written by Charlotte Miller, directed by Tara Karsian), AT DAWN (written by Calamity West, directed by Ahmed Best), DO YOU SEE (written and directed by Sharon Yablon) and VIOLET (written by Jacqueline Wright, directed by Teagan Rose).

Wright's ability to stay open during the writing process was an important part of her play's journey. She says, "I wrote a different play before finally sitting down to write VIOLET, the one that frightened me. I wanted to tackle the issue that most haunts me as a woman, rape. But I did not see the point of writing what has already been well expressed. And I didn't want to use it as a dramatic device, or to gain sympathy for my character. I wanted the play to be about my characters, not the rapist or the salacious details of the event."

"As I sat at my desk, feeling vulnerable in the darkness of not knowing what to write, the characters opened up. And as I wrote, I discovered what really engaged me was the friend on whom the duty fell to 'be there.' So the play became about 'how we show up.' To allow another person to see you sick, or hurt, or violated, is courageous. And being the friend who witnesses that vulnerability is also courageous, and vulnerable. That's what this play is about - the friendship. Not the event. But the silent, awkward and all too important and un-salacious relationship between women, when there can be nothing said. Or fixed." EchoTheaterCompany.com


          Björn Hartmann and future design tools        
I recently had the opportunity to meet with Björn Hartmann, a human computer interaction researcher who is currently finishing his PhD in Computer Science at Stanford and will soon be teaching at Berkeley.