Wednesday, May 4, 2011

I am proud to be within a field which is fundamentally changing not simply the way we communicate, but also the very nature of being.

Edward Nanno
IST 649, Human Computer Interaction

Introduction
My initial reason for taking IST 649 was based on feedback from colleagues that the professor was an expert in this new area called Human Computer Interaction.  I was also told that the course would be engaging, as it crosses interdisciplinary areas while focusing on utilizing technology to our maximum advantage.  Some initial cursory research indicated that both of these statements were true and that I would benefit from taking such a course.  In fact, in one of my previous jobs at a large custody bank in New York City (The Bank of New York), process design and reengineering were duties assigned to me.  While we were able to reengineer specific processes for optimal performance, I lacked the tools to apply these techniques to computing system interfaces.  I believe that if I had taken this course, my understanding of how to measure performance within the system would have been invaluable.  But I digress.
Learnings
Upon opening the course textbook, I was immediately captivated by the story about the NYSE computing upgrades.  I was hooked, as I could relate to the topic.  HCI is focused upon goal-oriented, real-world contextual solutions.  Within this framework, creating fundamental metrics that help guide the development of effective HCI within an organization is essential.  TSSL and 'Fit' are two methodologies created to assist the designer of an HCI throughout the life-cycle process.
The TSSL model is an important aspect of the process map.  As it breaks the system down into the task, semantic, syntactic, and lexical levels, an ontology appears.  This ontology is usually the joint product of the system designer and the programmer.  The relationship between these two parties is invisible to the end user, but if the interface is a good one, then the designer and programmer must be using some, if not all, of these principles.
The 'Fit' model seems to acknowledge that humans (surprisingly) use computers.  This human element seems to have been overlooked in many of the legacy designs that companies created in the past, and that oversight has led to many physical problems.  Why else would ergonomics be necessary?  In motion studies done in the industrial age, observations about the movement of workers led to improvements in how the work was done.  These improvements led to greater efficiency, a main goal of any organization.
These two models gave me some fundamental tools to work with when creating my first web page (assignment one).  Having worked on computers since the early 1980s in high school, many of these design and format issues had never crossed my mind.  In fact, in the early evolution of computer software, the human component seems to have been an afterthought.  Clearly, semantic pointers such as a printer icon were not feasible early on, yet early users were not really engaged with the system and did not have the feeling of control over the software that we have now.  So as I went about my task of creating the web page, I felt control over the process.  I could choose the colors and the amount of scrolling required, and add tabs to lead the user to different information.  I felt, for the first time, an amount of control over a system that I had never experienced before.  Not only did I have control, but I knew my decisions were based on actual research, so they were sound decisions.  This was a great feeling, since most people feel powerless in the face of technology.
The second assignment, evaluating two websites using HCI concepts, built upon my initial satisfaction.  After creating my own page, I could now look with an educated eye at other designs and formats.  What I found is that while some websites seem to cater to the potential user (I chose the Aljazeera news site), there is a plethora that seem never to have considered HCI at all!  In my analysis of Zynga's website and their popular social networking game, Mafia Wars, I was shocked to see 19 different colors on the landing page with no obvious 'correct' next move.
The third assignment was profound in that Morae gives both an objective and a subjective analysis of the usability of a web site.  Having a tool that records mouse movement, time between clicks, position of the pointer on the page, and time spent on task really objectifies the analysis.  It is not debatable whether a person clicked on the wrong link or took twice as long on a task as another participant.  This gave me a sense that real 'hard' science was happening in my evaluation.  I wasn't just saying that 'so and so' didn't interact well with the site; I actually had data to confirm the observation.  The subjective video profile can add some confirmation, but it is really an ancillary aspect of the system, since interpreting a frown or a puzzled look requires judgment.
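The kind of objective measures Morae records can be summarized with simple statistics.  As a rough sketch (the log format, field names, and values below are hypothetical illustrations of my own, not Morae's actual export schema), time on task and the gaps between clicks might be computed like this:

```python
# Hypothetical usability log: (participant, seconds, event) tuples.
# This format is an illustration, not Morae's actual export schema.
log = [
    ("P1", 0.0, "task_start"), ("P1", 2.1, "click"),
    ("P1", 5.8, "click"), ("P1", 9.4, "task_end"),
    ("P2", 0.0, "task_start"), ("P2", 7.3, "click"),
    ("P2", 19.6, "task_end"),
]

def time_on_task(log, participant):
    """Elapsed seconds between task_start and task_end for one participant."""
    times = {event: t for p, t, event in log
             if p == participant and event in ("task_start", "task_end")}
    return times["task_end"] - times["task_start"]

def inter_click_intervals(log, participant):
    """Gaps between successive clicks; long gaps can signal hesitation."""
    clicks = [t for p, t, event in log if p == participant and event == "click"]
    return [b - a for a, b in zip(clicks, clicks[1:])]

print(time_on_task(log, "P1"))  # 9.4
print(time_on_task(log, "P2"))  # 19.6 (twice as long on the same task)
print(inter_click_intervals(log, "P1"))
```

With numbers like these, a statement such as "P2 took twice as long on task as P1" is a matter of record, not opinion.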

The group project was very successful in that we all came together on the task with varying views on how to proceed.  It was quite easy to design and create the first 'bad example'.  It was a little harder to create the second 'bad example', as we chose a design format similar to the final 'good example'.  In deciding why a drop-down box should go here rather than there, we focused on the TSSL model and on our view of aesthetics and fit.  This is where the group conversation and interaction became quite interesting.  While certain 'fixed' principles guided our decisions, the subjective aesthetic decisions were often the most controversial.  This didn't really surprise me, being a philosopher, since ethics and aesthetics are the most uncertain topics in philosophy.  So discussing 'how the user should interact' and 'what is the best method for obtaining that interaction' made me conclude that we were, indeed, on the right track.

Turning to the class discussions, these were engaging and clearly handled by an expert.  What I found most riveting was the compelling nature of the presentation.  Prof. Zhang didn't just go over the textbook, but actually expanded upon the topics.  This led me to believe that her work in the field was ongoing, and it prompted me to look at some of her recent publications.  We were given the framework laid out in the text, but she made the field come alive in the lectures.  The additional material presented in class, crossing various disciplines yet ultimately unified in this science known as HCI, was a truly pleasurable experience.
Within the class presentations, I learned quite a bit about emerging technologies and how these are affected by HCI.  Some of the presentations made it clear that this new field is about to burst open with a flood of new ideas.  Taking the evoMouse presentation as an example, it was clear that the genesis of the project was to make a more 'human friendly' mouse interaction.  All computer users suffer, to some degree, from hand strain caused by using a mouse.  This solution seems to bypass the damage caused by mouse overuse (carpal tunnel syndrome).  In fact, some of my classmates suffer from this condition, so the appropriateness of looking at this area was very real.


Objectives
Since I had no prior experience, to my knowledge, with Human Computer Interaction, I had no learning objectives to meet.  I am interested in knowledge and how that knowledge can be applied to real-world situations.  While the coursework helped me learn the foundations of this science, the application of the concepts in the individual and group work cemented this knowledge firmly.
The moment that theory and practice first came together for me was in the computer lab testing exercise.  I realized at that point that the theoretical framework we were learning could be applied in some very valuable ways.  The use of distracting devices while trying to perform a task led me to an appreciation of why people spend so much time on the internet.  While we don't usually sit down in front of our computers and say aloud, "I want to spend some time reading up on the news," we tacitly think it.  However, the creators of internet content do not want us to efficiently perform the tasks that we intend to perform.  They want us ultimately to reach our task goal, but only after giving themselves many chances to generate revenue from our engagement with their sites.  Karl Marx said it best: economics is the single most important determining factor in history.  So our goals and the content provider's goals are only marginally synchronized.  They want us to find the information, but like the rat in the maze, the cheese is obtained only after working through a plethora of possible channels.  For some reason, this thought had never occurred to me before, and thinking of myself as a lab rat chasing cheese was somewhat distasteful.
So, ultimately, while my main goal in taking any university class is to learn something about the subject, these learnings are often serendipitous.
Final Thoughts
The main take-away from this course is the integral importance of HCI considerations in computer interface and back-end design.  This could be summed up by one HCI term: usability.  Improving users' task performance and reducing their effort can be partially accomplished by automating user activity.  Striving for a 'fit' between the information needed and the information presented can be achieved through 'cognitive fit'; that is, the designer's conceptual model, the user's mental model, and the actual system display align to support the efficiency of the system.  Providing and constraining affordances helps to capture real-world knowledge; it is the proper use of affordances that draws the user's attention to performing one action while de-emphasizing other possible actions.  The principle of designing for error acknowledges that errors will occur: the designer, while trying to keep the user from making an error, should also create an indicator that tells the user when an error has been made.  Designing the system for an enjoyable and satisfying interaction is a lofty goal, but it is paramount in terms of usability.  After all, if the system is not satisfactory, it will go unused.
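The design-for-error principle can be made concrete in code.  As a minimal sketch (the field names, validation rules, and messages here are my own invention, not from the course material), a form validator that anticipates mistakes and tells the user exactly what went wrong, rather than silently rejecting the input:

```python
import re

def validate_signup(email: str, age_text: str) -> list:
    """Design for error: collect every problem and report each one in plain
    language, instead of failing with a single generic error message."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("Email address looks incomplete "
                      "(expected something like name@example.com).")
    if not age_text.isdigit():
        errors.append("Age must be a whole number, e.g. 34.")
    return errors

# A well-formed submission produces no errors; a bad one names each mistake.
print(validate_signup("ed@example.com", "34"))   # []
print(validate_signup("ed@example", "thirty"))   # two specific error messages
```

The indicator does two jobs at once: it tells the user that an error occurred and points them toward the correct action, which is exactly what the principle asks of the designer.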

Oftentimes, computing systems are an amalgam of legacy systems forced to interact with modern software.  Making these legacy systems interoperable is one thing (as when one bank buys another and they are running on different platforms); how the user interacts with these various systems is usually left to corporate trainers.  However, there is now a step added within the process of computing, training, using, and performing: a step where an IT professional, utilizing the principles of HCI, can reengineer process design with the user in mind.  This will lead to greater efficiency, greater morale, and ultimately greater profits for the corporation.
There is, however, a deeper philosophical concept at play here.  It is noble not simply to maximize efficiency, morale, and profits, but also to utilize technology to our maximum advantage as human beings.  In an essay published in 1954 as "The Question Concerning Technology", the German philosopher Martin Heidegger questions the very ontology of technology, that is, what its purpose is.  He proposes that technology at its very 'essence' is related to the ancient Greek concept of 'physis', a bringing-forth.  He maintains that there is a teleology inherent in technology and that our goal as humans is to bring forth into being the world which nature has unfolded.  Kevin Kelly, in his latest book, What Technology Wants, pushes this concept further and states that this is what technology desires.
            "In 1949, John von Neumann, the brainy genius behind the first useful computer, realized what computers were teaching us about technology: 'Technology will in the near and in the farther future increasingly turn from problems of intensity, substance, and energy, to problems of structure, organization, information, and control.'  No longer a noun, technology was becoming a force – a vital spirit that throws us forward or pushes against us.  Not a thing but a verb." (Kelly 2010, p. 41).
I am proud to be within a field which is fundamentally changing not simply the way we communicate, but also the very nature of being.





References

Heidegger, M. (1977). The question concerning technology, and other essays. Harper Torchbooks.

Kelly, K. (2010). What technology wants. Viking Adult.

Te'eni, D., Carey, J. M., & Zhang, P. (2006). Human-computer interaction: Developing effective organizational information systems (1st ed.).

Tuesday, February 15, 2011

The Annoyed Librarian?

In another Syracuse course within the library curriculum, our Professor noted an 'infamous' blog dealing with librarianship.

I have decided to post my thoughts on "The Annoyed Librarian".

http://blog.libraryjournal.com/annoyedlibrarian/2011/02/14/libraries-for-people-who-dont-need-libraries/

This most recent post deals with how to capture the interest of those who are not using libraries. A nice summary statement for the blog entry states:



"Even with childhood literacy, libraries aren’t necessary for a lot of people. Children’s books are cheap, and middle class people concerned with their children’s education often buy lots of them and read to their children themselves. They don’t need public libraries, and the ones who aren’t concerned with their children’s education don’t want libraries."

This is an important point. While the price of books has declined, it is no longer only the well-heeled who find little use for borrowing from libraries, but now the 'middle class' as well.

My contention is that this author is missing the point. Libraries are not concerned with social stratification, nor have they ever been. In fact, it is the impoverished who NEED access to information in our society. The initial mandate created by Carnegie when founding the library system was to reach out to those most in need, not the other way around.

Additionally, those who campaign around the non-utility of libraries have little at stake in the equation. This is sad, if not reprehensible, within our communities. Libraries exist to serve the 'public good', regardless of the social standing of the patrons.

I continue to be amazed at the indifference our communities hold towards those who have real and daily needs.

Monday, January 31, 2011

Information scenario game for the Egyptian situation

In our Information Science class today, we played an information scenario game for the Egyptian situation. In our outcome, the Mubarak government steps down and leaves the country, and interim rule by ElBaradei is established, with elections to be held at a future date. My team, representing U.S. interests via Secretary of State Hillary Clinton, suggested to Mubarak that this was not the preferred outcome.

There were 9 parties: 3 information outlets (Twitter, Vodafone, Al Jazeera), 3 potential governments (Mubarak, ElBaradei, radical Islamists), and 3 voices (the Egyptian army, the Egyptian young radical movement, and the U.S.). We were given 10 pages of information sources constituting hundreds of tweets, news articles, photos, etc. from across the internet.

Comments


Jeff March:  If the leadership of the country falls into the hands of the muslim brotherhood, then that's bad news for everybody. 

Jeff Nason: I agree that Muslim Bro is NOT our friend, but what is the preferred outcome your team argued for?

Ed:  Jeff Nason, we talked about the 30-year alliance with Egypt, the fact that the Egyptian military is outfitted with US weaponry, and most of all the $1.5B in yearly aid to a country with a $166B GDP. In our minds, while revolution may be preferred, we would rather deal with the devil we already know. As Jeff March notes, a vacuum potentially filled by a radical extremist group such as the Muslim Brotherhood is not in US interests. Maybe the Nobel Prize winner ElBaradei is the answer, but remember, we were assigned the role of Clinton.

Sunday, January 30, 2011

Using social media to promote libraries.

If there is a catch phrase for our latest decade, it is social media.  While we are all familiar with this media and its various forms, the utilization of technology to create interest and mobilize folks is gaining momentum. 

The first eye-opener was the election of President Obama.  During the Presidential campaign, "Hillary Clinton's camp had about 20,000 volunteers at work in Texas. But in an e-mail, ["Internet impresario"] Trippi learned that 104,000 Texans had joined Obama's social-networking site, www.my.barackobama.com, known as MyBO. MyBO and the main Obama site had already logged their share of achievements, particularly in helping rake in cash. The month before, the freshman senator from Illinois had set a record in American politics by garnering $55 million in donations in a single month. In Texas, MyBO also gave the Obama team the instant capacity to wage fully networked campaign warfare. After seeing the volunteer numbers, Trippi says, "I remember saying, 'Game, match--it's over.'" http://www.technologyreview.com/web/21222/?a=f

The second eye-opener was the valuation assigned to social networks by Wall Street.  Depending on your source, these estimates are staggering: Facebook ($80 Billion), Zynga ($5 Billion), and LinkedIn ($2 Billion), just to name a few.  This implies that there is "capital" intrinsic to social networks.

The final eye-opener was this week's riots in Egypt, which prompted the government to shut down access to the Internet and email.  This implies that there is "power" intrinsic to online communications, especially in the dissemination of information.

So, when perusing the blogs to which I subscribe, I was taken aback by this very skeptical post regarding the mobilization of communities to promote libraries using social media.

Mr Tay writes, "I think he has a point, does 15,000 fans of "save library X" fanpage really help if most just click "like" and forget about it?  Does that sway the powers that be? More to the point, in this day and age, does having 15,000 fans of "Save a library" with no other action really make sufficient news for traditional mass media to take notice and help to spread the word out? I don't know. In recent months, I have noticed however a couple of campaigns that seem to have resulted in actions that go beyond the purely online realm." http://musingsaboutlibrarianship.blogspot.com/2011/01/4-successful-social-media-campaigns-for.html

Social media was instrumental in the election of Barack Obama; it is highly valued as a commodity; and it is clearly feared by those who govern by fiat.  In conclusion, when harnessed correctly, social media is precisely the avenue that libraries should choose to plead their case.  As a phenomenon, it is clearly the newest incarnation of the old adage, "the pen is mightier than the sword".

DREAM ACT Resolution-ALA MW 2011

Background detail

http://librarian.lishost.org/?p=3377

WHEREAS, The American Library Association (ALA) strongly supports the protection of each person’s civil liberties, regardless of that individual’s nationality, residency, or status; and that ALA opposes any legislation that infringes on the rights of anyone in the USA or its territories, citizens or otherwise, to use library resources, programs, and services on national, state, and local levels (ALA Policy 52.4.5); and...

My Commentary

I am currently taking a course "New Directions in Public Librarianship".  I will be blogging throughout the term on issues "that impact the functioning of the public library".

Our society provides services to those members of our communities who have little access to such services outside the community.  The proper functioning of a community must include those with no access to basic services, without reference to citizenship or other status.  Indeed, the library provides services to those who are impoverished and susceptible to alienation.  This resolution, which was recently passed at the ALA midwinter meeting, is a strong step forward in providing direction for public policy.

Many of the benefits of our public libraries are instantly noticeable.  These include literacy, community, access to technology, reference, and the means to research issues such as citizenship and residency.  It is shameful that any person or body would decide to restrict access to these basic needs within our society!

The founding of the public library system was the result of the vision, faith, and philanthropy of Andrew Carnegie.  His vision was simple, to offer "surplus wealth ... [to the] industrious and ambitious; not those who need everything done for them, but those who, being most anxious and able to help themselves, deserve and will be benefited by help from others and the extension of their opportunities at the hands of the philanthropic rich" (Carnegie, 1889, p. 686).  The decision of Carnegie and others to share their "surplus wealth" with those who are not burdened by such issues, was and should be at the forefront of our public library system.

References:
Carnegie, A. (1889, December). The best fields for philanthropy. The North American Review, 149(397).

Thursday, December 16, 2010

Frank Turner- “Libraries have changed more in the last 15 years than they have in the last five centuries.”

Libraries as "glorified study halls"?

Steward of the Once and Future Book


How does it feel to become University Librarian at Yale when all around you are predicting the end of the book as we know it? Intellectual historian Frank M. Turner '71PhD, who became interim University Librarian in January, has now formally assumed the leadership of Yale's vast library system—just as the apocalypse of the printed book is being discussed by his counterparts around the country. There are dark suggestions in journals and at conferences, he says, that "in less than 25 years, libraries will be glorified study halls," each with "one vast computer furnishing electronic materials."

But he's not worried: "The book won't disappear, and, in fact, our circulation remains high." Turner—the John Hay Whitney Professor of History, a former Yale provost, and since 2003 the head of the Beinecke (a post he'll keep until his replacement is hired)—readily acknowledges the reach of the digital revolution. "Libraries have changed more in the last 15 years than they have in the last five centuries," he says. Yale's library system "contains exemplars of everything: from the Beinecke, with its enormous breadth and depth of traditional print materials, to the medical library, which, except for its historical component, is virtually all electronic."

The rise of such virtual collections, along with digital devices and high-speed wireless Internet access, is changing a fundamental aspect of the library. "We've always thought of the library as the heart of the university, as a distinct place," Turner says. But librarians need a new perspective on the Sterling system. By enabling researchers to invent their own fresh ways of using the collections, "the library of the future will have to go into the heart of the user."

Nature study talking about the use of digitized books to uncover language clues

Cultural goldmine lurks in digitized books

'Culturomics' uncovers fame, fortune and censorship from more than a century of words.
[Image caption: Analysing decades of books can reveal important cultural trends. Franck Camhi / Alamy]
The digitization of books by Google Books has sparked controversy over issues of copyright and book sales, but for linguists and cultural historians this vast project could offer an unprecedented treasure trove. In a paper published today in Science, researchers at Harvard University in Cambridge, Massachusetts, and the Google Books team in Mountain View, California, herald a new discipline called culturomics, which sifts through this literary bounty for insights into trends in what cultures can and will talk about through the written word.
Among the findings described by the collaboration, led by Jean-Baptiste Michel, a Harvard biologist, are the size of the English language (around a million words in 2000), the typical 'fame trajectories' of well-known people, and the literary signatures of censorship such as that imposed by Germany's Nazi government.
"The possibilities with such a new database, and the ability to analyse it in real time, are really exciting," says linguist Sheila Embleton of York University in Toronto, Canada.
"Quantitative analysis of this kind can reveal patterns of language usage and of the salience of a subject matter to a degree that would be impossible by other means," agrees historian Patricia Hudson of Cardiff University, UK.
"The really great aspect of all this is using huge databases, but they will have to be used in careful ways, especially considering alternative explanations and teasing out the differences in alternatives from the database," adds Royal Skousen, a linguist at Brigham Young University in Provo, Utah. "I do not like the term 'culturomics'," he adds. "It smacks too much of 'freakonomics', and both terms smack of amateur sociology."

Half a trillion words

Using statistical and computational techniques to analyse vast quantities of data in historical and linguistic research is nothing new — the fields known as quantitative history and quantitative linguistics already do this. But it is the sheer volume of the database created by Google Books that sets the new work apart.
So far, Google has digitized more than 15 million books, representing about 12% of all those ever published in all languages. Michel and his colleagues performed their analyses on just a third of this sample, selected for the good quality of the optical character recognition in the digitization and the reliability of information about a book's provenance, such as the date and place of publication.
The resulting data set contained over 500 billion words. This is far more than any single person could read: a fast reader would, without breaks for food and sleep, need 80 years to finish the books for the year 2000 alone.
Not all isolated strings of characters in texts are real words. Some are numbers, abbreviations or typos. In fact, 51% of the character strings in 1900, and 31% in 2000, were 'non-words'. "I really have trouble believing that," admits Embleton. "If it's true, it would really shake some of my foundational thoughts about English."
According to this account, the English language has grown by more than 70% during the past 50 years, and around 8,500 new words are being added each year. Moreover, only about half of the words currently in use are apparently documented in standard dictionaries. "That high amount of lexical 'dark matter' is also very hard to believe, and would also shake some foundations," says Embleton. "I'd love to see the data."
In principle she already can, because the researchers have made their database public at http://www.culturomics.org/. This will allow others to explore the huge number of potential questions it suggests, not just about word use but about cultural history. Michel and colleagues offer two such examples, concerned with fame and censorship.
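The word-versus-non-word distinction the article describes can be approximated with a simple tokenizer. A rough sketch, assuming (as an illustrative simplification of the researchers' actual method) that a 'word' is a purely alphabetic string, possibly with internal hyphens or apostrophes, and that everything else (numbers, typos, mixed tokens) counts as a non-word:

```python
import re

def nonword_share(text: str) -> float:
    """Fraction of whitespace-separated character strings that are not
    plain alphabetic words (numbers, typos, mixed tokens, etc.)."""
    tokens = text.split()
    if not tokens:
        return 0.0
    nonwords = [t for t in tokens
                if not re.fullmatch(r"[A-Za-z]+(?:['-][A-Za-z]+)*", t)]
    return len(nonwords) / len(tokens)

# Contrived sample text: "1900", "ph0nograph", and "20" are non-words.
sample = "In 1900 the ph0nograph cost 20 dollars but the book cost one"
print(nonword_share(sample))  # 0.25
```

Run over a corpus of OCR'd books, a classifier along these lines is what produces figures like the 51% non-word share reported for 1900, where OCR errors and numerals inflate the count.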

They say that actors reach their peak of fame, as recorded in references to names, around the age of 30, while writers take a decade longer but achieve a higher peak. "Science is a poor route to fame," they add. Physicists and biologists who achieve fame do so only late in life, and "even at their peak, mathematicians tend not to be appreciated by the public".

Big Brother's fingerprints

Nation-specific subsets of the data can show how references to ideas, events or people can drop out of sight because of state suppression. For example, the Jewish artist Marc Chagall virtually disappears from German writings in 1936-1944 (while remaining prominent in English-language books), and 'Trotsky' and 'Tiananmen Square' similarly vanish at certain sensitive points in time from Russian and Chinese works respectively. The authors also look at trends in references to feminism, God, diet and evolution.
"The ability, via modern technology, to look at just so much at once really opens horizons," says Embleton. However, Hudson cautions that making effective use of such a resource will require skill and judgement, not just number-crunching.
"How this quantitative evidence is generated and how it is interpreted are the most important factors in forming conclusions," she says. "Quantitative evidence of this kind must always address suitably framed general questions, and employed alongside qualitative evidence and reasoning, or it will not be worth a great deal."