A new interface built for the net, designed to help people think and collaborate








This is a work in progress.
Please reach out for more detail.


Augmenting Human Intellect


The pioneers of human-computer interface technology were united by a shared goal of helping people manage the complexity of working with large amounts of information so we can think more clearly, communicate more broadly, and work more effectively.




Why a new interface?


We are dealing with a world characterized by an unprecedented level of complexity, brought about by information technology, network dynamics, and computational power.

I want to think more clearly, work more efficiently, and help other people do the same.

I want to organically grow epistemological gardens and working environments. I want to be able to move freely between them. I want to reference information from other projects without it getting in the way of what I am currently focused on.

There has hardly been any innovation in the general purpose interface space in the past thirty-five years. Having developed a theory of why that is, and having designed what I believe is an innovative and practical general purpose interface, I want to find out whether innovation is still possible.




What kind of new interface?


A general purpose interface designed for a connected world in which computers play a vital role in all aspects of life. It enables people to organically create spaces for working, learning, and collaborating. The whole interface is designed for the way we use computers today.




Similar Efforts


Notion

Mac OS

Are.na

Urbit

Chrome OS



Calm and Peaceful


In a world of information competing for attention, a new general purpose interface should encourage peaceful, calm, goal-oriented interaction.

Collaborative


Communication and collaboration are key to how computers are used today - this should be enabled and celebrated system-wide.

Net-First


The internet is integral to computing today. It should be treated as such by the operating system.



For Learning and Making


Like the innovative interfaces of the past, a new interface should emphasize augmenting human intellect, and enabling people to build things together.






Do everything you do on an old computer, but with more space


Eliminate clutter and overlapping windows with an interactive scrolling desktop.





Grow information spaces about the things that matter to you


Effortlessly generate multiplatform information environments for everything in your digital life. Work on a major project, manage your finances, plan a trip. Do everything you want to do without losing anything.





Share multiplatform computing environments with other people


Sharing doesn’t always need to be limited to individual software programs. Share your work across platforms with friends, family, and coworkers, or publish it on the web to share with the world.





Access everything from any screen with an internet connection


The new interface is built on and for the internet. It’s designed to be your primary computing environment, but it doesn’t need to be tethered to any particular device.





This is a work in progress.
Please reach out for more detail.






A New Interface Layer


There is certainly no lack of difficult problems awaiting solutions. Mathematics provides plenty, and so does almost every branch of science. It is perhaps in the social and economic world that such problems occur most noticeably, both in regard to their complexity and to the great issues that depend on them. Success in solving these problems is a matter of some urgency. We have built a civilization beyond our understanding and we are finding that it is getting out of hand. Faced with such problems, what are we to do?

- W. Ross Ashby

Speaking to reporters at the entrance to Mar-a-Lago alongside a flag-waving Don King, then-President-elect Trump observed that “Computers have complicated lives very greatly… This whole age of computer has made it where no one knows exactly what’s going on.” Trump, who emerged as president amidst a humid fog of Russian cyber psyops, self-described alt-right “meme warriors,” scandalously deleted emails, fake news, and internet subcultural ideologies from Cult of Kek to accelerationism, is uniquely qualified to make such an observation. But he’s hardly the first. The Dutch political designers-as-artists Metahaven, whose work includes The Sprawl and Information Skies, explore the notion of virtual reality not in the sense of an Oculus headset but as “the psychological condition of VR, or truth bubbles, in our lives, and in society. We are thinking of VR as a social phenomenon, confronting 'legal truth'—something we previously explored in the documentary The Sprawl (Propaganda About Propaganda). VR as belief.”

While they have been disrupting traditional notions of reality and political sovereignty, computers are also influencing the cognitive processes of people who use them extensively. In his book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr writes that “What the Net seems to be doing is chipping away my capacity for concentration and contemplation. Whether I’m online or not, my mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.” Computers are incredibly useful, but computer users, like drug users, grow frustrated with their long-term effects. Consider this April 2018 viral tweet from @Kristen_Arnett: “‘why does my neck always hurt’ i wondered as i contorted my body exorcist-style to peer at a tiny glowing misery rectangle”







During the sixties, seventies, and early eighties, research into computational hardware and interface technology proceeded in tandem. While computational and network technologies determine what is technically feasible, interface architecture determines how people engage with those technologies and the way they manifest in our lives. The pioneering work in human-computer interface technology can be traced to four nodes: Vannevar Bush, ARPA’s Information Processing Techniques Office, Douglas Engelbart’s Augmentation Research Center at SRI, and Xerox PARC. The key researchers in this process were united by a shared goal of using computer technology to help people manage the complexity of working with large amounts of information so we can think more clearly, communicate more broadly, and work more effectively.

Computer interfaces have been wildly successful in some ways, but as what Benjamin Bratton calls the “accidental megastructure” of planetary-scale computation grows larger and more ubiquitous, things are starting to fall apart in ways these early pioneers (especially Engelbart) would surely have found highly disturbing. How did this come to be? Studying the ideas and prototypes that led to today’s interface paradigm helps us understand what went right and what went wrong. More importantly, it provides clues for how today’s interface designers can address contemporary challenges in human-computer interaction.



The Memex would take the form of a desk with three slanting translucent screens that “instantly bring files and material on any subject to the operator’s fingertips.”




Vannevar Bush, as Director of the U.S. Office of Scientific Research and Development (OSRD), oversaw nearly all military-related research and development in the United States during the Second World War. Having marshaled the forces of more than 6,000 scientists to develop technologies like radar and the atomic bomb that were crucial to the war effort, Bush set his mind to figuring out what all these scientists should do now that the war had ended. In July 1945, he published an article in the Atlantic called As We May Think, arguing that organizing scientific knowledge so that it can be of use is the primary challenge of our time. “Publication has been extended far beyond our present ability to make real use of the record,” creating the need for a “mechanized private file and library... in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” I could not agree more. Additionally, the mind should be freed from repetitive tasks like performing standard calculations. “A mathematician,” Bush wrote, “is primarily an individual who is skilled in the use of symbolic logic on a high plane, and especially he is a man of intuitive judgment… All else he should be able to turn over to his mechanism… For this reason there will come more machines to handle advanced mathematics for the scientist. Some of them will be sufficiently bizarre to suit the most fastidious connoisseur of the present artifacts of civilization.”

This speculative device, which Bush named the Memex, had specific and advanced interface characteristics. It took the form of a desk with three slanting translucent screens that “instantly bring files and material on any subject to the operator’s fingertips.” Bush imagined that, while sitting at the Memex desk, its operator would control it using “a series of buttons and levers” and a keyboard for direct input of information and notes in the margins. Because the Memex had multiple screens, items could be left in place and consulted alongside one another.




An illustration of Vannevar Bush’s Memex device, 1945



Someone who has access to all the world’s information, but lacks a way to make sense of it, is apt to “become bogged down part way there by overtaxing his limited memory.” The Memex’s “essential feature” therefore, is the ability for its operator to create named “trails” of information and ideas that can be reviewed alongside one another by “flipping pages” as though one had created a new book. “And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own Memex, there to be linked into the more general trail.”
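Read this way, a trail is less a hyperlink than a small, shareable data structure: a named, ordered sequence of stored items that the operator marks by hand, replays later, copies out, and hands to a friend to merge into another Memex. The sketch below is purely illustrative - the names Item, Trail, and Memex are invented for this note, not drawn from Bush’s text - and is only one reading of the behavior he describes.

```python
from dataclasses import dataclass, field
from typing import Dict, Iterator, List

# Illustrative sketch only: invented names mirroring Bush's description of trails.

@dataclass
class Item:
    code: str      # the entry in the operator's "code book"
    content: str   # the stored page, photograph, or marginal note

@dataclass
class Trail:
    name: str
    items: List[Item] = field(default_factory=list)

    def mark(self, item: Item) -> None:
        # The operator ties items into the trail, by hand, in the order encountered.
        self.items.append(item)

    def run(self) -> Iterator[Item]:
        # "A lever runs through it at will" - replay the trail in sequence.
        return iter(self.items)

@dataclass
class Memex:
    trails: Dict[str, Trail] = field(default_factory=dict)

    def reproduce(self, name: str) -> Trail:
        # "Photographs the whole trail out" - copy a trail to pass to a friend.
        original = self.trails[name]
        return Trail(original.name, list(original.items))

    def insert(self, trail: Trail) -> None:
        # The friend links the copied trail into his own, more general store.
        self.trails[trail.name] = trail
```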

These information trails are often seen as an early example of hypertext, but this analysis obscures the real power of the concept. Bush’s information trails are designed first to be marked by a Memex operator, who is able to build personal narratives and “idea mazes” that can be stored, consulted later, and possibly shared. Rather than facilitating easy, passive consumption of information, the Memex is a creative information management tool, which encourages active engagement with data and the creation of knowledge. With this early vision, Bush introduced ideas that would heavily influence a generation of research into information systems and human-computer interaction. One person influenced by Bush was JCR Licklider, who would go on, as director of ARPA’s Information Processing Techniques Office (IPTO) from 1962 to 1964, to develop ideas that led to the funding and development of three pivotal advances in information technology: the creation of computer science departments at several major universities, time-sharing, and networking (Garreau, 2006).


Benjamin Bratton and Metahaven’s diagram of The Stack, a real result of the system predicted (and funded) by Licklider

In 1960, JCR Licklider published Man-Computer Symbiosis, which opens with a discussion of the relationship between fig trees and Blastophaga grossorum, the insects that live in the fig tree’s ovary, pollinating it in the process. Rather than imagining computers as tools to be used by humans, Licklider proposes a symbiotic relationship between people and machines, two very different kinds of intelligence living together in tight harmony.  The resulting partnership, he writes, “will think as no human brain has ever thought and process data in a way not approached by the information handling machines we know today.” Licklider looks even farther forward, to a “distant future” in which “cerebration is dominated by machines alone” but reassures the reader that “there will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working in intimate association.”

By understanding human and computer intelligences to be of two fundamentally different types whose mutual interaction will lead to a greater and ever-closer whole, Licklider anticipates the philosophies developed by contemporary scholars including UCSD’s Benjamin Bratton and CCA’s Haakon Faste. Bratton contends that the Turing test is an intolerant measure of intelligence and that by forcing artificial intelligence to pass as human, we are committing the same grave error as the British when they forced Dr. Turing, a homosexual, to pass as a straight man (Bratton, 2015). Faste argues we must overcome these human-centered notions and other “myths of our contemporary era in which human and machine systems are separate and distinct” in order to ensure a smooth transition from human-computer symbiosis (as opposed to human use of computers) to a posthuman or transhumanist society (Faste, 2010).

In the following pages of Man-Computer Symbiosis, Licklider outlines the challenges that will have to be overcome to realize symbiosis. These include speed asymmetry between men and computers, memory hardware requirements, memory organization requirements, the language problem, and input and output equipment (Licklider, 1960). Building on this work, in 1962 he wrote a series of memos outlining what he called the “Intergalactic Computer Network,” described as “an electronic commons open to all, the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.” This vision led to ARPANET, the direct predecessor of the internet.

Robert Taylor, founder of Xerox PARC's Computer Science Laboratory and Digital Equipment Corporation's Systems Research Center, noted that “most of the significant advances in computer technology—including the work that my group did at Xerox PARC—were simply extrapolations of Lick’s vision. They were not really new visions of their own. So he was really the father of it all.”

In the fall of 1945, a young radar technician named Douglas Engelbart was sitting in a hut at the edge of the jungle on Leyte, one of the Philippine islands, reading a copy of the Atlantic the local Red Cross chapter had let him borrow. He would later write in a letter to Vannevar Bush that the ideas he encountered in that article had influenced him “quite basically” (Bardini, 2000). Upon returning home, a newly engaged Engelbart had something of a quarter-life crisis, realizing it was about time to figure out what he was going to do professionally. He decided that his primary goal should be to maximize how much good he could do for humanity, a pursuit he would come to characterize as a “crusade.” As he set about trying to figure out what crusade to get on, he says the answer came to him in a flash: “The complexity of a lot of problems and the means for solving them are just getting to be too much. The time available for solving a lot of the problems is getting shorter and shorter… The complexity/urgency factor had transcended what humans can cope with. I suddenly flashed that if you could do something basic to improve human capability to cope with that, then you’d really contribute something basic.” (Engelbart, 1996)

With this in mind, Engelbart embarked upon his crusade. He decided to go back to graduate school at Berkeley, where he tried to share with the computer scientists his interest in symbolic logic and the possibility of using computers to structure information (instead of just doing numeric computations). But, he recalls, most of them were just not interested. While his PhD would be in electrical engineering, Engelbart took courses in the logic and philosophy departments, and recalled that he “didn’t particularly travel within circles of engineering students,” preferring instead to hang out “with the English Lit majors,” although they didn’t really understand what he was trying to do either.


Engelbart’s illustration of the importance of tools to intellectual progress


Engelbart, an engineer himself, felt that “more engineering was not the dominant need of the world at that time.” As a result, he was always a bit of an outsider. But if what he was really trying to do wasn’t computer engineering, what was it? Using information technology to augment human intellect is a worthy goal, but it is certainly not part of the discipline of computer science and is hardly a discipline itself. In a 1961 letter to Dr. Morris Rubinoff of the National Joint Computer Committee, he gives a hint: “The impact of computer technology is going to be more spectacular and socially significant than any of us can comprehend. I feel that comprehension can only be attained by considering the entire socio-economic power structure, a task which the people in the know about computer technology aren’t equipped for, and a job about which the people who might be equipped properly are not yet stimulated or alerted… In an instance where something looms on the horizon as imposingly as does computer technology, we should be organizing scouting parties composed of nimble representatives from different tribes - e.g. sociology, anthropology, psychology, history, linguistics, philosophy, engineering - and we shall have to adapt to continual change.” Engelbart seems to be saying that there’s a job to be done that involves designing and prototyping the character of human-computer interaction to ensure that its effects on society are desirable.

At the Stanford Research Institute, Engelbart set up a lab called the Augmentation Research Center (ARC), where he set about doing this job. First, he built a conceptual framework to “orient us toward the real possibilities and problems associated with using modern technology.” He did this by examining how people went about managing what he called the complexity/urgency factor, assuming that carefully observing the problems people have - what designers now call pain points - would suggest ways that technology might be able to help them solve those problems more effectively. This revealed the areas where research would be possible. (Engelbart, 1962)

In a 1962 paper titled Augmenting Human Intellect, Engelbart outlined the results of his preliminary research, starting with the observation that human ability to manipulate the world depends on four basic augmentation means: artifacts, language, methodology, and training. It is through what he refers to as the H-LAM/T system (Human using Language, Artifacts, Methodology, in which he is Trained) that people manipulate concepts and symbols, and it is there that the opportunity to augment human intellect lies. The paper also outlined a research method called ‘bootstrapping,’ in which the ARC team would use the equipment they were building and, in the process, test it to make sure it worked. A target demographic was identified - knowledge workers - with a specific and deeply considered emphasis on computer programmers as the initial users of the system. Numerous scenarios were written imagining how a particular person in a specific situation might use the system. What is the discipline that Engelbart was searching for? A combination of the arts and sciences, or an approach to computer science that uses the humanities as a starting point to move into prototyping technologies that help people work effectively with computers to make the world a better place? A close reading of Augmenting Human Intellect reveals SRI’s ARC as the first Interaction Design lab.

This paper would serve as a framework for Engelbart and his team of about 50 people to design and build a system for augmenting human intellect. The system, called NLS (the oN-Line System), was revolutionary. Its 1968 demonstration at Brooks Hall underneath San Francisco’s Civic Center is known as the Mother of All Demos, for good reason. Here, Engelbart introduced for the first time almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a real-time editor for digital collaboration across distances. While NLS laid the groundwork for today’s GUI interfaces, it differed in many ways from the systems we are now accustomed to. In addition to the mouse and keyboard, the NLS interface also used a chorded keyset that worked in conjunction with the other input elements to provide a powerful way to, in Engelbart’s words, “fly through the interface.”


Douglas Engelbart presenting the “Mother of All Demos”


NLS built upon Bush’s sophisticated understanding of hypertext to create a highly customizable and collaborative linking, annotation, and editing process that was designed to augment and shape the cognitive processes of the people using the system, organizing enormous datasets into a coherent, understandable whole. Engelbart writes that “symbols with which the human represents the concepts he is manipulating can be arranged before his eyes, moved, stored, recalled, operated upon according to extremely complex rules - all in very rapid response to a minimum amount of information supplied by the human, by means of special cooperative technological devices.” (Engelbart, 1962) This was referred to as “coevolution” - a process by which humans and computers evolved, together, their abilities to work with information efficiently and effectively.

NLS was designed as a powerful information working tool to augment human intellect. As with learning to drive a car or operate other kinds of sophisticated and powerful equipment, it would take ten or fifteen hours to learn how to use it. But in the late sixties and early seventies, it was not at all clear that such tools were necessary to interface with computers in the basic use cases that existed at the time. Some people at Engelbart’s lab and elsewhere began to feel that the complexity of Engelbart’s interface was actually serving as a barrier to entry for people interested in interfacing with computers, but there wasn’t another comparable system against which to test this hypothesis. Enter Xerox PARC - the second major Interaction Design lab.

In the words of Alan Kay, a key figure at PARC, “Engelbart, for better or worse, was trying to build a violin. Most people don’t want to learn the violin.” (Bardini, 2000) Alan Kay and others at PARC, including Robert Taylor, who led its Computer Science Laboratory, also approached computing technology from a place of wonder at the possibilities of working with information. They were serious thinkers. Kay’s PhD thesis involved prototyping how a GUI might enable people to navigate through what he called ‘Ideaspace’ and incorporated inspiration from WH Auden, JS Bach, and Kahlil Gibran - “You would touch with your fingers the naked body of your dreams.” (Hiltzik, 1999) But the researchers at PARC set out to build a GUI that was simple and easy to use, rejecting Engelbart’s emphasis on coevolutionary learning. Furthermore, they rejected Engelbart’s network vision in which multiple screens were hooked into a single server on which people worked collaboratively - essentially networked computing or cloud computing - in favor of a very new idea, the personal computer.

In 1973, PARC introduced the Alto - the first personal computer designed to support a GUI operating system. The Alto also pioneered what we now know as the WIMP interface, which stands for Windows, Icons, Menus, and Pointer. It turned out to be phenomenally intuitive and understandable. Steve Jobs, co-founder of Apple Inc., visited PARC in December 1979 and was astonished to see that Xerox hadn’t brought it to market. So he poached some of the researchers at PARC, notably Larry Tesler, and brought it to market himself. Apple’s Lisa, released in January of 1983, was the first mass-produced personal computer with a WIMP GUI - copied from the Alto. It was far too expensive and a massive commercial failure. In 1984, it was followed by the Macintosh.

In the years since the introduction of the personal computer, the WIMP GUI has provided an interfacial foundation upon which computing has grown into what Benjamin Bratton calls an “accidental megastructure” of planetary-scale computation. Information processing hardware has become cheap, highly miniaturized, and fast. The net has become vast and effectively infinite. New software programs designed to leverage that sophistication in innovative ways continue to be invented.

Of all the inventions of humans, the computer is going to rank near or at the top as history unfolds and we look back. It is the most awesome tool that we have ever invented.


Today, computers are regularly used to design and manufacture highly complex objects and structures, to coordinate the efforts of global organizations, to broadcast and consume information, and so on. Anyone with an internet connection can access far more up-to-date information than anyone, anywhere, could access for nearly all of history. If social communication and access to tools are the defining characteristics of humanity, a computer connected to the internet is perhaps (probably, easily) the most powerful tool for learning, communicating, and building we have ever created. In the words of Steve Jobs: “Humans are tool builders. We build tools that can dramatically amplify our innate human abilities. We ran an ad for this once that the personal computer is the bicycle of the mind… Of all the inventions of humans, the computer is going to rank near or at the top as history unfolds and we look back. It is the most awesome tool that we have ever invented.”

In many ways, the dreams of the early computer pioneers have been realized. The interfaces they designed have augmented human cognition and communication in transformative ways. But there are some major problems. At the same time that computers have come to play a huge role in many people’s personal and professional lives, serious concerns about the damage they may be doing to governance, local communities, and the minds of those who use them have become widespread. (Oren, 1991; Carr, 2010; Morozov, 2011; Bratton, 2016; Clinton, 2017) Digital spaces, especially those inhabited by knowledge workers for whom computer interfaces were originally designed, have become overwhelming.

What happened? The operating system developed at PARC and brought to market primarily by Apple and Microsoft was designed in the early seventies as a simple, intuitive way for people to start using computers. Its use case was limited to early personal computers that stored hundreds of files and whose network connections, where they existed at all, were limited. The internet did not really exist. Even by the early nineties, it had begun to seem that “the purely user directed browsing style of the desktop is approaching its limits of utility, with the number of files on a single user’s machine reaching 10,000 and with easy access to even more information across networks.” (Oren, 1991)

Since then, the basic architecture of the WIMP GUI in Mac OS and Windows has remained unchanged. Both systems use color now and have incorporated device-level search (Spotlight on Mac OS) intended to help people discover and open files on the local device. An important new window, the internet browser, gives access to the whole internet. Browsers introduced the concept of tabs, which now appear throughout the OS. But these are hardly structural adjustments. We have become trapped in our thinking about desktop computers. There are several reasons for this, but the net result is that the existing interface architecture has become totally overwhelmed by the massively expanded circumstances in which it is used. Van Dam writes that “the newer forms of computing available today necessitate new thinking about fourth generation interfaces, what I call post-WIMP user interfaces… They rely on, for example, gesture and speech recognition for operand and operation specification.” (van Dam, 1997)

One factor contributing to this stagnation is that Apple and Microsoft have little incentive to try to create a new interface architecture, particularly one that might be more difficult for people to learn. Together they effectively control the market, and disrupting things would only make their lives more difficult. But why hasn’t another organization come along to address the need for more powerful information working environments designed to respect people’s privacy, attention, and time? Such a system would clearly be in line with the basic founding principles of interface design - to remove the cognitive load of dealing with large amounts of information so that people can focus on, and make use of, that information.

What is the real character of these interfaces, which have become so important in shaping our world? In general, interfaces are thought of as things that people interact with, or “use.” The reality is somewhat more complicated. The environment we exist in fundamentally shapes our physical stimuli and responses, and our cognitive ones as well. We might think of software as being as much a part of our physical environment as the physical structures we live in. Surely it is even more a part of our cognitive environment.

We might think of people as an integrated whole of body and mind. Similarly, we might think of computers as an integrated whole of networks, software, and hardware. We live together as part of a larger planetary cognitive environment. In reality, the interface exists in the space between people and computers, and the relationship runs in both directions. People design computers and influence the way computers develop - both intentionally and unintentionally. In turn, interactions with computers influence the way humanity thinks, moves in the world, and develops as a species. That the interface exists in a psychological, semiotic, symbolic space between the human and the computer speaks to the importance of designing that relationship thoughtfully; this space of cognition is the proper realm of design.



With the growing need for software to be designed, new academic disciplines have emerged with names like user experience design, user interface design, and information architecture. Interaction Design in particular has become a unifying discipline: “the practice of designing interactive digital products, environments, systems, and services.” (Cooper, 2007) Unfortunately, the discipline is taught in design schools. Hugh Dubberly, who as Vice President of Design at Apple Computer designed an innovative interface architecture called Knowledge Navigator, outlines the problem in his 2001 Proposal for the Future of Design Education. The discipline of design largely emerged out of a preindustrial craft tradition. During the industrial revolution, the process of planning for making became separated from the making itself. Designers became the planners, but their disciplines - notably industrial and graphic design - remain rooted in the craft era, as do our “strategies for design education.” (Dubberly, 2001)

As making-focused programs, design schools tend not to be intellectually serious. Reading, writing, and thinking are secondary or tertiary aspects of the curriculum, if they are present at all. Thinking is relegated to a process of “thinking through making,” while the production of explicit knowledge from tacit knowledge is ignored entirely. This has been detrimental to our discipline. As a professional discipline, we must focus more and more on understanding data and metrics, the businesses we are a part of, psychology, and the social and philosophical implications of the things we are making.

A different future is possible. Throughout the history of Interaction Design, its pioneers - Vannevar Bush, JCR Licklider, Douglas Engelbart, Alan Kay, and others - led with ideas. They developed a vision for how computers and humans could work together, and technology was then shaped to achieve that vision. Engelbart’s 1968 NLS prototype, for example, was preceded in 1962 by a 144-page essay called Augmenting Human Intellect, in which Engelbart made clear the social outcomes his work was seeking. We must return to a considered, thoughtful approach to technology in which revolutionary, sophisticated, and powerful methods for managing complexity are envisioned first, and software and hardware are then shaped to realize that vision.