The ultimate Zet Universe user experience emerges only from the interplay of hardware and software.
Sign up at http://www.zetuniverse.com/signup
We are continuing our series of posts describing the Zet Universe Interface Language. In this post we will cover the basics of moving things and navigating inside the Zet Universe space.
Space: The final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: To explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
Today we will cover the basics of navigation in Zet Universe space.
As noted in the previous post, a two-dimensional zoomable infinite space plays the fundamental role in user interface interactions. By analogy with the real world's Universe, this space contains everything. In the Zet Universe language we use "thing" to describe any living concept from the real world; every thing is located in the space.
Zet Universe is designed to be used with different input methods, initially mouse, pen and multitouch. There is a dilemma in designing interactions for multiple input methods: we can either optimize interactions for each method, or use the same interaction gestures across all methods. Both approaches have their advantages and disadvantages; to understand them better, we need to clearly distinguish the input methods from one another. Hal Berenson, an ex-Microsoftie who until recently was a Distinguished Engineer at the company, wrote an excellent article on this topic, stating that three main attributes define the applicability of an input method to a given task:
These three attributes, density (how much information can be conveyed in a small space), precision (how unambiguous is the information conveyed), and how natural (to the way humans think and work) can be used to evaluate any style of computer interaction. The ideal would be for interactions to be very dense, very precise, and very natural. The reality is that these three attributes work against one another and so all interaction styles are a compromise.
How navigation works in the two-dimensional zoomable infinite space depends heavily on the distance the user needs to cover from the starting point to the destination. Zet Universe provides a simple dragging metaphor that is the same across all three currently supported input methods. This is simple and effective when the destination lies within one, two, maybe three screens of the current position; however, a long-distance "jump" becomes tedious, as the user has to drag through many screens to reach the final point. This problem is solved by the so-called "Big Picture" view, a higher-level map where only the names of thing clusters are shown:
|Infinite space at normal scale; all things are visible; we call this normal mode the "Infinite Space"|
|Infinite space at semantic zoom scale; only group headings are shown; we call this mode the "Big Picture"|
Thus, the user can navigate with mouse, pen or touch by pressing down (clicking, putting the stylus down, or touching) on a part of the space free of things, moving, and then releasing (the mouse button, the stylus, or the touch) to finish the navigation. This works in exactly the same manner in both modes, "Infinite Space" (the normal one) and "Big Picture". The zooming gesture (mouse wheel for mouse, pinch-to-zoom for touch) ensures a seamless transition from the big picture to the details view and vice versa. Note that we do not currently provide a simple way to zoom in or out with the pen.
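The switch between the two modes can be sketched as a simple threshold on the current zoom scale. This is a minimal illustration, assuming a made-up threshold value and function name; it is not the actual Zet Universe implementation.

```python
# A sketch of the semantic zoom switch between the two modes described above.
# The threshold value is an illustrative assumption.

BIG_PICTURE_THRESHOLD = 0.25  # assumed scale below which headings-only mode kicks in


def view_mode(zoom_scale: float) -> str:
    """Map the current zoom scale to a rendering mode.

    At normal scales every thing is drawn ("Infinite Space");
    once the user zooms far enough out, only cluster headings
    are drawn ("Big Picture").
    """
    return "Big Picture" if zoom_scale < BIG_PICTURE_THRESHOLD else "Infinite Space"


print(view_mode(1.0))  # normal working scale -> Infinite Space
print(view_mode(0.1))  # zoomed far out -> Big Picture
```

Because the mode is a pure function of the zoom scale, the pinch and mouse-wheel gestures only need to change one number for the view to transition seamlessly.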
In the old world of desktop environments, the typical way to choose a thing on the desktop was point-and-click. In the modern NUI world it is simply a tap. We support both approaches to make the interface natural in both interaction modes:
But how can you choose several things at once? In the old world you would just click in a free area and make a rectangular selection with your mouse. What about the NUI world? Your fingers are good enough to move things around, but drawing a free-form selection with a finger is hard; this task requires precision. Thankfully, Windows slates come with a pen (or stylus), and that is how we provide this functionality.
To choose more than one thing, use the "Select Things" button in the Actions Menu:
Once clicked, it turns green and you are now in "lasso selection mode":
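Under the hood, deciding which things fall inside the free-form lasso stroke is a point-in-polygon problem. Here is a sketch using the classic ray-casting test; the `Thing` dictionary shape and function names are illustrative assumptions, not the real API.

```python
# A sketch of lasso selection: treat the closed stroke as a polygon and
# test each thing's position against it with the ray-casting algorithm.


def point_in_lasso(x, y, lasso):
    """Ray-casting test; lasso is a list of (x, y) vertices of the closed stroke."""
    inside = False
    j = len(lasso) - 1
    for i in range(len(lasso)):
        xi, yi = lasso[i]
        xj, yj = lasso[j]
        # Count crossings of a horizontal ray going right from (x, y).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside


def select_things(things, lasso):
    """Return the things whose positions lie inside the lasso stroke."""
    return [t for t in things if point_in_lasso(t["x"], t["y"], lasso)]


square = [(0, 0), (10, 0), (10, 10), (0, 10)]
things = [{"name": "doc", "x": 5, "y": 5}, {"name": "web", "x": 20, "y": 5}]
print([t["name"] for t in select_things(things, square)])  # ['doc']
```

A real implementation would also resample the ink stroke and close it before testing, but the core decision is this one test per thing.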
Selecting Things: Drawer
Once one or more things are selected, the drawer part of the interface appears:
The drawer plays the role of a visual clipboard, helping the user see what is currently selected. This part of the system is heavily influenced by an approach common in real-time strategy games, where selected units are shown in a "drawer" for exactly the same purpose.
Navigation in both modes is the same; the process of moving things over short and long distances is not. Why?
Moving a thing over a short distance is quite similar to space navigation: point-and-click (tap, stylus down), drag, release the mouse (pen, touch). Done. However, when the space contains a sufficiently large number of things, the need for a better long-distance metaphor becomes pressing. To find one, we researched in several directions:
- We wanted an easy way to transfer things that the audience already knows,
- We wanted the metaphor itself to be simple,
- We wanted to make sure it fits the NUI vision of Zet Universe and modern NUI trends (interaction happens directly with content).
One of the easiest ways to transfer things across a long distance is the one used in real-time strategy (RTS) games.
It is known that RTS games initially borrowed some ideas from desktop environments, namely the "click and drag" technique for moving units around. However, moving things has a different meaning in these games, and the pattern "click on a unit, move across the map, right-click to send the unit to the new location" quickly became the standard.
However, in the Natural User Interface paradigm the user expects all content to be directly interactive; specifically, she can drag content with her fingers. At the same time, as noted above, it is annoying to drag the same thing over a long distance, so we needed to find a compromise.
Below is the approach we have taken based on these ideas and considerations.
Short Distance – “Tap-and-Move”
To move one thing:
- tap on it,
- drag it directly within the boundaries of the screen, being as precise as the input method permits,
- leave it at the desired place.
Done: the thing is moved to its new location.
To move many things:
- click the "lasso selection" button,
- draw a free-form line around the things as described above,
- make either a right-click or a long tap.
Done: the group of selected things is "teleported" to the new destination, and their relative positions are kept. We treat positions very carefully, because they carry meaning for our users.
Long Distance – “Teleportation”
For long-distance transfers, the same approach is used for both one and many items:
- select one or more things,
- get to the new destination using a series of pan-and-zoom operations,
- make a right-click or a long tap at the destination point; all selected things are "teleported" to the new destination.
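The teleportation step above can be sketched as a single translation applied to the whole selection: the group's center lands on the destination point while every thing keeps its offset, preserving the spatial layout the user cares about. The data shapes here are illustrative assumptions.

```python
# A sketch of "teleportation": translate all selected things so the
# group's centroid moves to the destination, keeping relative positions.


def teleport(selected, dest_x, dest_y):
    """Move the selection so its centroid lands on (dest_x, dest_y)."""
    cx = sum(t["x"] for t in selected) / len(selected)
    cy = sum(t["y"] for t in selected) / len(selected)
    dx, dy = dest_x - cx, dest_y - cy
    for t in selected:
        t["x"] += dx
        t["y"] += dy


group = [{"x": 0, "y": 0}, {"x": 4, "y": 0}]
teleport(group, 100, 50)
print(group)  # [{'x': 98.0, 'y': 50.0}, {'x': 102.0, 'y': 50.0}]
```

Note that the distance between the two things (4 units) is unchanged after the jump; only the group as a whole has moved.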
So, today we discussed how the user can navigate her Zet Universe of information, select one or many things, and move them over short distances (within the screen boundaries) or long distances.
Now, if only we could teleport to a new geographical position on Earth with the same simplicity and in almost zero time, just as you can move information in your Zet Universe!
Zet Universe is designed to be a transparent, ubiquitous environment in which the user is left with content and the actions she can apply to it, with the complexity of the underlying technologies removed.
In this series of blog posts we will describe the interface language of Zet Universe.
A user interface uses a language as the medium to translate the user's intent into actions recognizable by the underlying system.
Zet Universe provides such a language by using the following metaphors:
"Space" is the basic element of the interface, as everything else happens inside it. It is a two-dimensional (2D) zoomable space; we call it "infinite space". It is impossible to see absolutely everything at once, because "nobody can embrace the unembraceable", as Kozma Prutkov said; this gives us a working metaphor of infinite space with no practical limit on the number of elements the user can place in it (the only limit is disk space). The space is designed to be friendly for navigation with both mouse and touch.
The second basic element is a "thing". We live surrounded by things. These things can be physical objects from our environment, products of our imagination, or digital objects created as elements of the virtual world built by computer applications, websites and games. The things we deal with, from documents and other files in our folders to users on Facebook and emails in our inbox, seem natural to us. We constantly interact with these things, switching back and forth between the physical world and the virtual one, referencing them in our endless collaboration with friends and work colleagues. The patterns of interacting with digital objects were brought over from the physical world by Internet and computer pioneers, and these patterns are now highly interconnected as we constantly transfer experiences between our real and digital lives.
Instead of the computer's strict understanding of classes and instances, things in Zet Universe are more familiar to the user; as in the Metro Design Language, things are content.
In the current alpha development milestone, there are several kinds of things the user can add to her space in Zet Universe*:
- Topics,
- Files,
- Web Pages.
(*We plan to add more kinds to the system as we move forward with Zet Universe development.)
A topic is the central element of the Zet Universe kinds map; it helps define a part of the area, giving it a specific meaning according to the user's needs. It is conceptually similar to the name of an area on a geographic map:
|An example of topics, or labels, on Bing Maps|An example of a topic on the Zet Universe infinite space|
A file is the second foundational element of the Zet Universe kinds map. It provides interoperability with existing information stored in various information silos across the user's computing devices, and enables compatibility with existing applications that use individual files as information containers.
A file is represented by a thumbnail and its display name. Any Windows application that uses the standard Win32 APIs to work with files can, by design, work with files stored in Zet Universe, including creating, editing and deleting them. This means that if the user adds an image, a document, or a PDF file to Zet Universe, she can safely open it from there, edit it and save it; its contents will be kept inside the system.
A web page is the third foundational element of the Zet Universe kinds map; it is a text-only replica of an existing web page, acting as the next step beyond favorites in web browsers. When the user pins a web page to her infinite space, she can then recall it just as she would a link stored in her browser's favorites or bookmarks.
Actions are the third metaphor of the Zet Universe language. In the philosophy of language, actions performed through language are called "speech acts". Speech acts are the way language is used to accomplish things: asking questions, making requests, taking positions, making commitments, and so on. In Zet Universe they are implemented as gestures and other forms of direct manipulation. Actions, therefore, are the binding between a speech act, a kind and a gesture. Each action is recorded by Zet Universe with its specific meaning ("remember", "pin", "create", "link", "open", "forget", etc.).
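The binding described above can be sketched as a small record that ties the three parts together. The field names and the in-memory log are illustrative assumptions, not the actual Zet Universe internals.

```python
# A sketch of recording an action as a binding of speech act (meaning),
# kind and gesture, as described in the text.

from dataclasses import dataclass


@dataclass
class Action:
    meaning: str   # the speech act: "remember", "pin", "create", "link", ...
    kind: str      # the kind of thing acted upon: "topic", "file", "web page", ...
    gesture: str   # how the user expressed it: "double tap", "drag-and-drop", ...


log = []


def record(meaning, kind, gesture):
    """Append an action with its specific meaning to the action log."""
    action = Action(meaning, kind, gesture)
    log.append(action)
    return action


record("pin", "web page", "double tap")
record("remember", "file", "space menu")
print([a.meaning for a in log])  # ['pin', 'remember']
```

Keeping the recorded meaning separate from the raw gesture is what lets the same physical gesture mean different things depending on the kind it targets.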
Actions & Gestures
This is the foundational gesture used to add various content types to Zet Universe. To activate it, the user performs either a double-click with a mouse or touchpad, or a double tap with a finger or pen, on any part of the space free of existing content.
Create, Remember, Pin, Select, Capture
These actions are available from the Space Menu, enabling the user to add new content to Zet Universe or operate on it. We decided to use different words for these actions, based on the corresponding kinds, to focus the user's attention on the different meanings these actions have. For instance, create topic means that a new topic is created on the surface of the infinite space; remember file(s) lets the user choose one or several existing files from her computer for Zet Universe to remember; pin web page lets the user naturally pin an existing web page to her infinite space; capture photo lets the user quickly capture a photo in the context of the current activity; finally, select things helps the user make a lasso-like selection of things (see the second part of this blog series).
A link is a property describing an explicit relationship between any two things, helping the user define her own ontologies. To create a new link between two things, the user performs one of the gestures described below, depending on the distance between the things.
In the case of a small distance, the user can simply drag and drop one thing onto another.
In the case of a large distance, the user uses a two-step approach similar to the one used in real-time strategy games:
- The first step is to select one thing on the surface, either by clicking it with the left mouse button or by tapping it. Once this is done, the "drawer" area is populated with short information about the selected thing:
- The second step is to make a right-click or perform a touch-and-hold gesture on the second thing. A dialog will appear asking whether the user wants to link the selected things together.
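The choice between the two linking gestures can be sketched as a simple distance check. The threshold and all names here are assumptions for illustration; the actual product decides this through direct manipulation, not an explicit function.

```python
# A sketch of dispatching between the two link gestures described above:
# drag-and-drop for nearby things, the two-step RTS-style approach otherwise.

import math

SCREEN_DIAGONAL = 1000.0  # assumed: "small distance" means within about one screen


def link_gesture(a, b):
    """Suggest which gesture creates a link between things a and b."""
    distance = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
    if distance <= SCREEN_DIAGONAL:
        return "drag-and-drop"
    return "select, then right-click / touch-and-hold"


near = ({"x": 0, "y": 0}, {"x": 300, "y": 400})
far = ({"x": 0, "y": 0}, {"x": 3000, "y": 4000})
print(link_gesture(*near))  # drag-and-drop
print(link_gesture(*far))   # select, then right-click / touch-and-hold
```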
To Be Continued.
We updated the screenshots for our core product features. We also uploaded the first screenshot of our "Visual Search" feature, which highlights search results in the user's information space as she types the search query.
Take a look and give us your feedback!
We are particularly fond of our visual search. :)
I keep hearing people refer to the Samsung Series 7 Tablet PC as being the same hardware as the device handed out at our developer event, BUILD.
This is not true. The Series 7 slates are technically different devices from those handed out at BUILD. Here's a chart comparing the technical specifications of each:
Well, some folks asked me about the relation between the famous Productivity Future Vision video produced by Office Labs and Zet Universe, the project I'm working on now at Neocyte Labs. I guess it's time to give some explanation.
First of all, I should say that I'm a bit of a crazy IT guy who strongly believes he can change the world. This belief is based on several success stories I've read since I was a young kid: stories by Mark Twain and Jack London, a biography of Bill Gates, and videos and blog posts about Steve Jobs's success; and on several great teachers from middle and high school, university and business school.
And I have a dream: for a long time I have wanted all the knowledge of the world, in its digital form, to live inside the virtual world.
This blog post tells several (relatively short) stories of my life dedicated to the creation of Zet Universe, including:
- ThinkWeek Paper
- Microsoft Context Awareness Initiative
- NTO Incubation Team
- Windows 7 Touch Team
- Productivity Future Vision
- Project Universe
- Zet Universe
This blog post doesn't include everything I've done around evangelizing the Context-aware Computing vision at Microsoft; it only covers the stories related to the beginning of the Zet Universe project.
So, the story begins with WinFS.
One of the big dreamers in the area of knowledge management was Bill Gates, with his ideas about Integrated Storage (see "Road Ahead Excites Gates", eWeek, 2003, and "Bill Gates on WinFS", PCMag, 2008). His WinFS vision (yet another implementation of that idea) back in 2003 captured my mind, and together with the visualization of WinFS data provided by Windows Longhorn, I was sold.
Bill Gates, in his own words: "There is a famous quest of mine called integrated storage, where you have not just a file system but more of a flexible object-type database: Things like your contacts, calendars, favorites, your photos, your music—and how you rate those things—are stored in a structured environment." WinFS was this system, the next-gen underpinning to Windows, and it was planned as part of Cairo, the code name for Windows 95. It's still a great idea. But making it happen? Not so easy. (Bill Gates on WinFS, PCMag, 2008)
I got the first bits of Windows Longhorn (builds 4008, 4051, 4074) and tried to code against WinFS. Then I tried to re-implement it, first on Microsoft Access, then on SQL Server 2000 and, later, Yukon. Lots and lots of prototypes.
As you know, WinFS was later cut from Windows Longhorn, Longhorn itself was reset in 2004, and Windows Vista never got WinFS at all; WinFS itself, though, was released as a Beta and then a Beta Refresh in 2005.
Back in 2005 I created a small unofficial group dedicated to WinFS evangelism in Russia: several slide decks, prototypes, blog posts, etc. I wrote a paper on building knowledge management software on top of WinFS and published it at Microsoft's student conference, and gave slide talks at student conferences across several Moscow universities.
In the summer of 2006 the WinFS project was killed before its second Beta release.
For me it was a dream killed by Microsoft, and when Bill Gates was last in Moscow (October 2006), I asked him a question about WinFS. He said that "WinFS will find itself reappearing in multiple Microsoft products later on", and eventually he was right: ADO.NET Entity Framework was born out of WinFS, as was Microsoft Sync Framework, and all the hard work around Win32 support for BLOB storage inside SQL Server 2008, 2008 R2 and 2012 is based on the WinFS legacy.
In the summer of 2006, Alexander Lozhechkin (who was Evangelism Manager, DPE at Microsoft Russia at that time) invited me to join his team as a MACH student (MACH stands for Microsoft Academy for College Hires). I passed several interviews and tests, and my first working day was January 9, 2007. That was when I started to collect all the information about WinFS I could find inside Microsoft, including specs and bits; I even talked to Quentin Clark and several folks from the old WinFS team (by the way, Shishir Mehrotra is now a YouTube VP at Google). I was totally crazy about bringing WinFS back to the Windows team. I even asked Steve Ballmer a few crazy questions on this topic in the summer of 2007. I was young and passionate about WinFS; thankfully, that didn't cost me my job.
To summarize this section, let me share my own post-mortem of WinFS. It's quite short, as it highlights the root product problems.
| Problem | What Went Wrong | What Could Have Been Done |
| --- | --- | --- |
| Vision | The vision was too broad: an active storage platform for organizing, searching and sharing data | Focus on storage and search first |
| Killer App | No killer app after Windows cut WinFS back in 2003/2004 | The Windows Shell should have been the killer app |
| Schedule | The schedule was originally tied to the Windows Longhorn schedule, which itself wasn't perfect; once WinFS was off the Longhorn ship, it was easier to plan things | Build WinFS first on its own, in collaboration with the Windows Shell team and without any commitment to Longhorn, to understand all the problems and solve them independently of the Longhorn schedule |
| Key Customer | Connected to the project being cut from Windows (which was supposed to be its key customer) and to the lack of a killer app | Same as above |
| Concepts | The team underestimated the difficulty of the concepts (for instance, the more modern European NEPOMUK project, which also built a WinFS-like storage system, took a long time and a lot of industry and academic expertise); the next row gives an example of this | Ship more often to test ideas, research Semantic Web ideas, and dive into philosophical books to better understand the problems that come with the goal of becoming a generic storage for everything |
| Static Schemas for Everyday Things | The approach works well inside static corporate ontology environments, but not in an open world of myriad apps; since WinFS objects were envisioned to be used by myriad apps, it was impossible to agree on a common schema for everyday things like "Message" or "Person" | Use minimal schemas for the basic information entity (that's what we call them in Context Storage); use the Semantic Web triples approach to make the data representation schema-free |
I think the main problem of WinFS was its lack of deep integration with the Windows Shell (from a product development perspective). That is why we work on Context Storage and the user experience together.
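The schema-free triples approach mentioned in the post-mortem can be sketched in a few lines: instead of a fixed "Person" or "Message" schema, every fact is a (subject, predicate, object) triple. This is an illustration of the general Semantic Web idea, with made-up identifiers; it is not the actual Context Storage design.

```python
# A minimal sketch of a schema-free triple store: facts are
# (subject, predicate, object) tuples, queried by pattern matching.

triples = set()


def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))


def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]


add("msg:42", "rdf:type", "Message")
add("msg:42", "from", "person:alice")
add("person:alice", "name", "Alice")

print(query("msg:42", "from", None))  # [('msg:42', 'from', 'person:alice')]
```

Because no row format is fixed in advance, any app can attach new predicates to existing things without anyone agreeing on a common schema first, which is exactly the failure mode the table above describes.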
As you can see, I did this research to better understand what went wrong with the implementation, while still strongly believing in the Integrated Storage vision.
For some time there was a special, unique facility at Microsoft called the Microsoft Center for Information Work (CIW), opened in 2002. One of the great folks behind it was Russ Burtner. His team produced a lot of great videos (for instance, "Microsoft CIW Prototype Demo" and "Center for Information Work – The Desk"), in addition to the facility's interiors and its hardware and software setup showcasing the future of information work as envisioned by the Microsoft Office group at that time.
July 2007 was when a new organization, Office Labs, was getting started inside Microsoft's Business Division, and CIW was transformed into a new group, the Envisioning Team. Russ worked on that new team.
Well, this is not something often discussed outside of Microsoft, so to keep this part of the story short, I'd advise you to read this Seattle Times article on what BillG's ThinkWeek is (or was). Anyway, in May 2007 my friend Andrew Webber (UK) from the MACH program and I were passionate about the WinFS and S+S ideas and about Microsoft itself, and we decided to create some prototypes to visualize our ideas. As we were in the US in the summer of 2007, I was able to talk to the famous Russ Burtner from CIW and was lucky enough to get him involved in our ideas, which morphed into a ThinkWeek paper. Needless to say, Russ is a fantastic guy (he currently works at PNNL; here is his Precision Information Environments project video, highlighting his latest work). We also got another employee to publish his ideas as part of the paper. The paper got several comments, including from a Director of Engineering on the Windows team and from another Microsoft executive, CTO Donald Ferguson (who was Chief Software Architect at IBM Software Group before Microsoft; he later moved to CA). Unfortunately I can't publish the paper here, but to highlight: it contained the same ideas I had been advocating for some time now, namely that Microsoft needs its S+S vision with its own unique storage (similar to the WinFS concepts) to store everyday things, and a new immersive user experience (like the CIW sketches made by Russ Burtner and some of my sketches of a WinFS-based desktop).
So, the Productivity Future Vision was directly influenced by our ThinkWeek paper, as Russ Burtner later said. Specifically, the infinite desktop concept shown in the video started with drawings we made in the ThinkWeek paper.
Microsoft Context Awareness Initiative
That ThinkWeek paper and the more or less positive executive comments framed the vision for a Context-aware Windows Platform and for Microsoft's future that I advocated while working there. I continued building prototypes and collaborating with Russ Burtner on the Productivity Future Vision video that he worked on with the rest of the Office Labs Envisioning Team. As it was not about me but about the vision, I formed an initiative called the "Microsoft Context Awareness Initiative", which eventually included 50+ people across Microsoft product, research, sales, marketing and services groups around the world.
It is important to highlight that all these activities were done in my spare time, often during nights and trips to the US; they were not technically part of my job at Microsoft.
NTO Incubation Team
In March 2008 I got my first big partner at Microsoft Russia, Mikhail Matveev, who was National Technology Officer (NTO) at Microsoft and had replaced Igor Agamirzyan, a famous Microsoftie who had worked hard since the early 2000s to bring Microsoft Research and R&D to Russia. From 2008 on, my activities were part of the NTO activities in Russia. More than that, Mikhail Matveev was the advisor for my diploma thesis, which was also on the topic of context-aware computing. Needless to say, he was and still is a good friend of mine.
At the time I was working on the DPE team and, as before, continued these activities in my spare time.
After my second trip to the US I formed a small incubation team called the "NTO Incubation Team" to build a prototype of an augmented-reality application for Windows 7 as a proof of concept. The team included my good old friend Alexander Popov (who now works at Microsoft) and Vladimir Borisov, who was the key engineer behind the project. Our project was supported by the Director of Engineering from the Windows team (who had originally commented on our ThinkWeek paper); the project was later shown to MSR CVP Dan Ling (as one of the Russian NTO projects) and to Windows 7 CVP Bill Mitchell, and got good reviews.
Thanks to a recommendation from Dan Ling, I received an invitation to meet with Mary Czerwinski, Research Area Manager of the VIBE team, in May 2009, so I made my third trip to the US.
Windows 7 Touch Team
As I had (and still have) a huge interest in Natural User Interfaces, I bought one of the first multitouch laptops, a Dell Latitude XT, to try out the multitouch APIs of Windows 7. That led me to become an internal beta tester of the multitouch experiences in Windows; I was even involved in some decisions around the level of multitouch support in Windows Explorer (thanks, Bert Keely!).
Productivity Future Vision
At the same time I was collaborating with the Office Labs Envisioning team, and I met the team twice before my third trip to the US. One of the things I worked on was joining my work on prototyping "Integrated Storage" with Russ Burtner's ideas around the Infinite Desktop concept. The Productivity Future Vision was finally released to the public circa June-July 2009; below you can see a screenshot of the "Infinite Desktop" with topographic clusters as originally envisioned in our ThinkWeek paper. Certainly, the original concept transformed several times and took different forms before it reached the point seen in the screenshot below.
See the Productivity Future Vision video here.
In May 2009 the first real working prototype of Project Universe (then called the "Context-aware Shell UX") was built in collaboration with Office Labs folks (I worked with Russ Burtner and Christian Schafleitner from the Envisioning Team, as well as with Nathan Fish and a few other folks from other Office Labs teams). The prototype was based on the ideas of WinFS, CIW, our original ThinkWeek paper, papers on building digital work environments published in the book "Beyond the Desktop Metaphor" (co-edited by Dr. Mary Czerwinski), the Productivity Future Vision visuals, Jef Raskin's zoomable user interface concepts, and so on. Below is a screenshot of the final prototype as it was later shown at multiple events inside and outside Microsoft.
As we can see, the European Union's researchers have also been involved in similar work with their famous NEPOMUK project, and other similar projects started to appear later on. Back then, most of the people at Microsoft I discussed these ideas with were skeptical, except the few folks who helped me build the original prototype.
That was the moment when the original “Project Universe” started.
Zet Universe is a direct continuation of "Project Universe". It is still based on the same ideas: Integrated Storage (Context Storage), the Infinite Desktop, multiple workspaces, the Semantic Web, etc. I incorporated a lot of ideas into the original prototype and tried to learn as much as possible from the failures of other similar projects, including WinFS, Microsoft's Semantic Engine, Apple's OpenDoc, and so on.
I presented (Zet) Universe as a possible direction for digital work environments at the UX Russia 2010 conference, just a week after the end of my internship as an HCI researcher at Microsoft Research. I got several positive reviews, but at the same time I was finalizing my interviews with Google, so I put my activities on hold for a while. In November 2010 I was invited to join Greenfield's Harvest 11/10 (a startup-weekend-like event) and presented a project inspired by Universe but designed in the form of a web browser. We took first place in the competition, and I got my first co-founder, Elena Goidina.
That's how Universe (later Zet Universe) started in its current form. The story of Zet Universe's development is a completely different one, so it is not included in this post, but you will see it posted on this blog anyway.
To sum up, working on all of these side activities around context-aware computing was very inspiring for me. I got the chance to work with such famous and great people as Gordon Bell, Eric Horvitz, Mary Czerwinski, Dan Ling, Russ Burtner, and many others during my tenure at Microsoft. I participated in multiple internal Microsoft innovation activities and even organized my own Microsoft Context-aware Computing Workshop back in March 2010 (quite successful, according to participants' feedback). Not everything was done right, of course, but it was a great time.
All of these activities have accumulated into the Zet Universe project. Now you can see why I'm so dedicated to it, how much energy and passion has gone into its making, and why I'm working on this project instead of joining some great startup or worldwide company here in Russia or in Silicon Valley.
It all started with a dream.
This white paper is also available in the PDF format via this link: The Post-PC World Future Vision – Neocyte Labs
It's almost a week since Microsoft's exciting Windows 8 Consumer Preview launch, and it's a good time to reflect on the usage experience. It's time to share some thoughts on what's good, what's bad, and what could be the next big thing of the Post-PC world. Why did Microsoft bring such a radical change to Windows to life? As usual, let's start with history lessons.
The Information Age: The PC & Post-PC World
Alvin Toffler, one of the philosophers of the 20th century, wrote in his book "The Third Wave" that the future would be a world where people primarily work with information. He predicted that the most common job would be that of the "knowledge worker". Alvin Toffler was right. He predicted the new Information Age, and it started with the rise of the PC.
The Rise of PC
As you might know, the first killer application for the PC was Lotus 1-2-3, the electronic spreadsheet. Its success shaped the popularity of the PC – it started the productivity market and brought the PC into the corporate environment. Microsoft’s defining success was not its Windows operating system but its productivity suite, Microsoft Office. Its huge popularity led Microsoft and many other companies in the industry to focus their research on a new category of people, “information workers”. While these people are not exactly the same as Toffler’s knowledge workers (see Mark Bower, “What’s in a name? The Information Worker, The Knowledge Worker and the Structured Task Worker”, 2005), they are the ones who moved paperwork into digital form, building the information spaces for private businesses and governments.
Key PC Value: A Tool for Information Consumption toward Information Creation
To conclude, up until recently (the iPad’s appearance), the average PC user’s focus was on the whole experience of information creation & consumption. You used productivity tools like Microsoft Office or Basecamp, you wrote to your blog or, more recently, to your social network – you were creating new information and dealing with it. The average PC user was an information worker, and as more and more processes became automated, the need for Data Entry specialists grew smaller and smaller while the need for Knowledge Workers grew. From another perspective, there is, of course, a trend of consuming information created by other Knowledge Workers, be it reading articles & books, listening to music, watching videos, or looking through social network updates. And then…
The Rise of Tablets: A Brave New Post-PC World
Tablets – The Second Coming
Bill Gates, co-founder of Microsoft, was a big fan of tablets back at the beginning of the XXI century, and Microsoft invested a lot into building a special product line to make his dream happen (see Microsoft Tablet PC). Unsuccessfully. Why? The two biggest issues of the original tablets were that they weren’t optimized for the new kind of interaction they introduced, and that they lacked apps ready for that new interaction model. Despite large investments in pen computing (handwriting recognition, etc.), it was finger-driven touch that became the successful interaction model for tablets. In combination with new, simplified apps designed for finger-driven interaction, the new, Post-PC tablets quickly started to change the traditional PC market.
Introducing First Post-PC Habits
The new, “Post-PC” tablets introduced not only new interaction models, but also new habits of dealing with information. Because tablets usually don’t have a physical keyboard and mouse, fast information input and precise pointing are not available to modern tablet users. The need for long battery life and for large controls (making it easy to operate the UI with fingers) drove the need to make tablets suitable for a specific range of tasks, commonly called content consumption, or Information Consumption (see “The State of the Tablet and eReader Market”, Mashable, 2011). Modern tablets brought the concept of natural user interfaces to the market; but they also turned Information Consumption into a new user experience (UX) mode. As this new mode appeared, however, it didn’t completely replace the existing one, focused on information creation. A typical workflow for the modern user is the following:
Mode: Information Creation
Device: PC/Mac. This mode consists of several activities, including consumption (reading, analysis) and creation itself (making notes, creating new documents, reviewing existing ones, etc.). A web browser with multiple open tabs, several text documents, spreadsheets, and slide decks are usually open, and a lot of information is transferred back and forth between different content editors to foster new information creation.
Mode: Information Consumption
Device: iPad/Android tablet. This mode consists of several activities largely focused on consuming existing information for entertainment purposes. It is not that tablet users never create information – they do: they post status updates on Facebook, they triage emails (short answers, given the lack of a fast information input mechanism), they post new photos, etc. – but the focus is largely on information consumption.
The Post-PC World before Windows 8
To summarize the first part of this article: the Information Age has had several remarkable periods, of which the most notable were the Rise of PC and the Rise of Tablets. In other words, it was the rise of the information creation & consumption user experiences introduced by PCs, followed by the consumerization of IT with Post-PC devices focused on stunning and delightful information consumption user experiences.
It is clear, however, that for all the stunning new user experiences introduced by tablets, these devices are unable to address the needs of Knowledge Workers, who still need software to create the information that they will then be able to consume. Let me quote here a recent article on TechCrunch (Google Drive and Cloud Wars, TechCrunch, 2012) highlighting this need: ”…with Google Docs, we bought into Google’s vision for reinventing how we create and interact with data. At least many of us in the tech community did. I remember thinking that Google Docs (and Writely before it) would easily take over Microsoft Office as the choice solution for creating and editing new information. How could it not? But it turns out that lawyers still needed to share detailed, structured documents. Investment bankers wanted to access complex spreadsheets. Doctors had to review medical records. Reality set in that most people still created content using local apps like Photoshop, Autodesk and – gasp! – Microsoft Office“. (Emphasis is mine – D.K.) This is important. There is a strong need to create information, and the new Post-PC habits can’t address this need. So what did Windows 8 bring to the table?
Windows 8 Consumer Preview
Microsoft seems to be late to the show: Apple is getting ready to announce its third generation of iPad, and Google’s Android-based tablets have become almost a de-facto standard after the iPad. But Windows 8 is a slower yet more coherent answer to the market’s needs. Windows 8 is a chance for the company to survive and regain its share in the PC/Post-PC market by applying Microsoft’s beautiful Metro design language to Windows 8 and introducing the new Information Consumption UX mode that made modern tablets so popular.
Apple, meanwhile, though in a different way, is porting its Information Consumption UX to the Mac (which is a PC from the form-factor perspective), and is not getting as much love as it was expecting (see Mac OS X Lion Review: This Is Not the Future We Were Hoping For, Gizmodo, 2011, and Mountain Lion Review: What Happened to Apple’s Innovation, Gizmodo, 2012).
In contrast to the iPad and Android tablets, which are designed for only one user experience mode (information consumption), Windows 8 tablets are designed with two different user experience modes in the same machine – the modern Metro and the classic Desktop – to address not only the need for new, Post-PC user experiences, but also the need to create information. The company’s rhetoric is that “Windows 8 will offer a no-compromise experience, the best of consumption and creation, of portability and power, of new and familiar” (see Steve Ballmer’s Speech at CES 2012, Microsoft, 2012). There are, however, several problems with Windows 8 that I feel a need to highlight in this blog post.
Conflict of Two Information Processing Modes
In the PC/Mac + iPad/Android tablet case, the experience is physically split between two different devices. The experiences are different, but so are the devices, and it is much easier to adopt the new tablet experience for information consumption while continuing existing habits of performing familiar information creation activities on the PC/Mac. In contrast, Windows 8 aims to deliver both worlds in one physical device; unfortunately, the separation between the experiences is not as strong as in the first case – you can easily switch to the “Information Consumption” (Metro) mode and back to the “Information Creation” (Desktop) mode, back and forth. And you are supposed to do that. However, there is a problem: information consumption and information creation are not separate things; creation requires consumption, and consumption is an integral part of creation. And thus the combined user experience is a conflict of two information processing user experiences:
- They involve different user interface paradigms,
- They introduce different interaction models,
- And because information consumption is an integral part of the information creation mode, the two new user experience modes introduced by Windows 8 enforce fast context switching, where the differences mentioned above are so huge that the cost of each context switch is way too high.
Open Questions of Windows 8 World
To summarize, Windows 8 is a necessary step towards creating a truly new Post-PC user experience supporting both information processing modes – creation and consumption – but the way it addresses this problem is not sufficient to satisfy the needs of the new generations who fell in love with their iPads and Android-based tablets. The classic Desktop user experience mode of Windows 8 is its Achilles’ heel.
The Next Big Thing
Given the points discussed above, it now seems obvious that the Next Big Thing after the Rise of PC and the Rise of Tablets will be the rise of new devices, and corresponding software, that will not necessarily look like PCs or Tablets, but will address both the need for stunning, natural user experiences and the need for both information creation and consumption.
The New Devices of the Post-PC World
We already see the signs of these devices; their focus is on minimizing the very costly transition between the information creation and consumption user experiences. The industry is continuously looking for new form-factors to address this problem. There are several approaches, and some of them are documented below.
Tablets & Slates
Modern tablets based on ARM processors are gaining more and more computational power, but OS-based software limitations make it impossible for them to handle heavy information creation tasks, leaving those to their PC counterparts, slates. There are several examples of slates that combine the sleek look and feel of modern tablets, but slates evolutionarily belong to the old Tablet PC camp. A typical example is the Samsung Slate 7 Series, which combines the power of a traditional PC, a sleek user experience, and a wireless keyboard with a docking station, helping its owner make the transition between these two user experiences.
Convertibles also belong, evolutionarily, to the old Tablet PC camp. For instance, the HP 2760p laptop is a good example of such a convertible device, providing the traditional laptop user experience in combination with a modern tablet user experience via a single turn of the display.
Samsung’s Sliding PC 7 Series slate tries to decrease the cost of this transition by providing a sliding metaphor.
Dell’s Inspiron Duo addresses the context switch between the two modes in a way similar to that used by convertibles, but in a truly innovative manner.
There are also several examples of so-called transformers out there, like the Acer Iconia W500.
After the well-known industry story about Microsoft’s secret tablet, Courier (which never shipped), some focus shifted to dual-screen tablets, like the Acer Iconia; however, these devices seem unable to attract users’ interest, as they lack simple and effective ways of entering information as quickly as is possible with other keyboard-enabled devices.
The Search for New Device Form-Factors Has Just Started
It is clear that the search for a form-factor for new devices that provide stunning natural user experiences and support both the information creation and information consumption user experiences is not finished. The key success factors for the winning device will be the speed and transparency of the transition between these two user experience modes. It is also clear that such a device will need PC-like computing power to run multiple applications and multiple documents, be occasionally connected to the network, have large storage to support offline scenarios, and support new natural user interaction models like Kinect and holographic projection.
While the industry is still in search of a new device form-factor, the binding connection between the old and new worlds will be new software that helps users make the transition from old devices to new ones. What will this software look like? What specific problems should it address? The Post-PC software is a Digital Work Environment that is designed to support both the information creation and information consumption user experience modes, has a transparent or zero-cost transition between these modes, provides natural user interface(s), and works on both existing PCs and Tablets, as well as being a native application for the new Post-PC devices discussed above. It can be characterized as follows:
- This software will extend the beautiful, touch-friendly experiences of modern information consumption applications to the creation mode without compromising on features,
- This software will be designed to support touch input as a first-class citizen for information analysis tasks,
- This software will use attractive, actionable visualizations enabling fast and direct interaction with information,
- This software will work in both online and offline modes, enabling working and being entertained anywhere without a constant need to be online,
- This software will be cloud-connected, which means that its data will be securely backed up by cloud services; it will also make it quite easy to transition between different devices, with the user only needing to provide their credentials to set everything up and running,
- This software will provide the user with the ability to work on multiple documents of the same and different kinds and quickly transfer information between them; it should build on the ideas underlying Android Intents and Windows 8 Charms, which could also safely be called system-wide semantic actions;
- This software will support new information creation on-the-fly, and as its philosophy continues the direction of natural user interfaces, content will be king; this software will have content and people as first-class citizens, as opposed to the apps that are the focus of the modern Post-PC ecosystem; and any kind of content (people, articles, documents, companies, places, etc.) should be pinnable on the new infinite desktop of this digital work environment;
- This software will be intelligent; it will provide semantic and contextual recommendations based on the user’s current activities, and enable cross-content search (in a manner similar to Windows 8);
- This software will enable users to focus on their current activities and be shielded from constant interruptions (see Reading Books on a Tablet is Dumb, Gizmodo, 2012); at the same time, it will help users see the big picture and never be lost in the massive information piles stored on their devices; it will help users quickly switch between their different activities, ranging from home to work, and quickly restore the context of each activity by providing access to all the content they worked with before they left that activity to transition to something else;
- This software will have integrated communication support, making information sharing and people-to-people communications first-class citizens of the system as well;
- This software will provide a collaborative work environment as an integral part, extending the power of existing collaborative tools like Google Docs to complex information creation software;
- The software components enabling content consumption and creation (the new “apps”) should be easily suspended and restored when needed;
- This software will help users instantly link various content together, as well as traverse those links to discover the relations between different pieces of information in the modern, complex information world.
The Work on Making The Post-PC World Happen is Just Getting Started
In the past 20 years we’ve seen how the Information Age introduced the new job role of the “Knowledge Worker”; we’ve seen the Rise of PC, which introduced the information creation and information consumption user experiences that powered Knowledge Worker activities across the world; and we’ve seen how the Rise of Tablets shifted the perception of computers and made the modern Post-PC device a user-friendly, simple, playful device that makes it easy to play and consume information. We see that the transition to the Post-PC world is still at its beginning, and that the industry is searching for the device form-factors that will complete the transition from the PC phase to a brave new world of natural user experiences. We see that software companies are in constant search of new user experiences that combine the information creation and information consumption modes, and how Microsoft is addressing this problem with Windows 8; and we see, finally, that even Windows 8 does not answer the need for the new, modern information creation natural user experiences that the Post-PC world dictates. It is clear that we still don’t have the hardware of the true Post-PC world – hardware that supports the information creation and information consumption user experiences together, based on natural user interface principles – and, finally, that we are yet to get the new software of the Post-PC world.
Zet Universe as the New Digital Work Environment Software for the Post-PC World
At last, I want to highlight that we at Zet Universe are working hard to make this new software dream happen; our goal is to realize this vision of the Post-PC Digital Work Environment in our software, Zet Universe. Feel free to read about our vision of Zet Universe, check out our work on the latest Product Features, and subscribe to our Alpha Testing mailing list to be among the first to get access to the software bits of the Post-PC digital work environment.