My favorite suggestion for the next Microsoft CEO comes from Robert Scoble:
“Get someone who loves the future. A CEO shouldn’t just need to be a builder (like you said, someone who architects, runs software teams, etc) but also needs to stand up in front of the world and get everyone to believe. Ballmer NEVER did that for me. ScottGu? Yeah. David Sacks? Yeah. But needs to be someone who understands contextual systems (mobile, local, social, sensors, wearables). At least for the consumer side of the fence.”
From my perspective as the CEO of an innovative company who loves the future, that alone is not enough.
You need to firmly believe in the future you envision. You should live in that future inside your mind, and bring it into today’s world as if it already existed.
You should be a builder who could make that future happen on his own. Describe the vision. Design it. Write code. Ship it.
Then, with a team like Microsoft’s, you can scale your ability to make that future happen.
What kind of future do I see for Microsoft?
John F. Kennedy once said:
“We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”
When I first read Bill Gates’s books, he was writing about intelligent software that becomes the digital nervous system of the enterprise, software that becomes a digital assistant for everyone. Bill Gates wanted Microsoft to bring the power of computers to every desk and every home. Many companies are now crazy about consumers, and Microsoft did a lot to transform half of itself into a consumer company. But Microsoft started as a company for hackers, for creators who built their first computers from scratch (remember BASIC for the Altair). A lot has changed since then: a single OS across all devices and services, an S+S productivity suite, a first-class development platform and tools, and so on. But the passion to create software for those who create has always been at Microsoft’s heart. And I believe that Microsoft should be a company for those whose job is to create new things.
Some people believe that chasing somebody else’s business success is good business practice, but I believe that Microsoft had a very strong team of visionaries with enough data, passion, and energy to envision a different future, rooted in Microsoft’s strengths and its focus on creators.
I have participated in several internal innovation events called ThinkTanks. In 2010 I organized a unique Microsoft Context-Aware Computing workshop together with Microsoft Research in Redmond, WA, and I have seen many talented engineers and researchers who built innovative prototypes and designs, who wanted to help the company make a breakthrough into the new, contextual world, where hardware has sensors and software is intelligent and becomes our digital assistant. But nobody wanted to listen to them.
I believe that the new Microsoft should never allow the closure of innovative projects like Courier. It should never allow talented people like those behind the Office Envisioning team to leave the company. It should never allow those in charge of existing businesses like Windows to eliminate their underpowered challengers inside the company.
I believe that the new Microsoft CEO should bring a culture of creators into the company. He should give power to creators and do everything possible to help them unleash their creativity, reimagine Microsoft products, reshape the IT world, and make the future vision a reality.
The new Microsoft CEO should make it a company of creators, a company for those who create the future.
Zet Universe, developer of the visual collaboration and sharing app Zet Universe, closes a pre-seed round with three VC firms, led by the Moscow State University (MSU) Business Incubator.
BOSTON, Aug. 14, 2013 /PRNewswire-iReach/ — Today at the Demo Showcase for Autodesk executives, Zet Universe announced the closing of a pre-seed round of financing from three VC firms: MSU Business Incubator, LETA Capital, and Altair.VC. The investment will help the company complete its first visual collaboration and sharing app for Intel-powered ultrabooks and tablets running Windows 8.
A finalist in MassChallenge 2013, Zet Universe is currently working on the first version of its product, tailoring the research prototype into a market-ready application. With experience from leading high-technology companies, the team came together to create Zet Universe. The team’s aspiration is to change the way people use Intel-powered ultrabooks and Windows 8 touch devices (primarily slates).
The app will provide a new dimension of productivity on touch devices. Zet Universe introduces a natural, touch-first, visual way to organize and share documents for mobile information workers.
“A year ago, we came together as a group of creators from Microsoft, Google, Microsoft Research, and beyond, because we needed better tools for visual information organization and sharing. Today we welcome the MSU Business Incubator, LETA Capital, and Altair.VC teams to help us build an innovative suite of mobile productivity software,” said Daniel Kornev, CEO at Zet Universe, Inc.
Investors are optimistic about Zet Universe. “Zet Universe may open a new era of user interaction with information, contributing to the general trend of radical changes in the human-computer interface. And there is no doubt that this skilled team of founders will be able to create a truly innovative product, so we are excited to support Zet Universe,” noted Sergey Toporov, Principal at LETA Capital.
“Zet Universe is like a visual Dropbox and brings innovation into productivity software. We believe in the concept and are excited to support the team,” said Kirill Klokov, Partner at MSU Business Incubator.
“Altair.VC has been supporting and consulting Zet Universe throughout their participation in the MSU Business Incubator program. They have made solid progress and have become one of only four Russian companies in the MassChallenge 2013 final. This proves the potential of the product and the team, so our decision to support Zet Universe came naturally,” emphasized Igor Ryabenkiy, CEO of Altair.VC.
A closed group of professionals from leading U.S. and Russian IT companies has already tried the private alpha version of the product and provided valuable input to the product team. Following the initial test deployments, the company will invest the funds raised in accelerating the development of the next release to better address the business needs of its first customers.
About Zet Universe:
Founded in 2012, Zet Universe, Inc. (www.zetuniverse.com) focuses on visual file organization and information sharing on mobile devices. Zet Universe offers a new information-management experience designed to increase the productivity of knowledge workers, primarily product managers and product designers.
About MSU Business Incubator:
The Moscow State University (MSU) Business Incubator was founded in 2010 as part of a university program to support innovative entrepreneurship. It is a subsidiary of Lomonosov Moscow State University and serves as a business accelerator for carefully selected startups (15 residents were selected from 323 applications). Alumni of the MSU Business Incubator have attracted over $3M of venture capital and created 129 jobs.
About LETA Capital:
LETA Capital is a boutique corporate venture fund founded by LETA Group, a Russian IT holding company with over $100 million in revenue. Annually, LETA Capital makes investments ranging from $5 million to $7 million in total; individual deal sizes range from $400K to $2 million. The fund is aimed at supporting innovative IT startups at the seed or early growth stage.
About Altair.VC:
Altair.VC is a seed fund investing in very early/seed-stage startups, primarily in Internet and mobile. The fund has extensive operational and investment experience in Russia, Europe, China, and the USA. Altair.VC takes a lead investor role, mentors startups, and provides operational support when necessary, but is also comfortable investing as part of a syndicate.
For more information, please contact:
Media contact: Pavel Kuzmenko, Sales and Marketing Director, Zet Universe, +79636107185, Pavel_Kuzmenko@zetuniverse.com
The ultimate Zet Universe user experience emerges only from the interplay of hardware and software.
Sign up at http://www.zetuniverse.com/signup
We are continuing our series of posts describing the Zet Universe interface language. In this post we will cover the basics of moving things and navigating inside the Zet Universe space.
Space: The final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: To explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
Today we will cover the basics of navigation in Zet Universe space.
As noted in the previous post, Zet Universe is a two-dimensional, zoomable, infinite space that plays a fundamental role in user interface interactions. By analogy with the real world’s Universe, this space contains everything in it. In the Zet Universe language we use “thing” to describe any living concept from the real world; a thing is located in the space.
Zet Universe is designed to be used with different input methods, including mouse, pen, and multitouch (for a start). There is a dilemma in designing interactions for multiple input methods: we can either try to optimize the interactions for each method, or use the same gestures across all methods. Both approaches have their advantages and disadvantages, and to understand them better we need to clearly distinguish them from each other. Hal Berenson, an ex-Microsoftie who until recently was a Distinguished Engineer at the company, wrote an excellent article on this topic, stating that three main attributes define the applicability of an input method to a given task:
These three attributes, density (how much information can be conveyed in a small space), precision (how unambiguous is the information conveyed), and how natural (to the way humans think and work) can be used to evaluate any style of computer interaction. The ideal would be for interactions to be very dense, very precise, and very natural. The reality is that these three attributes work against one another and so all interaction styles are a compromise.
The way navigation works in a two-dimensional, zoomable, infinite space depends heavily on the distance the user has to cover from the starting point to the destination. Zet Universe provides a simple dragging metaphor that is the same across all three currently supported input methods. This is simple and effective when navigating within one, two, or maybe three screens of the current position; however, a long-distance “jump” becomes tedious, as the user has to drag through many screens to reach the final point. This problem is solved by the so-called “Big Picture” view, a higher-level map where only the names of thing clusters are shown:
Infinite space at normal scale: all things are visible; we call this normal mode the “Infinite Space”.
Infinite space at semantic zoom scale: only group headings are shown; we call this mode the “Big Picture”.
Thus, the user can navigate with mouse, pen, or touch by pressing down (clicking, putting the stylus down, or tapping) on a part of the space free of things, moving, and then releasing (the mouse button, the stylus, or the touch gesture) to finish the navigation. This works in exactly the same manner in both modes, “Infinite Space” (the normal one) and “Big Picture”. The zooming gesture (mouse wheel for mouse, pinch-to-zoom for touch) ensures a seamless transition from the big picture to the detail view and vice versa. Note that we do not currently provide a simple way to zoom in or out with the pen.
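To make the mode switch concrete, here is a minimal sketch of how a zoom gesture could toggle between the two modes. The 0.25 threshold and the class names are illustrative assumptions, not our actual implementation:

```python
# Sketch: zoom gestures change a scale factor; below an assumed threshold
# the view switches from "Infinite Space" to the "Big Picture" map.

BIG_PICTURE_THRESHOLD = 0.25  # assumed scale below which only cluster names show

class ZoomableSpace:
    def __init__(self):
        self.scale = 1.0  # 1.0 == normal "Infinite Space" scale

    def zoom(self, factor):
        # Mouse wheel or pinch gesture changes the scale multiplicatively,
        # clamped to a sane range.
        self.scale = max(0.01, min(self.scale * factor, 10.0))

    @property
    def mode(self):
        return "Big Picture" if self.scale < BIG_PICTURE_THRESHOLD else "Infinite Space"

space = ZoomableSpace()
print(space.mode)   # Infinite Space
space.zoom(0.2)     # pinch far out
print(space.mode)   # Big Picture
```

The key point is that semantic zoom is just a rendering decision driven by the same continuous scale the pan-and-zoom gestures already control.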
In the old world of desktop environments, the typical way to select a thing on the desktop was point-and-click. In the modern NUI world it is simply a tap. We support both approaches to make the interface natural in both interaction modes:
But how can you select several things at once? In the old world you would just click in a free area and make a rectangular selection with your mouse. What about the NUI world? Your fingers are good enough to move things around, but drawing a free-form selection with a finger is hard; this task requires precision. Thankfully, Windows slates come with a pen (or stylus), and that is how we provide this functionality.
To select more than one thing, use the “Select Things” button in the Actions Menu:
Once clicked, the button turns green and you are now in “lasso selection mode”:
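Conceptually, lasso selection closes the pen stroke into a polygon and selects every thing whose position falls inside it. Here is a minimal sketch using the standard even-odd ray-casting test; the data shapes are illustrative assumptions, not our real code:

```python
# Sketch: a free-form pen stroke becomes a polygon; things whose centers
# fall inside the polygon are selected.

def point_in_polygon(pt, polygon):
    """Even-odd rule: count how often a ray cast to the right crosses edges."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def lasso_select(things, stroke):
    """things: {name: (x, y)}; stroke: list of points from the pen gesture."""
    return {name for name, pos in things.items() if point_in_polygon(pos, stroke)}

things = {"doc": (1, 1), "photo": (5, 5)}
stroke = [(0, 0), (2, 0), (2, 2), (0, 2)]  # lasso drawn around "doc" only
print(lasso_select(things, stroke))  # {'doc'}
```

This is why the pen suits the task: the polygon test is only as good as the precision of the stroke that produced it.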
Selecting Things: Drawer
Once thing(s) are selected, the drawer part of the interface appears:
The drawer plays the role of a visual clipboard, helping the user know what is selected right now. This part of the system is heavily influenced by real-time strategy games, where selected units are shown in a “drawer” for exactly the same purpose.
Navigation in both modes is the same; the process of moving things over short and long distances is not. Why?
Moving a thing over a short distance is very similar to space navigation: point-and-click (tap, stylus down), drag, release the mouse (pen, touch). Done. However, when the space contains a sufficiently large number of things, the need for a better metaphor for moving things over long distances becomes more pressing. To find one, we started research in several directions:
- We wanted to find an easy way to transfer things that the audience already knows,
- We wanted the metaphor itself to be easy,
- We wanted to make sure it fits the NUI vision of Zet Universe and modern NUI trends (interaction happens directly with content).
One of the easiest ways to transfer things across a long distance comes from real-time strategy (RTS) games.
RTS games initially borrowed some ideas from desktop environments, namely the “click and drag” technique for moving units around. However, moving things has a different meaning in these games, and the pattern “click on a unit, move across the map, right-click to send the unit to the new location” quickly became the standard there.
However, in the Natural User Interface paradigm the user expects all content to be directly interactive; specifically, the user can drag content with her fingers. At the same time, as noted above, it is annoying to drag the same thing over a long distance, so we needed a compromise.
Below is the approach we have taken based on these ideas and considerations.
Short Distance – “Tap-and-Move”
To move a single thing:
- tap on it,
- directly drag it within the boundaries of the screen, as precisely as the input method permits,
- leave it at the desired place.
Done; the thing is moved to its new location.
To move several things:
- click the “lasso selection” button,
- draw a free-form line around the things as described above,
- make either a right-click or a long tap at the destination.
Done; the group of selected things is “teleported” to the new destination, with their relative positions preserved. We treat positions very carefully, because they carry meaning for our users.
Long Distance – “Teleportation”
The same approach is used for both one and many items in the case of a long-distance transfer:
- select one or more things,
- get to the new destination using a series of pan-and-zoom operations,
- make a right-click or a long tap at the destination point, and all selected things are “teleported” there.
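The teleportation move above can be sketched simply: the group jumps so that its centroid lands on the destination point, while each member keeps its offset from the centroid, which is what preserves relative positions. Function and data names below are illustrative assumptions:

```python
# Sketch: "teleport" a selected group to a destination while preserving
# the relative layout of its members.

def teleport(things, selected, destination):
    """things: {name: (x, y)}; selected: names to move; destination: (x, y)."""
    xs = [things[n][0] for n in selected]
    ys = [things[n][1] for n in selected]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)  # group centroid
    dx, dy = destination[0] - cx, destination[1] - cy
    for n in selected:
        x, y = things[n]
        things[n] = (x + dx, y + dy)  # same offset applied to every member
    return things

things = {"a": (0, 0), "b": (2, 0), "c": (1, 2)}
teleport(things, ["a", "b", "c"], (10, 10))
print(things["b"][0] - things["a"][0])  # spacing between a and b is still 2.0
```

Because every member gets the same offset, distances and angles inside the group are untouched; only the group's position changes.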
So, today we discussed how the user can navigate her Zet Universe of information, select one or many things, and move them over a short distance (within the screen boundaries) or a long one.
Now, if only we could teleport to a new geographical position on Earth with the same simplicity, and in almost zero time, as you can move information in your Zet Universe!
Zet Universe is designed to be a transparent, ubiquitous environment in which the user is left with content and the actions she can apply to it, with the complexity of the underlying technologies removed.
In this series of blog posts we will describe the interface language of Zet Universe.
A user interface uses language as the medium that translates the user’s intent into actions recognizable by the underlying system.
Zet Universe provides such a language by using the following metaphors:
“Space” is the basic element of the interface, as everything else happens inside it. It is a two-dimensional (2D) zoomable space; we call it the “infinite space”. It is impossible to see absolutely everything at once, because “nobody can embrace the unembraceable”, as Kozma Prutkov said; this gives us a working metaphor of an infinite space with no practical limit on the number of elements the user can place in it (the only limit is disk space). The space is designed to be friendly for navigation with both mouse and touch.
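One way to picture the infinite space is as world coordinates viewed through a camera defined by a pan offset and a zoom scale: panning and zooming change only the camera, never the things themselves. A minimal sketch under that assumption (the camera model and names are illustrative, not our actual implementation):

```python
# Sketch: things live at world coordinates; a camera maps them to screen
# coordinates. Pan moves the viewport, scale zooms it.

class Camera:
    def __init__(self, pan=(0.0, 0.0), scale=1.0):
        self.pan = pan      # world coordinate shown at the screen origin
        self.scale = scale  # zoom factor: >1 zooms in, <1 zooms out

    def world_to_screen(self, p):
        return ((p[0] - self.pan[0]) * self.scale,
                (p[1] - self.pan[1]) * self.scale)

    def screen_to_world(self, p):
        return (p[0] / self.scale + self.pan[0],
                p[1] / self.scale + self.pan[1])

cam = Camera(pan=(100.0, 50.0), scale=2.0)
print(cam.world_to_screen((110.0, 60.0)))  # (20.0, 20.0)
print(cam.screen_to_world((20.0, 20.0)))   # (110.0, 60.0)
```

Because the space is only ever rendered through the camera, it can be practically unbounded: things far outside the viewport simply are not drawn.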
The second basic element is a “thing”. We live surrounded by things. These things can be physical objects from our environment, products of our imagination, or, finally, digital objects created as elements of the virtual world built by computer applications, websites, and games. The things we deal with, from documents and other files in our folders to users on Facebook and emails in our inboxes, feel natural to us. We constantly interact with these things, switching back and forth between the physical world and the virtual one, referencing them in our endless collaboration with friends and work colleagues. The patterns of interacting with digital objects were brought over from the physical world by Internet and computer pioneers, and these patterns are now tightly interwoven, as we constantly transfer experiences between our real and digital lives.
Instead of the computer’s strict notion of classes and instances, things in Zet Universe are more familiar to the user; as in the Metro Design Language, things are content.
In the current alpha development milestone there are several kinds of things the user can add to her space in Zet Universe*:
- Topics,
- Files,
- Web Pages.
(*We plan to add more kinds to the system as we move forward with Zet Universe development.)
Topic is the central element in the Zet Universe kinds map; it helps define a part of the area, giving it a specific meaning according to the user’s need. It is conceptually similar to the name of an area on a geographic map:
An example of topics, or labels, on Bing Maps.
An example of a topic on the Zet Universe infinite space.
File is the second foundational element in the Zet Universe kinds map. It provides interoperability with existing information stored in the various information silos across the user’s devices, and enables compatibility with existing applications that use individual files as information containers.
A file is represented with a thumbnail and its display name. Any Windows application that uses the standard Win32 APIs to work with files can, by design, work with files stored in Zet Universe, including creating, editing, and deleting them. This means that if the user adds an image, document, or PDF file into Zet Universe, she can safely open it from there, edit it, and save it; its contents will be kept inside the system.
Web page is the third foundational element in the Zet Universe kinds map; it is a text-only replica of an existing web page, acting as the next step beyond browser favorites. When the user pins a web page to her infinite space, she can later recall it just as she would a link stored in her browser’s favorites or bookmarks.
Actions are the third metaphor of the Zet Universe language. In the philosophy of language, actions performed through language are called “speech acts”. Speech acts are the way language is used to accomplish things: asking questions, making requests, taking positions, making commitments, and so on. In Zet Universe they are implemented as gestures and other forms of direct manipulation. An action, therefore, is the binding between a speech act, a kind, and a gesture. Each action is recorded by Zet Universe with its specific meaning (“remember”, “pin”, “create”, “link”, “open”, “forget”, etc.).
Actions & Gestures
This is the foundational gesture used to add various content types to Zet Universe. To activate it, the user performs either a double click with a mouse or touchpad, or a double tap with a finger or pen, on any part of the space free from existing content.
Create, Remember, Pin, Select, Capture
These actions are available from the Space Menu, enabling the user to add new content to Zet Universe or operate on it. We chose different words for these actions, based on the corresponding kinds, to focus the user’s attention on their different meanings: create topic creates a new topic on the surface of the infinite space; remember file(s) lets the user choose one or several existing files from her computer so Zet Universe remembers them; pin web page lets the user naturally pin an existing web page to her infinite space; capture photo lets the user quickly capture a photo in the context of the current activity; and, finally, select things helps the user make a lasso-like selection of things (see the second part of this blog series).
Link is a property describing an explicit relationship between any two things, helping the user define her own ontologies. To create a new link between two things, the user performs one of the gestures described below, depending on the distance between the things.
For a small distance, the user can simply drag and drop one thing onto another.
For a large distance, the user takes a two-step approach similar to the one found in real-time strategy games:
- The first step is to select one thing on the surface, either with a left mouse click or a touch tap on the object. Once the first step is done, the “drawer” area is populated with short information about the selected thing:
- The second step is to make a right-click or perform a touch-and-hold gesture on the second thing. A dialog will appear asking whether the user wants to link the selected things together.
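The two-step link gesture can be sketched as a tiny state machine: the first tap fills the “drawer” with the source thing, and the second gesture (right-click or long tap) proposes the link. Class and method names here are illustrative assumptions, not the actual Zet Universe API:

```python
# Sketch: two-step linking. primary_tap selects the source (and would
# populate the drawer UI); secondary_tap on another thing creates the
# link, subject to the user's confirmation dialog.

class LinkGesture:
    def __init__(self):
        self.drawer = None  # currently selected (source) thing
        self.links = []     # created (source, target) pairs

    def primary_tap(self, thing):
        # Step 1: left click / touch tap selects the source thing.
        self.drawer = thing

    def secondary_tap(self, thing, confirm=True):
        # Step 2: right-click / long tap on the target; `confirm` stands
        # in for the user's answer to the link dialog.
        if self.drawer is not None and thing != self.drawer and confirm:
            self.links.append((self.drawer, thing))

g = LinkGesture()
g.primary_tap("report.pdf")
g.secondary_tap("Project X")
print(g.links)  # [('report.pdf', 'Project X')]
```

The point of the two-step form is that it works identically at any distance: the selection survives an arbitrary amount of panning and zooming between the two taps.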
To Be Continued.
We have updated the screenshots of our core product features. We have also uploaded the first screenshot of our “Visual Search” feature, which highlights search results in the user’s information space as she types the search query.
Take a look and give us your feedback!
Our particular favorite is the Visual Search. :)
Also, we usually do not reblog posts, but this is a special case.
The Samsung Series 7 Slate is the first slate that truly deserves to be a base context-aware device, with a range of sensors, including a light sensor, accelerometer, GPS, and compass (the accelerometer and GPS are accessible via the Windows Sensor and Location Platform), ubiquitous Internet connectivity including WiFi and 3G, a normal desktop-class Intel Core i5 (Sandy Bridge) CPU, an 8-point multitouch screen, and an active digitizer. This device is capable of running the full Zet Universe product (once it ships). More details on the available configurations are in the original blog post.
Originally posted on Kurt Shintaku's Blog:
This is not true. The Series 7 Slates are technically different devices from those handed out at BUILD. Here’s a chart that goes over the different technical specifications of each:
(Taken from http://www.samsung.com/global/windowspreview/)