Remarks by Bill Gates, Chief Software Architect, Microsoft Corporation

BILL GATES: Good morning. It is very exciting to be here and to talk about the great advances taking place in hardware and software. Microsoft’s role is to provide a software platform that allows everyone who is building applications to build far more powerful applications. It was very inspiring to hear from the doctor about the Rolltalk application and the difference it has made in his life. Stories like that remind me that we are in the most interesting field that there is. The pace of innovation is as fast today as ever; the opportunities for great advances are clearer than at any time in the past.

First, let me talk about the hardware foundation that we build on. We have had the benefit of exponential improvement in recent decades. Moore’s Law was a prediction that we would see a doubling in performance every two years and, in fact, the microprocessor industry has done even better than that. Performance is something that we almost take for granted at this point. Particularly in the first half of next year, as we move from 32-bit to 64-bit systems, we will have a very smooth transition, with total binary compatibility. This will provide the ability to mix 32-bit and 64-bit: a very simple recompilation for any application that you might want to run using the full 64-bit address space. This will really bring us to the frontier of computer performance, whether single-system performance or the ability to group together many systems in a scale-out approach. Even the most expensive mainframe will not deliver the performance that industry-standard hardware running Windows will deliver. That is a wonderful milestone for us, no longer requiring people to buy expensive systems simply to achieve the best possible performance.
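The compounding described above is easy to check: a doubling every two years implies a 32-fold improvement over a decade. A minimal sketch (purely illustrative; the function name is invented):

```python
# Moore's Law as stated above: a doubling in performance every two years.
# This helper computes the implied improvement factor over a span of years.

def improvement_factor(years, doubling_period=2):
    """Performance multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(improvement_factor(10))  # 32x over a decade
```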

Hardware Innovation

However, the processor has not been the only place that has provided new opportunities for software innovation: the advances in network speeds, where, within a corporate network, we can assume even gigabit-level performance; the availability of wireless Wi-Fi in most places where business is done, and in wide area networks; and the advance in storage capacity, with costs coming down even faster than Moore’s Law-type improvement, allowing us now to think of document and image libraries that would have been impractical in the past. We are very excited to see the pace of adoption of these technologies: new graphics chips for better visualization; larger screens, moving up to 17-inch, 19-inch or multi-panel LCD displays; and RFID for tagging information and having it available to track. I think it is fair to say, therefore, that hardware is in no way holding us back.


We would like to see broadband adoption increase even more quickly. Some countries have moved ahead on that; nevertheless, over the next five years, in all developed countries, the majority of people will be connecting through broadband and interacting with information through many devices:

  • PCs at work and at home,
  • portable PCs,
  • Tablet PCs, and
  • pocket devices such as the phone, evolving from simply a voice device to a data device.

Having all these devices work very well together and be secure and up to date are very significant challenges.

Remarks by Bill Gates, Chief Software Architect, Microsoft Corporation – 2004

Not only are advances taking place in hardware, but also in software. In terms of Office, we are taking the ideas of meetings, document sharing and business intelligence, and bringing them to a new level. Investment in R&D by us and our partners is at a record level; we alone have spent over $6 billion on R&D, which is applied primarily to the Windows platform and to Office. Interestingly, however, we see the pace of improvement in usage being somewhat faster in the consumer space, with developments in photography, web searches and music, and advances in gaming applications, being at their highest ever pace.

In the business space, which is our biggest focus, there are some great breakthroughs: the acceptance of XML as a standard, and the acceptance of the Web-service protocols for connecting applications and exchanging XML data. Those promise finally to allow any piece of software running on any system to connect to any other piece of software. The sorts of things we are doing in Office will bring an entirely new level of productivity, breaking down the boundaries of distance and different organizations.

However, we still find that the pace of adopting these improvements is not as fast in the business space, and there are several reasons for this. First, there is a need to make the environment easy to secure. Second, there is a need to make it far less complex. Let us focus here on complexity, which is really the theme of my comments: how we can take software and use its magic to eliminate much of this complexity. The fact is that this is exactly what software is for. Many of the manual things involved in setting up systems and monitoring them can be eliminated; much of the effort involved in trying to work out how to map your workload onto different systems can be done in a very automatic way.

Simplifying IT through Software

We have set a very ambitious goal for ourselves in terms of software simplifying IT; not just one aspect of lifecycle, but bringing all the different elements together, and making it so that, when you develop an application, you can put in information about how that application should run and the resources it requires. This makes it easy to bind that application into the operations environment, and for information workers to see the state of transactions and how well the applications are running. It also makes it easy to think in terms of high level model diagrams, and check to see what is happening, not only with the systems, but with the business processes themselves.
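The binding described here – an application declaring the resources it requires, so that operations tooling can place it automatically – can be sketched roughly as follows. This is a hypothetical illustration in Python; the field names and host data are invented, not Microsoft's actual model format:

```python
# A hypothetical, minimal sketch of a declarative application model:
# the developer states what the application needs, and an operations
# tool checks a host against those requirements before deployment.

app_model = {
    "name": "OrderService",
    "requires": {"cpu_cores": 2, "memory_mb": 1024, "os": "WindowsServer2003"},
}

def can_host(host, model):
    """Return True if the host satisfies every requirement in the model."""
    req = model["requires"]
    return (host["cpu_cores"] >= req["cpu_cores"]
            and host["memory_mb"] >= req["memory_mb"]
            and host["os"] == req["os"])

hosts = [
    {"name": "srv01", "cpu_cores": 1, "memory_mb": 2048, "os": "WindowsServer2003"},
    {"name": "srv02", "cpu_cores": 4, "memory_mb": 4096, "os": "WindowsServer2003"},
]

eligible = [h["name"] for h in hosts if can_host(h, app_model)]
print(eligible)  # srv01 lacks the required CPU cores; srv02 qualifies
```

The point of the sketch is only the direction of the binding: the requirements travel with the application, and the operations environment evaluates them, rather than an administrator doing the mapping by hand.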

Historically, development, operations and business analysis have been different worlds. We have not had the ability to express, in a high-level form, the things that would connect all of those together. It is through these high-level models, built on top of the web service standards, that we can really change the world of IT complexity. I think this is not only a very exciting thing, but also very necessary. After all, historically, most IT budgets made room for innovation because hardware and communications prices came down. Although that has continued, those have become a small enough percentage that, in order to make room for great innovation such as new applications, new processes and Tablet PCs, or the ability to understand customers better, and to track and forecast in a better way, we have to take a lot of the IT costs out so that we can free up budget for the new world. That means eliminating complexity. Of course, software is the magic element here: having software that models system resources and application needs, and makes it far simpler to take advantage of the hardware resources that are available.

Microsoft Operations Manager (MOM) and Virtual Server 2005

We have a few things that are very focused on this, and which are being announced today. In particular, we have our MOM (Microsoft Operations Manager) 2005, which is our key management software – a major new version of which is available today. We also have our Virtual Server product. This is the first time we have made this capability available. Over a year ago, we bought a company that worked in this area called Connectix, and began to study what our customers wanted in terms of being able to break workloads down and use different system resources in a very dynamic way. In order to provide this, we created the Virtual Server product.

These products deliver on a vision of the IT lifecycle and the simplification that people expect, making it possible to no longer think in terms of individual systems that are brought together through labor, but to have MOM provide an integrated view of what is happening on all these different systems. There is a big embrace of standards here: management standards that have been advancing in a very exciting way; the new methodology of using remote capabilities and web service protocols to connect all these things together; and a belief that we can make things far more automatic.

In order to understand why we are so excited about this, we should take a look at these products in action. I will now ask Bill Anderson, group product manager at our Windows and Enterprise Management Division, to give us a quick look at the new products.

BILL ANDERSON: I would like to show you that, once again, we have painted Copenhagen with a brush of manageability. Last year, we launched Systems Management Server (SMS) 2003 at our worldwide launch event here, and this year we have an opportunity to launch MOM 2005 and Virtual Server 2005. I would like to take a few minutes to show you some of the things that you can see in MOM 2005, and to walk you through a scenario where using these technologies together could help you solve some real business needs.

The user interface is the first change that you will notice, with a very task-based approach. We have found that basing it on Outlook makes it simple and easy for an administrator to navigate quickly through tasks. The second thing you will notice is the word “state.” You are probably very experienced at managing many distributed enterprises. We want to take those distributed enterprises and enable you to think of them based on roles and on health. By using the “state” view, we can do that; we can simply and easily display the health of the system and its distributed applications so that you can really understand the situation at a single glance.

Through the interface, it is possible to drill down into details on that item. I can view the typical alerts that I am used to being able to manage. However, the real power of tools like MOM is the ability to encapsulate knowledge within the application. Down below, under Alert Details, it provides some base-level descriptions; however, if I expand the Product Knowledge tab, it allows me to see detail that has been embedded inside the shipping product. For example, this is the Exchange Management Pack; the data that has been built by our program managers on the Exchange team tells you how best to use this particular technology. If I scroll down, it even has things such as Suggested Registry Configuration. You probably have your own standard operating procedures that you follow when you see an alert such as this, so you can encapsulate all of that as well. It stores knowledge – both ours and yours – through this process, on an end-to-end basis.

I would now like to talk about using MOM and Virtual Server to solve a business problem that you face on a day-to-day basis, and we will look at a server consolidation scenario. We will use MOM as a preparatory tool to understand which servers should be consolidated, and we will use MOM plus Virtual Server to do this consolidation. I will first open up my MOM reports; I have created some custom reports in MOM, which is something that all of you do. It is very extensible and the data is very open. In this case, I have a report called “candidates for virtualization.” I have taken a group of servers from some of my geographies, and have collected data such as CPU usage and memory utilization. I then look at them on a simple grid to understand which servers are underutilized, and which are over-utilized. As you can tell, my Florida servers seem to be quite underutilized, so they become a good candidate to be consolidated; in this way, I can reduce the cost of software management going forward.
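The report logic just described – a grid of utilization figures from which underutilized servers fall out as consolidation candidates – might look something like this sketch. The server names, figures and thresholds are invented for illustration:

```python
# Hypothetical sketch of a "candidates for virtualization" report:
# flag servers whose average CPU and memory utilization both fall below
# a threshold, marking them as candidates for consolidation onto one host.

servers = [
    {"name": "FL-WEB01", "region": "Florida", "cpu_pct": 4, "mem_pct": 11},
    {"name": "FL-APP01", "region": "Florida", "cpu_pct": 7, "mem_pct": 15},
    {"name": "NY-SQL01", "region": "East Coast", "cpu_pct": 68, "mem_pct": 81},
]

def consolidation_candidates(servers, cpu_limit=20, mem_limit=25):
    """Servers under both utilization thresholds are underutilized."""
    return [s["name"] for s in servers
            if s["cpu_pct"] < cpu_limit and s["mem_pct"] < mem_limit]

print(consolidation_candidates(servers))  # the two idle Florida servers
```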

I am now going to return to the MOM console and select a geography; in this case, it is my North American East Coast region. I will show you another new piece of MOM, which is the diagram view. It allows me to render the topology of my distributed services, and not only to show them topologically, but to show the relationships between them. I have my two geographies displayed here.

The last thing we want to be able to do is to take actionable data from MOM and execute on it. In this case, we have created a Task view as well, where we have embedded some base tasks, and you can add your own in the Task pane view. I am going to select one of these servers and virtualize it for a consolidation scenario. I will go to my Virtual Server tasks, and choose Migrate Physical Server to Virtual Machine. When I begin this process, it uses some of the scripting that was released in the Virtual Server Migration Toolkit (VSMT), also available this week, to virtualize this server remotely. Normally, for server consolidation, you would send a team of technicians across your distributed geography, which is a very costly process. Now, by using MOM 2005 as a task launch pad, using the scripting in the VSMT, and then being able to use Virtual Server as a consolidation tool, you have the ability to do this kind of virtualization remotely, without having to incur travel or operational costs.

On a normal server, this would take 30-45 minutes to execute. For the sake of condensing this for today’s demonstration, we have run the automated process but have not included any services to bring back: it is running through the process, but we are not pulling a Web service or directory service across. The task has now just completed and has brought that server back into the Virtual Server environment, and it is rebooting the virtual machine for me.

I will now go back and look at where we left this by refreshing my diagram view. You will see that the topology updates for me. Not only is it going to update the topology that I have in place, but it will show me the new relationship between the virtual session that I have just moved across and the virtual host session, because of the management packs that are in place today. Hopefully, by using technologies like MOM and Virtual Server, you will be able to manage your software and hardware more efficiently and more cost-effectively. Thank you.

Automated Processes

BILL GATES: Let me talk about another element of complexity in managing an IT environment, which is keeping software up to date. The importance of this has really been highlighted by the security issues over the last few years. Many system updates are critical in nature; that is, the need to reach the systems that run the software before some malicious code exploits some aspect of that system. There is no reason why the update should not propagate faster than the malicious code; we simply need to have the infrastructure and the right classification.

Updates come in many different categories; in most cases, it is necessary to take time to install an update on your network. If it is a new version of the application, you may want to synchronize that with some other work that you are doing. You may want one group of users to have it first and use that as a pilot, before you roll it out to other people. Every aspect of software updating should be very automated. You should be able to do it sitting at a central console, not only setting up the commands, but also being able to track exactly what is happening.
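The pilot-then-broad sequencing just described can be sketched as a tiny scheduling model. This is a hypothetical illustration in Python; group names and machine IDs are invented, and a real console would of course drive actual deployments rather than return a list:

```python
# Hypothetical sketch of staged update rollout: a pilot group receives
# the update first, and the remaining machines follow only once the
# pilot has reported success.

machines = {
    "pilot": ["pc-001", "pc-002"],
    "broad": ["pc-100", "pc-101", "pc-102"],
}

def plan_rollout(machines, pilot_succeeded):
    """Return the ordered deployment waves a central console would execute."""
    waves = [("pilot", machines["pilot"])]
    if pilot_succeeded:  # broad wave is gated on the pilot outcome
        waves.append(("broad", machines["broad"]))
    return waves

waves = plan_rollout(machines, pilot_succeeded=True)
print([name for name, _ in waves])  # pilot first, then the broad wave
```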

Microsoft has been in the business of providing these capabilities through our SMS product for many years. SMS 2003 has been a great success; the majority of our enterprise customers now use this to carry out deployment, representing over 16,000 companies connected to more than 10 million different devices. We also have a version for smaller customers, simply connecting their systems very directly to the updates that we provide. We call that Software Update Services (SUS); its successor, Windows Update Services, is going into public beta today and will be quite advanced, meeting the needs of small and many medium-sized organizations.

For SMS, we have two very important additions; these are new capabilities that we have called feature packs, which are fully supported add-ons. The Device Management Feature Pack enables you to connect and manage things like Pocket PCs, Smartphones and other Windows CE-type devices.

The OS Deployment Feature Pack, installed in the same framework in which you manage software updates, enables you to manage operating-system updates and firmware updates. All these things can be brought together and synchronized in a very simple fashion. As I have talked to customers about SMS over the past year, I would say that the OS Deployment pack has been what they have most frequently requested. We think this is one of the processes that, for the IT department, has carried fairly high overhead. Even with the high overhead that people have had in software updating, they have not been able to achieve the degree of compliance and the speed of updating that they want to achieve.

Therefore, with the magic of software, we think we can make dramatic headway in both of those areas, in terms of reducing the time involved and achieving update compliance, so that you literally sit at a console and pick the timeframe that you want, insisting that the systems have those updates. The area of agile software delivery is something that we believe we are turning into reality.

I would like Bill Anderson to return and show you a demonstration, which I think is the best demonstration ever done of any management-related software. This is our Zero Touch Provisioning.

BILL ANDERSON: Today, we announced the OS Deployment Feature Pack and the Device Management Feature Pack for SMS. These will be available to you for download today, so we are really excited about that. As we start thinking about OS Deployment, SMS has always been a great preparatory tool, but it has lacked that one piece – the ability to deliver an image to an existing machine, which is what we are delivering to market today.

When we think about OS deployment, we think about two different core scenarios. One is a user self-provisioning scenario, where you may have a group of users who are very empowered, or users who have very inconvenient schedules, or users for whom a top-down “everyone installed at once” approach does not work. For other scenarios, you may want to be able to take a group of 100 or 1,000 machines and provision them in an automated way. We will walk through both of those scenarios today.

In the first one, I am an empowered user working in a local bank called Woodgrove Bank. This is my work PC, which is running NT 4 Workstation. My administrator has given me the ability to do the upgrade in my own timeframe. Because we use e-mail so frequently, my administrator has sent me a simple e-mail, indicating that I have already been approved to do this upgrade. He has given me some details, so I understand that I am going to submit something that will take a couple of minutes, and that the process will actually take 30-45 minutes. He has also given me the link to a SharePoint portal site, which we use to provide a lot of services to different employees in our organization. This is based on your business logic and business rules. There is a list of different services; today, we are most focused on “upgrade my computer.”


I select Upgrade My Computer, which goes to Active Directory, and it makes note of machines that I would have the ability to upgrade. By selecting my client machine, it will also prompt me that it will move me to XP SP2. By choosing “Next”, it gives me more information. Because this is a longer process for a user, we want to make sure that they are very well informed of the processes they are going through. At this point in time, I submit my request. I have already been pre-approved for this, but, in a large, complex organization, you may need a tier of administration to include a pending approval process. It might be somebody working in purchasing or procurement, to ensure that you own the software, or an administrator, to ensure that your 486 could really run XP SP2 if you chose to do so. However, since I have been pre-approved, in a couple of minutes…

… instructions will come back to me. This would actually kick the process off. My state and data would be preserved and my machine would be upgraded.

That is the first scenario. However, a lot of you have a lot of machines that you want to be able to take care of, and hundreds or thousands of PCs that you want to be able to migrate. We thought about how we could simulate hundreds or thousands of PCs for you to migrate over the course of this demonstration. We thought that we might take the show’s network and simply flatten it and rebuild it completely during the course of my keynote speech. For those of you who know Mr Cheeseman[?], that did not go over very well. Therefore, Andrew decided that he wanted to encourage us to choose a different path. Hence, we have chosen something a little simpler and easier to do that, it is to be hoped, really gets the message across for you.

If you take a look at the right-hand side of the auditorium towards the front, you will actually see a curtain pulling back. Back there we have 100 PCs that are currently set up running Windows 2000 and Windows XP Service Pack 1. Those PCs have a little user data and a little user state. You will notice that they have different-color bitmaps; the goal of the bitmaps is for you to understand that we are actually migrating state as part of the overall process. During the course of the keynote speech, our goal is to back up the user state and data, flatten those devices, bring them back up to Windows XP SP2, restore user state and data, and conclude all of that in the course of eight to 10 minutes or so.
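The four-phase sequence just described – back up state, flatten, reimage, restore – can be sketched as a toy model. This is purely illustrative Python (a real deployment would invoke the SMS OS Deployment Feature Pack, not this function); it only models the ordering and the fact that user state survives the reimage:

```python
# Hypothetical sketch of the zero-touch sequence: back up user state and
# data, flatten the machine, lay down the new image, then restore state.

def zero_touch_migrate(user_state, new_os="Windows XP SP2"):
    """Reimage a machine while preserving user state and data."""
    backup = dict(user_state)        # 1. back up user state and data
    machine = {"os": new_os}         # 2-3. flatten and install the new image
    machine["state"] = backup        # 4. restore user state and data
    return machine

result = zero_touch_migrate({"wallpaper": "blue", "documents": 12})
print(result["os"], result["state"]["documents"])
```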

That is what we are going to try to do. The goal for SMS’s OS Deployment Feature Pack is to make sure that it is very tightly integrated with what you do with SMS today. Let me show you just how easy it is to set up inside of SMS. For those of you who are familiar with the SMS user interface, you will notice the simple Microsoft Management Console (MMC) snap-in; we create a package and a program just like any other piece of SMS software. This allows you to do this particular set of tasks without having to re-skill your administrators. As this is integrated with SMS, you get the scalability, reliability and hierarchical nature that you are used to using to move these pieces around. It is a simple program that we have created.

One of the things that is interesting about OS deployment is that it has some unique qualities when it comes to customization: things such as licensing, Windows configurations and network configurations, which many of you spend a lot of time customizing with scripts today. We have nested those inside the user interface so that we can automate that for you, so that you do not need to be script geniuses. This will solve about 80 to 90 percent of the needs you have for OS deployment right there. For those of you who have some advanced needs to get through, we have built it in a modular way so that you can use simple scripting, whether it be Visual Basic Scripting (VBS), batch files and so on, nested in each of the modular components that are there.

Enough about that; I want to light the wall on fire. Normally, you would schedule this for midnight or whatever the right time is for your particular users. As you can imagine, we probably were not prescient enough to know that at 10.52 a.m. we would want that to kick off. Therefore, what we have done is that we have kicked off the [?] SMS and it is just waiting for a set of instructions from me before it goes. Say a prayer to the demonstration gods, as there is no backup for this one. I am going to execute this script.

You will notice that it has automatically kicked those systems off. There are a couple of other color patterns for you to note: the Windows 2000 machines are the red and blue bitmaps, and the XP machines are the yellow and green bitmaps. The first process it kicked off was state migration. Notice that the XP systems are actually running faster; that is again another reason to think about XP. What we are doing is backing up state and re-imaging these machines remotely, without touching them, and we are going to be able to do this in about eight to 10 minutes. You will say, “Oh, my gosh.” Allow me to give you one caveat: we are not migrating the significant number of applications that you would normally have in your enterprise, and we are not backing up and restoring gigabytes of data as you typically would for your enterprise customers. That is how we get it into 10 minutes instead of 30 or 40 minutes. Either way, the goal is that, with the technologies we have just shipped today, you have all the tools at your disposal to be able to migrate hundreds or thousands of PCs at a really low operational cost.

Thanks again, Bill. Watch the wall.

BILL GATES: The challenge to be able to do this automatically was laid down before us several years ago. Therefore, it is fun to be here and see it actually working with shipping products. That is a real milestone. I want to give you one example of a customer who is using our updating infrastructure, as well as our management infrastructure, to deal with a very complex scenario. The customer is the Federal Department of Foreign Affairs (DFA) in Switzerland. They are a challenging case because they have 156 embassies and consulates spread all over the world. Of course, they have quite a wide variation in network connectivity to those different locations, including cases where the network may go down for a period of time and then come back up later.

Therefore, you have to be very smart about how you use the network bandwidth and be able to track and let them know exactly what has and has not been done. They have over 3,000 PCs and 500 servers worldwide. We sat down with them and went through the idea of moving towards a single domain, which was a great simplification. They had one in each of the embassies and therefore about 150 went down to one. We then went through building this foundation that they can track and update their entire environment. Hence, the savings that they were able to identify were worth over $1 million a year and yet the quality of what they are able to achieve with this new approach is actually higher than with the previous approach. I think it is a good example of how the magic of software really can step in and deal with some of the complexity that we have here.

Identity Management

Let me now talk about another significant source of IT complexity: identity management. As we have looked at our customer environments, they have a lot of applications that relate to identity, for example payroll applications and file-system permissions. All the different systems they have provide different user interfaces and different approaches for group identity and individual user privileges. Often, this complexity shows through to end-user effectiveness, forcing users to remember different passwords or to wait until things have been brought into compliance. Hence, for a new employee or for somebody whose privileges have changed, it is not very immediate to get that put into effect.

Also, we now have a lot of cases where people are trying to break into identity systems. The term given to this is “phishing.” Over 50 million people have received fraudulent mail that attempts, by using Web sites that look legitimate, to have you come in and provide credit card numbers and things of that nature. If we have the identity system set up properly, we should be able to authenticate exactly which sites you want to give that information to and prevent those phishers from ever being successful.

We also have, from many companies including, probably, everyone here, the need to follow the new privacy directive. That means really looking at what information we have on different systems, bringing that information together and making sure that it is very secure. This has become extra important.

For many IT systems, we have Active Directory. Active Directory has been very successful. The majority of our customers have deployed the latest Active Directory and most of the rest of them are in the process of doing that. It is a key foundational piece.

We have an update coming next year to Windows Server, called R2, which has some significant updates to the Active Directory capabilities. We have talked about Web single sign-on and federated identity as being two substantial advances that come with that Windows Server release. Those are things that people have been asking about in order for these scenarios to work across different organizations.

We also believe in connecting Active Directory to these identity systems, and we call that our Identity Integration Server. That is where we can read and write information to, for example, HR systems and other directory systems. We can connect to UNIX directories, NetWare directories and over 20 different systems. We believe that the idea of having a layer that connects the information between the different identity systems is very important. Although today only a few thousand customers are using this metadirectory approach, we believe that we can bring it into the mainstream, make it very practical for people, and show it to be a solution that reduces the complexity they have in dealing with identity systems.
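The metadirectory layer described here – reading identity attributes from several connected systems and producing one merged, authoritative view – can be sketched as follows. This is a toy illustration in Python; the system names, attributes and precedence rules are all invented:

```python
# Hypothetical sketch of a metadirectory: merge a user's attributes from
# several connected identity systems, with a per-attribute precedence
# rule deciding which source is authoritative.

hr_system = {"jdoe": {"title": "Analyst", "dept": "Finance"}}
ad_directory = {"jdoe": {"email": "jdoe@contoso.com", "title": "Intern"}}

# Precedence: HR is authoritative for title/dept, the directory for email.
precedence = {"title": "hr", "dept": "hr", "email": "ad"}

def merge_identity(user, hr, ad, precedence):
    """Build the merged view of one user across both connected systems."""
    sources = {"hr": hr.get(user, {}), "ad": ad.get(user, {})}
    merged = {}
    for attr, preferred in precedence.items():
        other = "ad" if preferred == "hr" else "hr"
        # Fall back to the other source if the preferred one lacks the attribute.
        value = sources[preferred].get(attr, sources[other].get(attr))
        if value is not None:
            merged[attr] = value
    return merged

print(merge_identity("jdoe", hr_system, ad_directory, precedence))
```

The design point is the precedence table: once the authoritative source for each attribute is stated in one place, every downstream system can be fed a consistent answer instead of each keeping its own copy.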

Another major issue for identity systems is, of course, the weakness of the password. Passwords have been the primary way that people identify who they are. Unfortunately, given the kind of critical information on these systems and the regulations that ask that these systems be secure – whether it is health data, financial data or customer records that only certain people should access – we are not going to be able to rely simply on passwords. Therefore, moving to biometric identification, and particularly to smart cards, is the way forward. This is something that has been talked about for several years, but now we finally see the leading-edge customers taking that step.

We have many partners helping to push this forward. One of the key elements for us is to have a smart card that connects in the best possible way to the Microsoft platform. Therefore, I am announcing today a .NET-based smart card. We have a key partner here, Axalto, who have done a super job on this. In fact, Microsoft itself will be using their smart cards internally for all access to our premises. Each employee will have a smart card and will use the same smart card to get in and out of the buildings as we use to connect to our machines. We are also requiring smart card use for any remote connections to our systems. Over time, we will completely replace passwords, so that even internal access will be done through this smart card. By having the .NET capability there, you can bring different logic and different information down onto the card itself, using the same development tools used for everything else. Hence, we have a richness and a continuity with the platform that only exists in that .NET environment. We are very excited to see our smart cards moving into the mainstream and connecting up to our infrastructure.

Dynamic Systems Initiative

Let me talk about our overall framework for how all of these IT activities can be connected together. The broad term we use for this is the Dynamic Systems Initiative (DSI), which is our phrase for moving away from thinking in terms of individual systems and manual activities. This vision of software doing very strong resource allocation is shared across the industry. In fact, we recently had a great advance with the web services management protocols that we and partners like Intel and Dell submitted to become the web services standards for this type of dynamic management, which means modeling systems and applications; DSI includes both system and application descriptions.

In order to do this well, we need to bring in all the different parts of the IT lifecycle: developers, analysts and operations people all need to be connected. To do this really well, it has to be built into the development and modeling tools; in our case, Visual Studio. The 2005 release of Visual Studio, which is in beta right now and should be finalized around the middle of next year, is a significant step towards making DSI a reality. We believe that this is a software problem, and a very important set of things to do. It requires strong work with partners, and we are very proud of what we are doing there. All of these tools fit under the DSI initiative. DSI provides the common description language that will let all of these things connect together. That is what will get us to the ultimate level of efficiency.

I mentioned the importance of partnership, which spans our entire management space: working with the hardware companies and the applications providers to make sure that the pieces really fit together. I would say a very important partner, if not our most important partner, in doing this is Dell. We are very excited about some joint initiatives that we are taking with Dell. Michael Dell has some comments about the partnership; let us hear from the Chairman of Dell.

[VIDEO] MICHAEL DELL: Bill, thanks for the great introduction and for inviting me to be a part of your keynote at IT Forum. Microsoft’s focus on systems management and solving our customers’ most challenging data centre issues is shared at Dell. In fact, the Development Partnership for Change Management that Kevin Rollins and Steve Ballmer announced yesterday will send a life raft to customers who have been manually updating their servers every day, and a wake-up call to the industry to focus on customer-centered product development. We are just getting started.

The IT Forum is an important opportunity for customers to learn about new management solutions that are in development and to share your vision of the perfect data centre with us. At Dell, our vision for the data centre is one built around industry-standard building blocks. It is a vision we call the Scalable Enterprise. Our goal is to simplify your operations, improve utilization, and cost-effectively allow you to scale today while building a path to the future.

Microsoft’s vision, the Dynamic Systems Initiative, complements the Scalable Enterprise with an excellent set of solutions to help customers manage standards-based servers, storage and networks in distributed environments. Dell and Microsoft have taken an innovative, customer-centric approach to systems management by delivering the first hardware change management solution that leverages existing OS applications.

You will have fewer management tools, resulting in lower administrative costs. We are doing this by providing our OpenManage 4 source code to Microsoft so that it can be directly integrated into Microsoft SMS and MOM. Dell and Microsoft customers will be the only ones in the world to have one tool and one process to apply operating system patches, application updates and server software updates. We are calling this the “One Click” experience. As we look to the future, our strategies are aligned to achieve these goals, and we have teams working on the next phases. We know that together we can truly enable the data centre of the future.

Thank you, and enjoy the conference.

BILL GATES: As Michael said, what we are doing with the partnership, using the new SMS Feature Pack to deliver even the hardware upgrades, is a first. It really fits into the theme of using the magic of software to reduce complexity. We have a number of other initiatives with Dell along those same lines. Dell and Microsoft have reached an agreement for Dell to distribute a version of MOM that we call the Workgroup Edition. Starting early next year, that product, aimed at medium-sized customers, will be available as a Dell offering.

We have also announced a partnership with a company called Vintela that helps us extend SMS to manage not only Windows systems, but also UNIX and Linux environments. People with mixed operating systems can still look to SMS and MOM as the overall umbrella solution providing one simple way to make sure they take care of their updating needs. We have a broad relationship with Intel, including allowing our sales support and infrastructure to work with those products – an investment by Microsoft and Intel – so that those companies can work together. From the largest partners, the category Dell fits into, to software vendors like Vintela, we feel the pieces are coming together very strongly to deliver on this vision, the Dynamic Systems Initiative.

One thing that often interests customers is how we take all this information and see it in a rich, straightforward way. Obviously, we build many different ways of looking at information into our management tools, so the common views are going to be right there. But for the greatest flexibility, for being able to take the full power of our reporting server and have it operate against all the management events and data logs out there, we have this idea of System Center. It connects up to SMS and MOM, and it gives you the ultimate flexibility in visualizing exactly what is going on. We think this is a great way to do performance modeling and to look at where the different bottlenecks are. Because it sits on a SQL reporting server infrastructure, the ability to mine the data makes it a very powerful environment.

We have a view that will take this System Center concept even further, providing very tight integration so that across the whole lifecycle of development, design and analysis, people can sit in System Center, have the data warehouse connect up, and look at the desired performance state and what is actually going on. There is a lot of rich modeling and performance analysis capability that will be built in. You can think of this as a key application for the IT group, almost like financial management is for the business group. It lets them get at everything going on and understand it in a very deep way.

For our third and final demo, I will ask Bill to show us a good example of what we mean by intelligent management in capacity planning.

BILL ANDERSON: I want to look through a couple of things that we will be doing in the next 12-24 months in the areas of integrating our data and moving the needle forward in capacity planning and modeling. Bill mentioned the System Center reporting server, which will be available in the first half of next year. I want to walk you through a couple of examples of how reports that integrate change, config, operations and performance data can help you solve some key business problems.

Often, you will spend a lot of time doing proactive performance management, but if a change is injected into your system, it can be very difficult to understand whether it had an impact on your ongoing performance. By taking this integrated data, we can have a look at that.

I have a performance comparison report that allows me to look at changes executed across my server infrastructure and correlate them against performance measures. At the top, you see software changes. These are advertisements from SMS 2003; in this case, the different patches and updates delivered to our servers thus far. But it is also able to track my performance results in the same window. I do not see anything inconsistent with the early changes. I might see a spike from an interruption of service, possibly, but looking on or around November 8 or 9, you can see that my performance has flat-lined on me. This looks like some form of problem. By moving back up to the software changes, I can visually correlate and see that my performance problem occurred on x date. You can see a “Vulnerability in Authenticode” error at the bottom. It seems to have been applied somewhere in the same area. I would never simply assume that that was the problem at this point, but this can dramatically reduce the amount of time spent troubleshooting the problem and getting to the real root cause.
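The correlation this report performs, lining up change events against a performance series and flagging changes that landed just before a regression, can be approximated in a few lines. This is a hypothetical sketch, not the actual report logic; the data values, the drop threshold and the correlation window below are all invented for illustration (only the November dates and the Authenticode update echo the demo).

```python
# Sketch: flag software changes that were applied shortly before a sharp
# drop in a daily performance series. All data and thresholds are made up.
from datetime import date, timedelta

changes = [
    (date(2004, 11, 2), "Security update A"),                    # hypothetical
    (date(2004, 11, 8), "Vulnerability in Authenticode patch"),  # from the demo
]
# (day, requests/sec) — throughput flat-lines after November 8.
perf = [(date(2004, 11, d), rps) for d, rps in
        [(5, 410), (6, 395), (7, 405), (8, 120), (9, 115), (10, 118)]]

def suspects(changes, perf, drop_ratio=0.5, window_days=2):
    """Return (day, change) pairs where a change landed within `window_days`
    before a day whose throughput fell below `drop_ratio` of the prior day."""
    flagged = []
    for (day_prev, v_prev), (day_cur, v_cur) in zip(perf, perf[1:]):
        if v_cur < drop_ratio * v_prev:  # sharp regression between samples
            for change_day, desc in changes:
                if timedelta(0) <= day_cur - change_day <= timedelta(days=window_days):
                    flagged.append((day_cur, desc))
    return flagged
```

Run against the sample data, `suspects(changes, perf)` flags only the Authenticode patch, since it is the one change that falls inside the window before the November 8 drop. As the speaker notes, this narrows the search for a root cause rather than proving it.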

With the ability to integrate operational, performance, change and config data, I can start understanding what my systems can do for me. We will take a look at an example with Exchange. We have another report on the reporting server that allows me to look at my Exchange traffic, specifically the messages per second, by day, running through my Exchange infrastructure. The report at the top is something you can generate from MOM today. But the real power is being able to look at this data and, through the same lens, look below to see each server and understand the inventory of those servers from SMS. You can then put the two together and say, “This one seems to be performing because it has the right horsepower: the right processor, the right memory, the right disk and spindle speeds.”

Capacity planning is a problem that a lot of you probably spend a significant amount of time on. We think the reporting server will be a good start. But there is some other work we are doing through Microsoft Research that will, we hope, move the needle even farther. There is a project we are working on, codenamed “Indy,” which is based at Microsoft Research in Cambridge, U.K.

It is really a set of deep analytics around the performance variables inside hardware. The project’s goal is the ability to consume a model. Bill mentioned standard models and the ability to describe how an application should behave. Being able to use that model, take that data, and provide a prescriptive architecture that gives you the guidance necessary to deploy is what Indy will allow us to do.

What I have is a simple Exchange diagram for my corporation: a single server in Europe, a couple in Asia, and four in the States. What I have done through “Indy” is manually enter the number of users at each location and some of their e-mail behaviors. What “Indy” has done is give me this prescriptive architecture, showing me the topology I should have. Most sizing tools will stop there. But “Indy” allows us to run a simulation against this topology and look in depth at the bottlenecks. I will simulate the model against this topology, and it will come back for each of my servers and their key roles. It will show me the performance counters and utilization of each one of those elements: CPU, NIC, RAID; all the different pieces crucial to sizing and capacity planning. As I scroll through, I see nothing outside of the norm. It looks like a topology that will probably work for me.

I do not know about your business, but ours is quite fluid. We are constantly creating business partnerships, making mergers and acquisitions, and changing our business and our business variables. One thing that “Indy” will also allow us to do is make a change, re-prescribe the architecture, and re-simulate the model. So, it is not only a one-time snapshot; it can also be used as an ongoing tool. I will change the parameters now. Instead of 400 employees in Europe, let us say we have 2,000. I will now simulate the new architecture. By doing these analytics, it says that, topology-wise, we need to increase our capacity inside Europe. That is what we think, anyway. But let us simulate the model again. It will simulate that model and its use cases against the topology it proposed, and reverify that the hardware and software performance levels meet the capacity needed.
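The prescribe-then-simulate loop described in this demo can be illustrated with a toy capacity model: given users per site and an assumed per-user load, prescribe a server count per site, then verify that every site stays within a utilization target; change the inputs and run the loop again. This is not the actual “Indy” analytics; the per-user message rate, server capacity and utilization target below are invented assumptions.

```python
# Toy prescribe/simulate loop in the spirit of the "Indy" demo.
# All capacity figures are invented assumptions for illustration.
import math

MSGS_PER_USER_PER_SEC = 0.02  # assumed per-user e-mail load
SERVER_CAPACITY_MPS = 10.0    # assumed messages/sec one server sustains
TARGET_UTILIZATION = 0.7      # headroom we design for

def prescribe(users_by_site):
    """Return {site: (servers, utilization)} for the given user counts."""
    topology = {}
    for site, users in users_by_site.items():
        load = users * MSGS_PER_USER_PER_SEC
        servers = max(1, math.ceil(load / (SERVER_CAPACITY_MPS * TARGET_UTILIZATION)))
        topology[site] = (servers, load / (servers * SERVER_CAPACITY_MPS))
    return topology

def simulate_ok(topology):
    """Re-verify the prescribed topology: every site at or under target."""
    return all(util <= TARGET_UTILIZATION for _, util in topology.values())

plan = prescribe({"Europe": 400, "Asia": 900, "US": 2500})
# Business change: Europe grows from 400 to 2,000 users.
# Re-prescribe the architecture, then re-simulate against the new plan.
plan2 = prescribe({"Europe": 2000, "Asia": 900, "US": 2500})
```

Under these assumed numbers, growing Europe from 400 to 2,000 users raises the prescribed European server count from two to six, and `simulate_ok` then re-verifies that the new topology stays within the utilization target, which is the snapshot-versus-ongoing-tool distinction the speaker draws.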

We hope that through this modeling technology and prescriptive architecture, our management tools like SMS and MOM, and the System Center line consuming this data, we can give you all the necessary details to run your business economically.

Microsoft Product Roadmap

BILL GATES: We have great belief in this modeling approach. That is where we can save a dramatic amount of cost, and let you really plan ahead, and have far more reliable systems. The models can let you do things like capacity planning.

I want to spend a few minutes on the roadmap for Microsoft products coming out this year and next year, and then in the “Longhorn” wave from 2006. There are a lot of exciting things happening. The theme we have struck today is about using software to reduce complexity; that is a very big theme for us. In parallel with that, we are providing new and rich capabilities. We are ushering in the wave of development tools that provide the web services approach. People talk about that as the service-oriented architecture; I believe it is the greatest change in how applications are designed that has taken place over the last decade. It is really a dream come true in terms of the flexibility that it provides, and it is really necessary to allow integration within your enterprise and across enterprise boundaries. Without this architecture, the full realization of e-commerce effectiveness would never have taken place. It is fantastic that the industry has really come together around the web services standards. Whether it is Microsoft, IBM, Intel or many other companies, the standards are being laid down to work across systems of all types.

This year we made the announcements for Virtual Server and MOM 2005. We also have BizTalk 2004, our ISA Server update and our Host Integration Server update. Quite a few things have been happening in the last part of 2004. Next year is a very big year; it sees the arrival of 64-bit. As I have said, that is a really huge thing. As we have been going through the benchmark numbers for Windows Terminal Server, or for the new version of SQL Server in that environment, the ability to cache more data in memory makes a huge difference for performance-critical applications. This is not a case where you are going to have to think, “OK, let’s use 32-bit systems where we want inexpensive hardware and use 64-bit systems where we want expensive hardware.” The work that Intel and AMD do at the chip level means that these 64-bit capabilities are going to come into your servers with no premium in price. The same type of pricing you have with today’s servers will be available with 64-bit capability.

We are actually in the final stages of testing the 64-bit version of Windows. In fact, internally at Microsoft, we are already running a lot of applications on 64-bit Windows. Our Microsoft treasury department does our portfolio analysis using 64-bit Windows. All of the work we are doing in search, where we have literally thousands of servers building a capability that we think will go beyond what others like Google have done in the past, is being built on 64-bit Windows. That is a very strong foundation. We have a Windows Server update; Release 2 (R2) comes late next year. In the middle of the year, we have Visual Studio and the database update, SQL Server 2005, codenamed “Whidbey” and “Yukon” respectively. Those are a very big deal, with modeling coming in and Web services arriving in a rich way. There is another BizTalk update, a System Center update, a Commerce update and the Host Integration update; quite a bit, therefore, in 2005.

I would actually say, though, that in some ways 2006 is an even more important year, as this is where the advances under the codename “Longhorn” are coming out. We have a new client, where manageability and security have been made the top priorities. We have new development capabilities with the presentation advances in “Avalon,” the runtime package for Web services that we call “Indigo” and the rich file system – which goes way beyond the traditional file system – bringing together the benefits of file system and database, designed around XML constructs, and called “WinFS.” All of that becomes available to developers and users during 2006.

It is a very strong roadmap; a reflection of the substantial increase in R&D investment that we are making. It is really a response to what ISVs are interested in us delivering, so that they can focus on their solution work, rely on the platform to do a lot of the work that they have had to do themselves in the past, and address the needs of end users and IT.

Looking ahead, our optimism about how software can improve is stronger than ever. Some of the breakthroughs, like speech and ink and the modeling-based approach that we have talked about today, require many years of R&D, feedback and improvement. No doubt, however, there will come a point where, just like the graphical interface, we will take those things for granted, and we will have big business benefits coming from them.

All of these technologies need to relate to customer value: where customers see new applications and where they see their IT budget going. That is why I say that innovation is not just about new applications, but also about driving a level of simplicity that can open up those opportunities. There are big breakthroughs ahead. We appreciate your support; it has been great to see the attendance here at this conference, which has been a record for us. This is a great opportunity to continue the dialogue about exactly what we should do with R&D to allow you to meet your goals. It is great to be here, and thank you very much.
