EUT on Tour

The team will be attending the Microsoft Management Summit 2010



We also have updates from Lotusphere 09, Microsoft Management Summit 08, TechEd Europe 08 and the Lotus Leadership Alliance 08


Friday, April 23, 2010

Done and dusted!

I hope you have enjoyed the posts. Colin and I enjoyed writing them, as it 'locks in' what you are learning so much better.


(We did also complain it was an additional 5 hours on our day each day, so you better like them ;-) )


If you have any comments/questions let us know. I'm more than happy to present further at any meetings if something was particularly of interest.


If you have any tips for how we could improve our posts for you for next time, please let us know too!


So it's goodbye from him -










And it's goodbye from me!

Server Quarium

I said I'd try to get some pics of the Server Quarium that was running the labs so you could see some of the plasma screens. Some of these are examples of Configuration Manager Dashboards too. I think you can click through to see them larger.

Not sure how well these have come out -












Centralising and managing user data

This was just after 8am on the morning after the attendee party, so as you can guess it was not well attended. That was a shame, as it was very good.

The two presenters are part of the Windows File team and had done some work internally at MS on a pilot to centralise and manage user data.

Aims:

  1. 99.99% availability (less than 5 mins a year downtime in their environment)
  2. Near local access times, regardless of the location of user/data
  3. Recovery Point Objective (RPO) of zero data loss for the central location
  4. Single backup server
  5. Selective file/folder restore by end user.
  6. Same view of files wherever the user logs on
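As a side note on the availability arithmetic (my own sanity check, not from the session): converting an availability percentage into allowed annual downtime is a simple proportion - 99.99% allows roughly 52 minutes a year, while 'under 5 minutes' corresponds to the 99.999% ('five nines') tier.

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525600 minutes in a non-leap year

def annual_downtime_minutes(availability_pct):
    """Allowed downtime (minutes/year) for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% -> {annual_downtime_minutes(pct):.1f} min/year")
```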

Technologies leveraged:

  • 10GB quota per user
  • Folder redirection and offline file cache
  • Backups via SCDPM
  • Windows 7 - when a user logs on for the first time, files are moved to the local offline cache then synced with the server transparently. This is better than previous versions of Windows, which blocked access to the desktop until files were copied up to the server and then back to the local offline cache.
  • Slowlink mode in Windows 7 - detects when the link is slow and makes the user work locally, then syncs when the LAN/WAN is better
  • SMB 2.1 - better oplock model so the client can sleep (Office uses oplocks and would stop computers sleeping unnecessarily)
  • File System Resource Manager (FSRM) - quotas, allowed file types, periodic or on demand reporting to see storage trends etc
  • File Classification Infrastructure - assesses how files are used long term; can choose to compress, or tie into Hierarchical Storage Management (HSM)
  • Shadow copy for shared folders - allows users to be self sufficient in restoring previous file versions (they have to be online to recover).
  • Policies/GPO
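To illustrate the kind of storage-trend reporting FSRM gives you, here is a small sketch. The 10GB figure is from the session; the function, names and warning threshold are my own invention - FSRM does this natively on the server:

```python
QUOTA_GB = 10  # per-user quota from the session's pilot

def quota_report(usage_gb, quota_gb=QUOTA_GB, warn_at=0.9):
    """usage_gb: dict of user -> GB used.
    Returns (over_quota, approaching_quota) user lists."""
    over = [u for u, g in usage_gb.items() if g >= quota_gb]
    warning = [u for u, g in usage_gb.items()
               if quota_gb * warn_at <= g < quota_gb]
    return over, warning
```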

All the demos worked smoothly to prove it. We are doing some similar things with our solution, such as shadow copy and backups in the data centre. Here is a great example of how we can take this forward, particularly for roaming users - why should my data be tied to ISB if MTO can provide a much better service in every location, rather than good service in ISB but shocking service elsewhere?

Again one to consider for the roadmap.

Diagnostics and Recovery Toolset

The Diagnostics and Recovery Toolset (DART), is another great tool in the MDOP suite.

The MDOP suite typically saves $70-80 net per PC per year (Wipro research). DART can be $10 of that.

DART is basically a bootable CD/DVD (USB and WIM work but are not supported) that runs on WinRE (Windows Recovery Environment) and is used to troubleshoot/repair a client machine before just rebuilding.
It can:
  1. Recover an unbootable PC
  2. Detect and remove malware (whilst the PC is booted in WinRE)
  3. Delete, recover, save off files
  4. Reset local Admin password
  5. Manipulate services
  6. etc

Benefits:

- Accelerates TCO savings by minimising recovery time and preventing data loss.

- Recover instead of rebuild - saves user time and allows root cause analysis

Rebuilding an unbootable PC guarantees data loss; this tool gives you the option of data recovery in the worst case and full system recovery in the best. That way the user loses neither their data nor the time spent waiting on a rebuild and then setting things up just right.

In a case study of a company called Ultrasonic Precision Inc, help desk costs decreased 27% and end user downtime decreased by 50-60%.

The demos were very effective in providing the crash analysis of a blue screen, and in restoring data that had been accidentally deleted.

Tools included in DART:

  • ERD Regedit - similar to normal one
  • Locksmith - local admin PW reset
  • Crash analysis - assesses BSOD and gives reasons/help
  • File restore - will scan for all deleted files and give you a likelihood of recovery
  • Disk commander - repair MBR, recover volumes/partition table
  • Disk wipe - secure DoD level wipe to prevent data recovery
  • Computer management - similar to normal
  • Explorer - GUI based, not a command prompt as in normal WinRE; USB is active to save files off or copy them back to restore service
  • Solution Wizard - wizard to help you choose the right tool to fix the problem (I would think if you need the wizard, you maybe are not the right person to be doing the work - ironically the presenter just said that too)
  • TCP/IP config - if you want to get onto the LAN or ensure you can get to the internet for System Sweeper to get updates
  • Hotfix uninstall
  • System Sweeper - malware/rootkit detection tool
  • SFC Scan - System File Check can be used in Windows (assuming it boots); great to see it here at WinRE level (I have used SFC successfully a few times to correctly restore corrupt system files).

You can add the DART tools to a hidden system partition of your builds to ensure they are an F8 option for troubleshooting (probably should not include Locksmith).

Whilst it is an MDOP feature, once you are licensed for MDOP on your desktops, you can use it on servers too.

You can create a DART CD/DVD from within a virtual machine - very cool.

Find out more here

Desktop Error Reporting

I was also in this session and agree with what Colin has written: the Microsoft Desktop Optimisation Pack is a great tool, DEM is a great feature of it, and the presenter demonstrated it well.

Something we should definitely look further into to understand the cost impact of getting MDOP into our environment and using these tools.

Find out more about it here (PDF will open)

MMS 2010

Done!

Desktop Error Monitoring

This was an excellent session which covered a component of the MDOP suite called Desktop Error Monitoring (DEM). I was extremely impressed with this product demonstration and can see immediate use for it in both our current and future environments. The tool would primarily assist the tier 3 teams (EUT) in strategic problem solving, but would also be useful to tier 2 teams in terms of published problem management and statistical information. I understand that the tool itself is free; however, because we don't have enterprise desktop OS licensing, there will be some commercial issues which would need to be ironed out prior to us deploying. I certainly intend to pursue this investigation, and if necessary raise a business case to implement MDOP, as the benefits are clear and immediate.

In order to describe the product, the speakers first talked about why the product exists - this was mainly user need driven:

• Provide an immediate ROI
• Deliver end to end solutions
• Better TCO on desktops/laptops
• Requirement for low cost monitoring for knowledge and productivity issues
• Requirement for better visibility of desktop issues (users automatically reboot, often overwriting error data in the process)

DEM offers the following to help with the above:

• Crash monitoring
• Application and System crash/hang data captured and stored centrally
• Direct access to troubleshooting & solutions
• Agentless deployment (via group policy)
• Lower helpdesk volume calls
• Engagement with support partners
• Internal 'Watson' back-end
• Patch and update tracking
• Easy analysis of captured data reports

The requirements for a DEM deployment are pretty standard:

• A management server
• A reporting server
• An SQL server
• Active Directory
• Global Policies in use in the environment

It's worth noting that DEM is a separate product to SCCM, although SCCM does effectively do the same job albeit on a much bigger scale. DEM is focussed directly on the desktop/laptop environment.

DEM also offered such features as customisable web pages displayed on the desktop when a crash occurs - which means that if we have a solution or workaround already, the user is notified straightaway. This has an obvious effect of reducing helpdesk calls. DEM can also suppress the "Send details to Microsoft" dialog, which users as often as not will click "No" on - once deployed, DEM automatically sends the error data to the central server, and then can display the kind of web page as described above.

Along with application issues, DEM also records system errors such as the dreaded BSOD. One of the issues EUT has faced recently is collecting BSOD error data - our environment is such that this is not easy on all devices, and the user was usually forced to reboot before the full error log completed; this could be negated with the DEM system. It is often essential for our vendors that we provide complete error logging so that they can quickly resolve these types of issues, so anything that helps with this will be invaluable to us.

In addition to error data, DEM also captures the CAB file associated with application issues and bundles this in with the reporting - this would help Satyam with issues in packaging, and us with patching and update problems. When used in conjunction with crash analysis tools, this is a very powerful way of identifying issues in applications.

In terms of UI, DEM looks very much like SCCM. It has facilities to group similar issues together, but in granular detail (ie by revision/version of individual DLLs), so things like video driver errors are clearly visible, even on a cursory glance at the logs.
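The DLL-level grouping described above is easy to picture as a sketch - bucketing crash records by application, faulting module and exact module version, so a bad driver revision stands out immediately. Field names and data here are invented for illustration; this is not DEM's actual schema:

```python
from collections import Counter

def bucket_crashes(crash_records):
    """crash_records: iterable of dicts with 'app', 'module', 'version'.
    Returns a Counter keyed by (app, module, version), so the noisiest
    exact module revision floats to the top."""
    return Counter(
        (r["app"], r["module"], r["version"]) for r in crash_records
    )

# Invented sample data: two crashes in the same video driver revision.
crashes = [
    {"app": "excel.exe", "module": "nvdisp.dll", "version": "8.17.12"},
    {"app": "excel.exe", "module": "nvdisp.dll", "version": "8.17.12"},
    {"app": "winword.exe", "module": "mso.dll", "version": "14.0.4"},
]
top = bucket_crashes(crashes).most_common(1)[0]
# top is the nvdisp.dll 8.17.12 bucket with 2 hits
```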


As I said at the beginning of this article, I intend to follow this up with serious intent to raise a business case to implement this technology in our environment as soon as possible. It can be used very soon - as soon as the new AD is in production, to be exact - and I think the support teams will see the practical benefits immediately. Management should see benefits too - apart from the obvious potential to improve our problem management, quicker and more proactive issue resolution and the potential for ticket reduction, they will also enjoy the high level reporting available, with options to produce highly granular reporting if required.

Thursday, April 22, 2010

Best practices from Microsoft IT on Config Manager 2007

This was a nice wrap up to the day - the internal MS IT department team lead gave a presentation on how they handle the normal everyday jobs that all users of their products need to do.

The thing that surprised me was that they do not seem to be early adopters of their own technology... obviously they are heavily involved in the alpha, beta and QA for their new products (a process they delightfully call "dogfooding"), but in their own environment they have only recently implemented some of the things I just assumed they would use from day one of it going gold. To give you a couple of highlighted examples, they only began to deploy OS images using MDT six months ago, and use only one App-V based application throughout the entire organisation.

The other surprise was the size of their team - although the speaker did admit they outsourced for some tasks, their core team is only 13 people. This team services 274,000 clients based at six HQ and client sites globally.

Their SLAs are quite impressive too - for software compliance (patching etc) they adhere to 95% compliance within 3 business days for active exploit patching. For critical updates the SLA is 95% within nine business days.

A large portion of the presentation was around performance monitoring - with such a large organisation and such high data throughput, they needed to develop their own custom reporting, which they achieved with the LogMan tool and a bundle of custom scripting.

One last point which was quite interesting - they stated that their DC operational costs had reduced by 75% using a virtualisation strategy - they have defined an 8-1 virtual to physical server ratio. They claim that most of the 75% savings are down to power and physical server cost savings, along with standardising the builds for easy and fast provisioning.

Forefront Endpoint Protection 2010

As is becoming very routine now in these sessions, the speakers started off by extolling the virtues of the 'single pane of glass' approach to SCCM and its components, and Forefront is no exception. Again with this product, we would be able to manage a major portion of our infrastructure seamlessly from a single user interface.

Forefront, for those not familiar, is Microsoft's answer to antivirus, malware, spyware and firewall for enterprise customers. I had my reservations; previous consumer products have been, eh... not great, only offering basic protection at best. Forefront, however, has been designed from the ground up to be industry class, and my first impressions are that it may well become best of breed.

Of course, being an SCCM component, deployment of policy, updates and signature files are simple and managed in the same way as any other deployment.

In terms of provisioning Forefront to an environment, Microsoft have pushed the boat out somewhat to make it an admin's dream. All that is required is for the installation to be completed on a root site, and it's automatically provisioned across the hierarchy, automatically creating additionally required components such as distribution packages. Another good feature is that when deployed to clients, Forefront will (again!) automatically remove/uninstall any other protection software you have installed, although I'm guessing our heavily scripted installations may cause it some issues.

Some of the other benefits mentioned were:

• Protects clients without complexity
• Admin control of protection level
• Protects apps, file systems and network layers
• Template driven policy creation
• SCCM distribution
• Option to control via legacy group policy if required
• Ability to limit the client app's CPU utilisation of the PC, so as not to slow down users during mandatory scans
• By leveraging SCCM and WOL (Wake On LAN), updates and scans can be scheduled out of hours
• Centralised monitoring, alerting and reporting on protection levels, signature and update compliance across the environment via SCCM

Zero Touch Installation using MDT 2010 & SCCM 2007

This lab session went through the steps to configure SCCM/MDT 2010 up to the deployment phase for deploying a Windows 7 workstation image. The steps included:

• Configuration of the deployment environment
• Configuration of offline installation of language packs and updates
• Configuration of a new computer PXE environment installation of Windows 7
• A refresh install of Windows 7

The lab was fairly routine, but it was good to go through the steps as I suspect my team will be involved in this heavily in the future.

Configuration Manager v.Next: Device management

Just my 2p's worth....

This session was the best of the week for me - as you know, one of my passions is mobile devices, and especially finding ways to integrate them into the Mars environment to enhance the user experience by giving more choice and flexibility. I've previously reviewed the current crop of Mobile Device Management tools in my blog entries from TechEd 2008, and am very excited to see the new developments and functionality that will be available in SCCM v.Next, particularly as this may well be something we can implement in our new environment.

The speakers gave a few interesting statistics which I recorded:

• By 2013, there will be more smartphones in the enterprise than there are PCs in business today
• Devices are trending away from platform conformance (ie iPhone, Android etc are becoming more common)
• 75% of smartphones are consumer bought, but still used for business (guilty as charged m'lord...)

In Mars, this is particularly worth noting due to tight control over business supported mobile devices - associates and contractors who don't qualify will often look for alternative ways to access their corporate data, and in our environment this could pose a risk to us in terms of data security and corporate privacy, as we have no control over these devices currently.

Using the tools available today, we have the following opportunities to take control:

• SMS 2003 - Windows Mobile / CE devices only
• SCCM 2007 - CE 4.2 / Pocket PC 2003 - basic control and provisioning
• MDM 2008 SP1 - Windows Mobile 6.1, mobile VPN, Rich Device Management (remote wipe etc)

When v.Next is available, we can look forward to:

• Management integration in the same UI for desktop, server and mobile devices
• Over the air enrolment (using AD credentials)
• Mobile application deployment (this is cool, see below)
• Monitoring and remediation of non compliant devices
• Support for WinCE 5+, Windows Mobile 5/6/6.1 and Windows Phone 6.5
• Additional platform support (ie Nokia Symbian)
• Over the air inventory and setting management including software and patch deployment, remote device lock/unlock and wipe

The v.Next topology includes the following key server roles for device management:

• Enrolment web proxy point
• Enrolment service point
• Software catalog roles
• Management point
• Distribution point

The speakers went over some enrolment and deployment scenarios, describing the process for establishing mutual trust between the mobile device and the enrolment web proxy, which demonstrated the over the air provisioning. This can be invoked either by the admin in the console in a few easy steps, or by the remote user using the web based software catalogue which is part of SCCM's standard services. Whichever method is chosen, the end result is the user receiving a notification with instructions specific to their device type, including a one-time PIN which is valid for 8 hours by default. Once the user initiates the enrolment process on the device using the PIN, a secure session is established and enrolment completes in the background on the device. Once the process is complete (which can be bound either to the user's AD credentials or to credentials specific to the device), you're ready to deploy software, policy and patching to the device, along with over-the-air inventory, status reporting (ie memory, CPU, free storage etc) and remote control in the same way as any other domain device. Specifically for mobile devices, you may lock/unlock or wipe the device.
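To make the one-time PIN step concrete, here is a minimal sketch of the pattern. Everything here - the class, the 8-digit format, the redeem logic - is my own illustration of one-time PINs with expiry, not SCCM code; only the 8-hour default validity comes from the session:

```python
import secrets
import time

PIN_VALIDITY_SECONDS = 8 * 60 * 60  # 8-hour default from the session

class EnrolmentPins:
    """Issue and redeem one-time enrolment PINs (illustrative only)."""

    def __init__(self):
        self._pins = {}  # pin -> (user, expiry_timestamp)

    def issue(self, user, now=None):
        now = time.time() if now is None else now
        pin = f"{secrets.randbelow(10**8):08d}"  # random 8-digit PIN
        self._pins[pin] = (user, now + PIN_VALIDITY_SECONDS)
        return pin

    def redeem(self, pin, now=None):
        """Return the enrolling user if the PIN is valid, else None.
        A PIN can be redeemed at most once."""
        now = time.time() if now is None else now
        entry = self._pins.pop(pin, None)  # one-time: remove on use
        if entry is None:
            return None
        user, expiry = entry
        return user if now <= expiry else None
```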

Settings management for mobile devices direct from the console was also covered, and includes:

• Integrated mobile settings
• Support for monitoring and enforcement of policies
• Standard settings and simple UI which will be familiar to any SCCM admin
• Administrator defined settings via the mobile registry or OMA-URI (configuration via web link)
• All evaluation and remediation is done by the server so that the device isn't slowed by any processes required.

Alongside this, another great thing about this product is that you don't need to create separate security policies for mobile devices - rather, you use your baseline desktop/laptop policy and add a supplemental configuration item for mobile devices. This will save time for admins and security teams, and also ensure that sweeping security changes, for example a change to the 8/90 password policy, are effected for all device types at once without the need for many policy changes to encompass all devices. The configuration item contains controls for things specific to smartphones, such as Bluetooth networking and sharing, camera use etc, along with password lock policies.

Software distribution to mobile devices works in the same way as with any other SCCM deployment, so I won't go into detail here, however one point worth mentioning is that once a mobile device application or patch is packaged, it can be grouped into software collections along with the same applications for other devices on the DP servers, using the same requirement rules (for example device type, available memory and storage etc), and SCCM automatically works out which version to deploy to which device. Also, packages can be signed with a corporate certificate, so that the user can have confidence in the source, and the enterprise maintains continuity of the packages.

So to try and make this clear: if user Colin requires Adobe Reader and has a desktop PC and a smartphone, all the admin needs to do is deploy Adobe Reader once - it will appear on all devices where available and required. The only thing which isn't clear to me at this stage is how licensing constraints are observed here; user Colin may well own the application on his desktop, but may not be licensed on the mobile device - so I'm not currently clear on how this is handled. I am sure there will be a way though; it's not like Microsoft to miss something as fundamental to their business model as licensing!
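The 'deploy once, right variant per device' behaviour can be sketched as a set of requirement rules evaluated against each device. All names and thresholds below are invented for illustration; SCCM's actual rule engine is far richer:

```python
# Hypothetical package variants for one logical deployment, each with
# its own requirement rules (device type, minimum free storage).
VARIANTS = [
    {"name": "Adobe Reader (desktop MSI)", "device_type": "desktop",
     "min_storage_mb": 500},
    {"name": "Adobe Reader (mobile CAB)", "device_type": "smartphone",
     "min_storage_mb": 20},
]

def pick_variant(device, variants=VARIANTS):
    """Return the first variant whose rules the device satisfies,
    or None if no variant is applicable."""
    for v in variants:
        if (device["device_type"] == v["device_type"]
                and device["free_storage_mb"] >= v["min_storage_mb"]):
            return v["name"]
    return None
```

So a smartphone with 128MB free would receive the mobile CAB, while a desktop would receive the MSI - one deployment, two outcomes.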

Software distribution packages can be in several flavours, including MSI, App-V and mobile CAB. Software can be deployed either via SCCM or user initiated web based self service. The beauty of all this for admins, is that now, mobile devices can be treated pretty much in the same way as desktops and laptops all from the same UI, using the same packaging, monitoring and reporting functionality - giving us control of the devices in our environment finally!

Troubleshooting Windows 7 Deployments

This lecture was a little dry albeit very informative. The synopsis is: Windows 7 deployments can have problems, so check the multitude of log files for help and RTFM beforehand.

For those that would like some more tech detail:

Setupact.log - setup actions during process
setuperr.log - only the error messages - both of these need to be read together, and depending on what point the failure was at, they may be in different locations!
KB927521 has more
cbs.log - DISM commands - drivers, languages, security updates
setupapi.dev.log - %windir%\inf - driver install
netsetup.log - %windir%\Debug - Domain join errors
Windowsupdate.log - %windir% - Windows update, WSUS or SCCM (SUP) errors
wpeinit.log - startup issues in WinPE - gets deleted after reboot
wdsserver.log - WDS - logging is off by default - KB936625
usmtestimate.log - estimation of space errors
usmtcapture.log or scanstate.log - capturing the data
usmtrestore.log - restore errors
smsts.log - task sequence failures (another log that moves)
drivercatalog.log - import drivers
tasksequenceprovider.log - save or import task sequences
smspxe.log - pxe issues
smsprov.log - save or import task sequences too.
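Since several of these logs move around depending on the setup phase, a small collection script saves hunting for them one by one. This is just a sketch of the idea - the filename list is a subset of the logs above, and searching several roots is my own workaround for the 'they may be in different locations' problem:

```python
import os
import shutil

# A subset of the deployment logs listed above; several of them can
# live in more than one place, so we search roots rather than trust
# a single documented path.
LOG_NAMES = [
    "setupact.log", "setuperr.log", "cbs.log", "setupapi.dev.log",
    "netsetup.log", "WindowsUpdate.log", "smsts.log",
]

def collect_logs(search_roots, dest_dir, names=LOG_NAMES):
    """Walk each root and copy any matching deployment log into
    dest_dir. Returns the list of copied source paths."""
    os.makedirs(dest_dir, exist_ok=True)
    wanted = {n.lower() for n in names}
    found = []
    for root in search_roots:
        for dirpath, _dirs, files in os.walk(root):
            for f in files:
                if f.lower() in wanted:
                    src = os.path.join(dirpath, f)
                    # Prefix with the parent folder name so copies of a
                    # log from different locations don't overwrite.
                    tag = os.path.basename(dirpath.rstrip(os.sep)) or "root"
                    shutil.copy2(src, os.path.join(dest_dir, f"{tag}_{f}"))
                    found.append(src)
    return found
```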

In SCCM you can enable a checkbox for command support; if you then hold F8 during WinPE you can get a command prompt to go and find these logs. If you have got as far as Windows setup, it's Shift+F10. Having the command prompt window open holds off any reboot too.

Common issues:
  • Bad computer name - more than 15 characters
  • Mismatched product key to image file
  • Broken domain join - KB944353
  • Deploying with a KMS key! (KMS keys are for machines that provide keys to rest of org)
  • Crashes - check for stop errors, you may need to turn off auto reboot.
  • WinPE - generally networking related
  • SCCM - task sequences, hash mismatch (refresh DP - it is a bug MS cannot reproduce so far)
  • Make sure you test all task sequences at least once before deploying
  • Make sure packages are present (if not push them out)
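The bad-computer-name issue is easy to catch before deployment. Here is a minimal pre-flight check, assuming the standard NetBIOS rules - the 15 character maximum is from the session; the disallowed character set and the no-purely-numeric rule are my reading of Microsoft's naming conventions:

```python
import re

# Characters Microsoft's naming rules disallow in computer names,
# plus periods and whitespace, which cause their own trouble.
BAD_CHARS = re.compile(r'[\\/:*?"<>|.\s]')

def check_computer_name(name):
    """Return a list of problems with a proposed computer name.
    An empty list means the name looks deployable."""
    problems = []
    if not name:
        problems.append("name is empty")
        return problems
    if len(name) > 15:
        problems.append(f"{len(name)} characters (max 15 for NetBIOS)")
    if BAD_CHARS.search(name):
        problems.append("contains a disallowed character")
    if name.isdigit():
        problems.append("entirely numeric names are not allowed")
    return problems
```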

Finally he mentioned a tool called SMS Trace, which has the option to enter error codes - this can be very helpful.

App-V Overview

This session gave a brief overview of App-V, its benefits, and some demos.

Application deployments are costly (as we know); App-V enables a desktop virtualisation solution by virtualising applications. (This is often known as presentation virtualisation - think traditional Citrix - however this has evolved slightly and now both MS and Citrix include app streaming too.)

A couple of customer examples: one cut app deployment from 3 months to 3 days, another reduced packaging costs by 50%, and a third reduced the number of PC images they needed.

The App-v sequencer was demonstrated with the app packaged in a matter of minutes. Obviously this package can then be deployed to any hardware, quickly and managed centrally. You can choose to stream (allow user to start the app before it is installed locally) or just do the deployment - streaming would be great for a large app.

Obviously it all ties into the one infrastructure, one management product suite tools message and helps you become user centric.

The recent 4.6 release of App-V is 64-bit across servers/apps/infrastructure - making the best use of what you have in terms of performance and CPU/RAM usage. There are specific features that allow for better Office 2010 virtualisation. They have managed to create a shared cache for apps between VDI and Terminal Services, saving significantly on disk space - previously you would often have had a package for each. Best of all, each app operates independently, so an app crash does not take down the whole OS.

It is still a good product (as it was when it was Softricity SoftGrid) and may well be worth some investigation; we would need to contrast this with our existing investment in Citrix technologies.

Client of the Future: Capabilities, Considerations and Costs

This was a really good session and one that will be hard to illustrate without the slide deck. Once I have it, I am happy to take people through it in more detail.

The session was run by one of the strategists from the 'War on Costs' team internal to MS. It covered how previously the aim was to get standardised and locked down, and to make one size fit all. Now and into the future this is no longer true: we are shifting from being device centric to user centric, and should be looking at multiple ways to address users' needs rather than the one size fits all approach.

This is good, as it is what I have been pitching for a couple of years. One problem you have when trying to justify this is TCO: a cost based model is not great for:
- emerging technologies which start off more expensive but give you competitive advantage
- solutions with a high upfront cost (typically we will have this problem for premium services in our software catalog - how do we make it fair to the sites that take the higher cost early on?)
- where value is only for a subset of users (e.g. VDI for our business partners)

TCO also does not measure agility, efficiency, flexibility and productivity.

The presenter stated that a new model is needed which provides the 4 pillars of business value:
  1. Direct cost
  2. Agility
  3. Quality of Service
  4. Governance, risk, management and compliance

By building your cost models with these themes in mind you will get a more balanced view of what will work.

Next steps are to assess the various solutions against these and see what will be a good fit for your portfolio. Again, there were a number of steps to go through before having a user centric environment.

I'll leave the recap here, but hopefully it was enough to give you a taste. Once I get the slides I can elaborate further, but the detail on them and the pace he was going was more than I could get good notes down for and I'd like to do it justice.

To give a bit more background, this was the abstract:


Application virtualization? Workspace virtualization? Desktop virtualization? Composite desktops? Desktops-as-a-service? In an ever-more-complex game of "buzzword bingo" it has become very difficult to compare vendor offerings and choose the client-computing technologies and capabilities that will help you succeed as a business. This session leverages the "War on Cost" team's most recent research into client computing, and provides a framework for comparing the capabilities and considerations of emerging client models.

We'll compare the costs, benefits, and optimal use-cases for application virtualization, desktop virtualization and more; we'll discuss the impacts of each model on desktop deployment and management, datacenter workloads, application delivery, user productivity and business agility; and we'll highlight the key factors and best practices that must be considered when aligning your desktop strategy with business priorities. This session will equip you to make well-informed choices as you work to implement an agile and effective next-generation client-computing environment that meets your business needs.

Best practices from MS IT - SCCM 2007

Both Colin and I were in this session and as it was more relevant for him I'll let him add the details.

Key things I noted - MS IT manage 275k clients with their SCCM infrastructure, so we don't need to worry about scale!
They have 13 people globally to manage it all - servers and clients, patching, software updates, App-V, OS deployments - and two of these are permanent packagers. The rest of the packaging they outsource.

13 people, 275k machines - pretty impressive!

Does this make me a proper geek?

Yesterday I queued up to get a free SCOM 2007 R2 book, and the authors signed it for me.
In my defence I only went because it was free, don't judge me!




Mobile Device Management

So day 4 starts - it's cold, rainy and windy. Fortunately I get to spend all my time in a conference center ;-)

However it does mean the attendee pool party has been moved to the underground car park - not quite going to be the same atmosphere methinks!

First session today was about Device Management, and by that they mean specifically mobiles.

Fact roll:
  • Smartphones have increased significantly in importance to businesses
  • 2013 will see more smartphones in the enterprise than PC/Laptops
  • Trend is away from platform conformance (irritatingly for us)
  • Often consumer purchased but used for business (as an example all the pics here are with my personal smartphone)
  • Customers want 'a single pane of glass' view over their infrastructure from Servers to phones, not multiple consoles/infrastructures, and certainly not an infrastructure per vendor!

Microsoft have decided to invest more in this area and have thus rolled their System Center Mobile Device Manager product into Configuration Manager v.Next. They have already announced that they will support Nokia/Symbian platforms at RTM and state that they are working with other vendors, though no timelines are committed yet.

This is interesting, as when I was at MMS in 2008 they were saying they were in discussions with Apple and RIM, so either these discussions a) take a really, really long time, b) are not going well, or c) have lost focus... Might be worth getting a more formal roadmap under NDA to understand what is really happening.

Apart from Symbian, MS will also support WinCE 5.0+ and WM 6.1+. These devices will be able to do over the air enrollment, inventory, settings management, software distribution and remote wipe. WinCE devices won't be able to remote wipe or enrol over the air.

Over the air enrollment ties into SCCM and your EA certificate infrastructure (PKI - which we will have as part of Connex). This was demo'd well and worked flawlessly.

Admins can register users, or they will be able to self register.

They are working to make the user experience the same on all platforms, so things like offloading compliance check/remediation assessment to the SCCM server will ensure the user is not impacted regardless of how powerful their device is or what OS it runs.

Demos of settings management and software distribution were equally impressive, and tie into the new software catalog in v.Next - i.e. users can choose to register their phone in it and pick what software they want installed.

Public beta will be available by end May 2010

Really interesting session and I'm very hopeful it can be a good solution, though I would want to see some firm commitments on timelines and platforms. As we have Software Assurance on SCCM, we will be entitled to the product in the future anyway. Definitely one worth investigating!

Wednesday, April 21, 2010

Operating System deployment for ordinary admins

"Operating System deployment for ordinary admins" focussed on two free MS tools for Windows 7 deployment, namely MAP (Microsoft Assessment & Planning Toolkit) and ACT (Application Compatibility Toolkit).

Both toolkits should be in use (especially ACT, Stan!) in Mars already, and I enjoyed the overview. If anybody wants an overview of them, let me know.

Software updates for smart admins

"Software updates for smart admins" consisted of two admins, one from a 50k user strong company, the other with barely 2k users. The point was to show best practices on software updating from different perspectives, with often diverse methods, but ultimately achieving the same end result of software compliance. The session was lively and obviously both admins had very different views on achieving their goal, and although they didn't quite argue about it on stage, they did agree to disagree. The only things they did agree on were that WSUS was old and SCCM was infinitely easier to manage - that, and don't sync drivers, which seemed pretty obvious....who wants to download 70GB+ a month?

I will be getting the slide deck from this one though, as some of the methods described looked like they could save quite a bit of time for any admin - please let me know if you'd like a copy.

Monitoring Networks with Operations Manager 2007 R2

Next session was "Monitoring Networks with Operations Manager 2007 R2". I took a lot of notes on this one, as I can see the benefits to an ops team, in that we often need to go to the Central Processing or Enterprise Networks teams and say dumb things like "My application is running bad", whereas "Server 51 is connected to Switch ABC on port 1, and we're seeing a lot of dropped packets between 9-11am" would be a bit more useful.

As you'll doubtless already know, R2 supports SNMP (v1 & v2) and can create either SNMP or syslog workflows. What I didn't know is that it will also integrate with other monitoring solutions such as SolarWinds via a connector, so that we can see the outputs of that alerting system right in the Operations Manager console. Pretty cool eh?

The larger part of the session was devoted (of course!) to v.Next, and how this offers more functionality. Please note this is all work in progress so subject to change before it goes gold.

The key points I noted are:

* Out of the box monitoring/discovery and reporting
* Server to network dependency discovery
* Multi-vendor/multi-protocol support (SNMP v1/2/3 & IPv4/6 - note that discovery is IPv4 ONLY!)
* Better scalability

Discovery can be manual or automatic (auto needs only one router IP address to discover the entire network!) and can be scheduled, triggered via SNMP, or run on demand. This supports layer 2 & 3, VLAN memberships and HSRP (Cisco). Key monitored components by default are memory, CPU, port, interface card, PSU, temperature and voltage.
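That "seed with one router IP, walk the whole network" behaviour is essentially a breadth-first traversal of device neighbour tables. A minimal sketch of the idea - `get_neighbors` here is a hypothetical stand-in for the SNMP queries the real product would perform, and the toy topology is invented:

```python
from collections import deque

def discover_network(seed_ip, get_neighbors):
    """Breadth-first network discovery starting from a single seed router.

    get_neighbors(ip) is a hypothetical stand-in for the SNMP lookups
    (e.g. reading routing/ARP tables) that a real discovery engine uses.
    """
    found = {seed_ip}
    queue = deque([seed_ip])
    while queue:
        ip = queue.popleft()
        for neighbor in get_neighbors(ip):
            if neighbor not in found:
                found.add(neighbor)
                queue.append(neighbor)
    return found

# Invented toy topology: one seed router eventually reaches everything.
topology = {
    "10.0.0.1": ["10.0.0.2", "10.0.1.1"],
    "10.0.0.2": ["10.0.0.1"],
    "10.0.1.1": ["10.0.0.1", "10.0.1.2"],
    "10.0.1.2": ["10.0.1.1"],
}
devices = discover_network("10.0.0.1", lambda ip: topology.get(ip, []))
# All four devices are found from the single seed address
```

The same traversal works whether the neighbour data comes from layer 2 (switch tables) or layer 3 (routing tables), which is presumably why one router IP is enough.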

Monitoring defaults out of the box include port/interface up/down, traffic volume, CPU % utilisation, data drop and broadcast rates, memory counters (inc total and free RAM), PSU temperature and voltage and connection health end to end.

Final point was on inbuilt visualisation, which comes in either Dashboard or Diagram flavours - both looked common sense and useful, and of course were configurable ad infinitum.

SCCM 2007 - Configuration Manager v.Next migration

A lot of things here have been covered already, and note this is based on migration to the beta, which does not yet have full functionality.

Having said that, here's the deets:

The Migration Console is in the Administration tab of the v.Next console

Goals of migration: flatten the hierarchy, minimise WAN impact, maximise reuse of x64 hardware, assist migration of clients and objects

Plan - assess current environment, POC, design.
Requires SCCM 2007 R2 SP2, 64bit hw, SQL server 2008 SP1 cumulative update 6.

Deploy -
  1. Set up initial v.Next primary/CAS
  2. Configure software update point and sync updates
  3. Setup server roles
  4. Make sure hierarchy is operating and software deployment works

Migrate - map v.Next to existing 2007, migrate objects/clients/DPs, uninstall 2007 sites

All sounds so simple doesn't it ;-)

Enable migration in v.Next and specify the hierarchy - v.Next gathers info from 2007 for a baseline, and the info is retained for reporting and displaying progress. I have some more details for those interested.

Concern for me is that it seems to be a side by side migration, not an in place - does this mean we will potentially need to buy new hardware to do an upgrade not long after we have finished our migration?

Config Manager v.Next Admin UI

First up today (after the keynote) was "Config Manager v.Next Admin UI" and, as you can imagine, it was focussed on the improvements to the user interface compared with previous/current versions. This will only be interesting to existing users of SCCM; new users will just expect the "wunderbar" and ribbon approach, which is delivered. The UI changes are more than just cosmetic of course - of particular note are the very fast and easy sorting and filtering options, both of which are very configurable.

Tagging makes grouping very easy and is available on virtually all objects - so, for example, you could tie this in with Role Based Management and allow site admins to view only the objects, policies and devices on their own turf - an EU deployment manager could be configured to see only the relevant tasks, devices and functions he required. Even if we didn't use this kind of restriction, the technology would still be useful just to simplify views, reports and workflows for any admin working in the environment.
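The scoping idea is easy to picture as a tag filter over the object store. A rough illustration - the dict shape and tag names below are my own invention for the example, not the actual SCCM schema:

```python
def visible_objects(objects, admin_scopes):
    """Return only the objects whose tags intersect the admin's scopes.

    Invented data shapes for illustration - not the real SCCM object model.
    """
    return [o for o in objects if set(o["tags"]) & set(admin_scopes)]

objects = [
    {"name": "EU-Win7-Deploy",     "tags": ["EU", "deployment"]},
    {"name": "US-Patch-Policy",    "tags": ["US", "patching"]},
    {"name": "Global-AV-Baseline", "tags": ["EU", "US"]},
]

# An EU deployment manager's view: only EU-tagged objects survive the filter.
eu_view = visible_objects(objects, ["EU"])
```

The same filter serves both purposes mentioned above: hard security restriction when enforced server-side, or just a decluttered working view when applied in the console.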

Speaking of reports, they showed an overview of the new graphical functionality built into v.Next - this looked very Spectrum like, and was of course dynamic, allowing you to drill down through the environment, for example down to server certificates and application issues reported by the internal alerting engine.

Also of mention was the automatic deployment statistic reporting options, which by default right out of the box show performance and failure alerting.

Protecting Windows Clients with Data Protection Manager

Good session presented by two clearly passionate product managers. A quick survey of the room showed 50% were trying to back up laptops/desktops, but not one person was happy with their solution. Why?

  • Mobile users cause problems
  • Sheer volume of machines - how do you manage that data and scale policies across it?
  • Each user has different needs.

DPM 2010, released this week, addresses these! It removes any reliance on the end user and supports user roaming and customisation. You can still enforce admin-defined restrictions.

Basics - the first backup is a full backup; every subsequent backup is just the changed disk blocks - in essence giving you a full backup each time, whilst only moving small amounts of data from the device to the DPM server. You can do multiple backups during the day to allow users to restore previous versions of files as they work. If they are offsite it backs up locally - sure, if a disk fails you are stuck, but if they just want to restore an earlier version, they can still do this. Once back on network/VPN/DirectAccess it will sync with the DPM server.
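The block-change mechanic can be sketched in a few lines - hash each fixed-size block and ship only the blocks whose hash changed since the last backup. This is a deliberate simplification of whatever DPM actually does internally, with a toy block size for readability:

```python
import hashlib

BLOCK_SIZE = 4  # tiny for the example; real systems use KB/MB-sized blocks

def block_hashes(data):
    """Hash each fixed-size block of the data."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old_hashes, new_data):
    """Return (index, block) pairs for blocks that differ from the last backup."""
    new_hashes = block_hashes(new_data)
    return [
        (i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE])
        for i, h in enumerate(new_hashes)
        if i >= len(old_hashes) or h != old_hashes[i]
    ]

first = b"AAAABBBBCCCC"
baseline = block_hashes(first)            # full backup: every block shipped
second = b"AAAAXXXXCCCC"                  # only the middle block changed
delta = changed_blocks(baseline, second)  # subsequent backup ships one block
```

Keeping the per-block hashes server-side is what lets every "incremental" be reassembled into a logically full backup without re-sending unchanged data.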

Policy can configure backup locations, you can allow the users to add their own (or not). For example I have a bad habit of saving in progress files to my desktop and typically My Documents would be the location backed up - I could add my desktop to the locations to protect.

Users can also choose to sync just before they go offsite, and do self recovery. If they lose their laptop you can restore to a new machine with their login, or if they just need a file, they can log in to any machine and get what they need - e.g. someone forgets their USB stick with their PowerPoint.

The agent can be installed as part of a standard build, you only pay license costs when you start to do backups. This would be great as a premium service on top of SDS. Or a direct replacement for SDS backup.

A couple of flaws: each DPM server can only cater to 1000 clients, so assuming we need to run this from the datacenter, we would need significant server investment. Let's hope this becomes a cloud offering in the future!

2nd Keynote

Today Brad Anderson got to be the main man, as is tradition, first some stats:
  1. Windows 7 is the fastest selling OS in history

  2. In March 90 million Win7 machines were patched via Windows Update.

  3. Windows Update patches 725million PCs each month - bear in mind most corporates wont point to Windows Update.

SCCM 2007 R3 - will include more power management features. You can enable it in a data-gathering mode first to understand how your estate is used and what savings you could make. Typically, tweaking the power options when moving from Windows XP to Windows 7 has saved between $30-60 per machine. You will also be able to configure wake-up for out-of-band patch/app distribution.
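Those per-machine figures add up quickly. A back-of-envelope sketch using the quoted $30-60 range - the 10,000-machine estate size is an invented example number, not a Mars figure:

```python
def annual_power_savings(machines, low_per_machine=30, high_per_machine=60):
    """Estimate the annual savings range from power-policy tweaks.

    Uses the $30-60 per machine figure quoted in the session; the estate
    size passed in below is an invented example, not a real number.
    """
    return machines * low_per_machine, machines * high_per_machine

low, high = annual_power_savings(10_000)
# A 10,000-machine estate would save roughly $300k-$600k a year
```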

With the reports you can show CO2 savings as you implement the policies, so we could quantify the savings back to the sites. This helps as site power is obviously a different budget, so whilst Mars IS won't see the benefit, we can show the site what benefit they are getting because of our service.

SCCM 2007 R3 beta is available from today.

Brad says there are 5 things you need to build the core of your desktop strategy.

1) You must have one infrastructure to manage all your types of desktop - physical, VDI, App-V, etc. It must have comprehensive management tools for all the things you manage. Guess what? The System Center suite does this ;-) In all seriousness it is a good point; for so long we have tried to go for best of breed and often suffered - there is a lot to be said for the one-throat-to-choke approach.

2) A common way of integrating and managing all versions of virtualisation - VDI, VMs, App-V, Med-V, Hyper-V, VMware, Citrix etc. Speaking of Citrix, XenApp can now be managed by the System Center suite (available in 60 days). Configuration Manager will allow for increased automation/management of XenApp and its server infrastructure - from delivery of apps to the server, through to publishing them to end users. Using Citrix Dazzle, home users can get apps delivered via Citrix and SCCM.

Some Hyper-V tweaks - RemoteFX and Dynamic Memory. The first allows you to use a high-end graphics card in your Hyper-V server and provide full Windows Aero effects to end users with VDI - the GPU takes the workload so performance is not affected. Other VM providers cannot do this - it means the user experience is seamless from physical desktop to virtual. Sounds insignificant but is very impressive - they demo'd it running 720p HD video in a virtual machine with all Aero features on. Dynamic Memory essentially allows you to define a range of RAM for your VMs - as the user runs an intense app they can dynamically grow their RAM usage, and when they close it, it will reduce. This allows for much more efficient RAM usage and again a significantly improved user experience. These tools will be available in SP1 for W2k8 R2 (I can't wait for my home server ;-) ).
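The Dynamic Memory behaviour is, at its simplest, clamping each VM's current demand to an admin-defined min/max range. A toy sketch of that policy - VM names and figures are made up, and the real feature also balances against total host RAM, which this ignores:

```python
def assign_memory(vms):
    """Clamp each VM's current memory demand to its admin-defined range.

    A toy model of the dynamic-memory policy described above; VM names
    and numbers are invented, and host-level balancing is omitted.
    """
    return {
        name: max(cfg["min_mb"], min(cfg["max_mb"], cfg["demand_mb"]))
        for name, cfg in vms.items()
    }

vms = {
    "idle-desktop":   {"min_mb": 512, "max_mb": 4096, "demand_mb": 300},
    "busy-desktop":   {"min_mb": 512, "max_mb": 4096, "demand_mb": 6000},
    "normal-desktop": {"min_mb": 512, "max_mb": 4096, "demand_mb": 2048},
}
allocations = assign_memory(vms)
# idle is held at its floor, busy is capped at its ceiling,
# and the normal desktop gets exactly what it asked for
```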

3) Convergence of security and management - lower cost, simplified management and enhanced protection. The Forefront product will now run off System Center infrastructure (no additional servers required). It will be built into Configuration Manager, so you will get anti-virus/malware/spyware protection. This also ties into the one-infrastructure theme. RTM by end of year. As we consistently seem to have problems with our Symantec tools, maybe this would be worth a look! In fact the install package included in SCCM will auto-uninstall other vendors' security products to avoid headaches (almost like a virus itself!). This combination will tie into SQL Reporting Services and provide very rich reports to see overall status, any detections and so on. Tied into the Dashboard (see previous blog), a great addition to the office plasma!

4) Cloud-based client management, or as they define it, the 'route to the cloud'. I have blogged about Windows Intune, so see that post for more. My view is that in 2-5 years this product set will have developed enough to rival the on-premise solutions, so by the time we come to look at the desktop management infrastructure again, this may be a viable solution.

Then Brad went off on a bit of a detour from the Cloud to the System Center Service Manager tool. This tool had 2 main design principles: simplicity and tight integration with AD and System Center. As blogged previously, this tool will do compliance, incident/change and problem management. They gave an example of a customer having a meaningful CMDB within 2 hours of install - it is that simple.

In terms of compliance, it will do PCI, SOX, records management and one other I didn't catch. Service Manager can automate the discovery to assess compliance - demo'd in 3-4 clicks. If you already have VISA compliance and now want to check for AMEX, it will assess the delta that AMEX may require over VISA, without duplicating the work already covered. It can even auto-remediate to gain compliance. The integrated reporting can allow you to check compliance, or even generate the report direct for the auditor. Non-compliance can auto-generate a ticket for items it is unable to remediate. Microsoft will update the tool as regulations are updated.

Beta 2 is available in June, with RTM later in the year.

5) User focused - enabling productivity anywhere on any device (sounds familiar!) - a reiteration of much of what I have already written about Configuration Manager v.Next. They talked about auto-remediation of DCM/settings management, which is pretty cool - even to the extent of reinstalling apps a user may mistakenly remove.

There was a roadmap slide


Image credit: Hans Vredevoort - click the pic for his site.

Next years MMS will be at Mandalay Bay March 21st-25th 2011


Preview of one of tomorrow's sessions

The System Center Team blog has a write-up of what's coming tomorrow around device management. Take a read here

I'll write up the session once I have attended tomorrow afternoon.

Misled

My next session was SCCM 2007 on steroids - real life experiences.

What it actually was: a company called Adaptiva using three of its customers to do a sales pitch on its products - Client Health, One Site and Green IT with companion.

Worst of all, the presenters skipped the slides, so I had to guess at what was being demo'd!

Client Health: all the demos failed. They stated that typically 5-10% of all SCCM clients have errors (I'd be interested to hear in the comments, Mark/Raitis, if you think this is about right, and what you do about it, as this product does not work!)
The tool seemed overly complex and needed an additional admin console. I was not impressed, and neither was anyone else, as half the audience left.

For the other two products the demos worked, but again they were just not great products - if we wanted to invest in these areas, 1E have what appear to be better working products (Nomad, NightWatchman and the Power and Patch pack).

Overall an appalling session.

Windows Intune

Hello again! I'm skipping the keynote write-up until tonight, as I have skipped lunch to do some quick updates. I love my audience that much! To show you appreciate it, add some comments or tick the rating boxes!

After being launched on Monday as a beta (admittedly Beta 3), Windows Intune has already closed to new participants - what they thought would take a week filled up in a matter of hours, so you can see people are pretty excited.

Reminder of what it is - desktop management via a cloud service. Why MS think it is needed:
  1. Many customers struggle with non standard, multi version environments.
  2. Workers are in many locations
  3. Lack of insight into PC estate
  4. Cannot afford a huge infrastructure investment

By using Windows Intune you can avoid the above and deliver many additional features that small companies typically couldn't or don't do. Such as:

  1. Protect PCs from Malware
  2. Standardise on a version of Windows
  3. Upgrade to Win7 or downgrade to run a version of choice
  4. Automatic upgrades to new versions of the service
  5. Diagnostics and recovery toolset (which can recover even a non bootable PC)
  6. Access to all MDOP functionality (this is a great feature)
  7. Bitlocker to go (another great feature)
  8. No infrastructure required (so no hardware/OS/license costs or power etc)
  9. Predictable monthly billing

Signing up is via the MS Business Online services website, and you get access to the cloud-based admin console. From here you use a simple wizard to do some customisation and configuration, which gets saved as an .MSI - this can then be installed on your PC estate and voila - you are managing your estate via the cloud!

I can see a niche use for Mars - Royal Canin currently have a series of startup companies whilst they enter a new market, and each company has to manage its own IT. This would be a great solution to at least ensure they were patched, had anti-virus, were licence compliant and so on. It can even manage non-domain-joined machines the same as you would configure your traditional estate - this would ensure a consistent look, feel and user experience once the startup joins the Mars network fully.

The admin interface was very intuitive and very immediate - I was impressed, this being a beta product. There is context-sensitive help, so someone with basic IT skills should be able to manage PCs via this platform - again great for small companies where the focus is on the business, not managing the IT. The PC agent even has some self-healing built in to make remediation as simple as possible.

You can do things like export the hardware or software inventories - great for Commercial to check compliance.

It does include basic remote control but the user has to make a request. Further versions will see this expanded.

Whilst it is only for client machines now, I did notice it had some server things listed, so that may well be on the roadmap ;-) Another thing on the roadmap is software distribution - again this would be superb functionality to add.

Release should be within 12 months to NA/EU/Asia and Brazil

If you want to track the progress, the team at MS have a blog here

CIO.com review

CIO have added their views on the MS view of the cloud here

I'm off for Keynote 2, check back later for my thoughts, or watch it live here

Tuesday, April 20, 2010

More details...

I was going to bed when I thought I'd just check my RSS feeds and some relevant info popped up.

I mentioned Infrastructure Planning Guides for the Dynamic datacenter - just seen it is now available, see here

Mary Jo Foley on MS bridging Public/Private clouds here

RTM of System Center Essentials (SCE) and SCDPM here

Windows Intune (Cloud based desktop management) was showcased yesterday

There will be some Configuration Manager v.Next announcements tomorrow, so stay tuned!

Configuration Manager v.Next - Hierarchy Design

This presentation builds on the previous Configuration Manager v.Next talks and is around designing your architecture. For the non-techies, skip to the next post now!

You should have a Central Administration Site, one Primary, and Secondaries as required.

Central Admin Site - Location for all admin and reporting. No client data processing, no clients assigned and limited site roles.

Primary Site - services clients in a well-connected network. No tiered primaries; only add more for scale-out - not needed for data segmentation, client agent settings or network bandwidth control.

Secondary - services clients in remote locations where network control is needed. Bundles Proxy MP and DP for install. Tiered content routing via Secondary SQL replication.

Advanced features (multicast/streaming) are not available on file share only DPs (or W2k3 ones).

You can throttle and/or schedule to remote DPs

Branch DPs - can be run on a workstation with 100 or fewer clients; BITS gives you enough network control.
Utilise BranchCache if you have W2k8 R2 (Mars traditional will) - they have seen a 71% drop in network utilisation at one customer.

Replication stays file based for content, but is SQL for global and site data.
SQL reporting services is the only reporting tool that can be used.

Topology views are in the Sites tab rather than the event viewer - this will greatly aid troubleshooting replication, as the picture will show the alert and the link state.

Configuration Manager v.Next Overview - Mat's additions

As mentioned in both my and Col's previous posts, it is all about the User Centric Client Management and its 3 pillars:

  • Empower the end user
  • Unify the infrastructure/admin consoles, consolidate the separate tools (Mobile Device Manager is rolled into SCCM)
  • Control via improved feature sets and simplified processes.


This is based on a new definition of an end user - more tech savvy, used to the consumerisation of IT, a digital native. IDC predicts that there will be 1 billion mobile workers by 2011 - 75% of the US workforce will be mobile by the end of this year, and 80% of the Japanese.

Consequently demand for IT specialists will shrink (40% this year), and there will be an increase in 'balanced versatilists' (definite consultant speak - IT Pros will need more all-round knowledge and adaptability).

v.Next will embrace the user and move away from the device. It will provide a web based software catalog that users can pick from.

Tech details:

  • Can deploy apps to DP group
  • Role based security in admin consoles (which can be customised)- allows you to show only the items that role needs.
  • Can set a security scope, e.g. an EU admin would only see EU-relevant data.
  • Should be able to reduce infrastructure - primaries are needed for scale only, with them as an option for content distribution
  • Data segmentation for users - so they only see software catalog items relevant to them.
  • Using SQL transaction replication rather than file replication services (for some data)
  • Lots of work on client health to help troubleshooting and auto remediation.
  • Mobile Device Manager merged into v.Next. This will include cross-platform support, the ability to deploy apps to devices or users, DCM to devices, secure over-the-air enrollment, monitoring and remediation of out-of-compliance devices, and app allow/deny.
  • DCM (Desired configuration management) is now called Settings management.
  • Patching - auto deployment of specific things based on rules - e.g. windows defender definition updates (and a very simple interface to configure these rules, audible gasps of joy in the audience).
  • OSD (Operating System Deployment) offline servicing of images based on update baseline you set for the live environment - means new builds don't need to go through a patch process once built, they are build to the latest environment update level by default.
  • Boot media updates - hierarchy wide boot media, unattended boot media with pre execution hooks to auto select task sequences.
  • USMT 4.0 - hard link offline and shadow copy features, UI integration.
  • Remote control - send CTRL+ALT+Del to remote device is back.
  • Settings Management - v.Next can 'set' registry, wmi, scripts. Unified across servers, PCs and mobiles. Audit tracking.
  • Configuration Item revision history - version control in packages so you can see what has changed over time.

Cloud Computing in the Enterprise: Enabling the Foundation with the Dynamic Infrastructure Toolkit for System Center

This lecture was about the Dynamic Infrastructure toolkit for System Center, basically providing the tools to allow an enterprise to build their own private cloud and manage it. Deliver the service, manage the fabric.

They went through a lot of the same background that had been covered in the keynote which was a bit of a waste of time.

The toolkit is designed to help with:

  1. Self service (provisioning, not pw)
  2. Greater failure resilience
  3. Greater scale
  4. Consumption based charging
  5. Service catalog
  6. Service orientated (faster delivery)

They plotted the journey from the traditional data centre (<15% utilised) to the virtualised data centre (>50% utilised, moving from physical to virtual machines), to the private cloud (IT as a service, chargeback and a significant decrease in management costs), to the public cloud (capacity on demand, global reach).

They envision public/private clouds co-existing which I agree with. I cannot see this changing in the next few years whilst we have limiting legal regulation around data/user accounts or suspicion around the solutions. Once vendors have built up trust in the marketplace I can see the swing beginning where feasible.

The next stage of data centers is the IT PAC (Pre-Assembled Components) - a modular data center built as needs dictate. Think of examples like the World Cup or Olympics: a significant need that ramps up until the launch of the event, then is suddenly not needed.

The toolkit derives from their learning from managing their environments (Azure/Bing etc). System Center v.Next suite will cover all aspects of private cloud management.

Azure is PAAS (Platform as a service)

BPOS is SAAS (Software as a service)

New for the Cloud is IAAS - Infrastructure as a service, the foundation the cloud is built on.

The toolkit is available from Summer 2010 and will contain:

  • Architecture roadmap, Infrastructure planning guides (these are generally great), best practices
  • Out of the box capability for self service portal, provisioning engine.

Then a few demo's.

The licensing question was bugging me - how do you manage licensing or costs when anyone can self-provision and scale up an environment? Unfortunately the answer was not great - that's down to another product, SCVMM, which a) isn't quite true and b) isn't good enough.

I have a feeling the truth is that it has not really been thought about yet, so will be interesting to see if it gets documented/included at release or a later stage.

SCCM - State of the Union - Mat's view

As Colin said, the presenters were great - the best of the event so far!

They started off with a run-down of the top 10 codenames they came up with before Configuration Manager v.Next. As it is now focused on User Centric Client Management, UCCM was a candidate, before they considered what would happen if the Forefront brand was added (say it out loud), and then even worse, 'Enterprise' at the end. It got a good laugh and brought the audience in.

In the last 3 months Asset Intelligence has grown 30%; they are learning the gaps and adapting. They encouraged us to use the MS Assessment and Planning toolkit.

As Colin mentioned, they have worked with Adobe to ensure product updates are being rolled into SCCM/SCUP - this is a user-feedback-driven product enhancement. They talked about a partner product, Shavlik SCUPdates, which covers many more 3rd-party products and integrates with SCCM. (I have used this in a previous job with Windows Software Update Services and it is a good product.)

They talked about some of the stats they have gathered from those customers that enrolled in the feedback service and then showed the changes they have made based on that knowledge.
They also use forums both external to MS as well as TechNet. The top issues are commonly down to admins not reading the documents or SuperFlows, or failing to fully configure the products. Good to see RTFM is still key advice!

Over the next 12 months -
The Configuration Manager Information Experience team will be writing more SuperFlows, as well as a web-based help module.

v.Next will be able to deploy apps not just to end users but also to Citrix XenApp. This would allow a scenario where the full application is deployed to a user's primary machine, but when they roam they get their app via Citrix - a great solution for some key users.

SCCM R3 - more power management and reporting, scales to 300k machines, OEM media will work better, MDM licensing will be rolled in.

Lots of research has been done with customers and their end users. From this they have developed the v.Next marketing, all around 3 pillars - Empower, Unify and Control. I'll define these more in another post.

Their research with end users was useful, but often not in the way expected. An example given: if a generic notification popped up asking the user to take an action, they would generally ignore it; if it had the company logo, people would do what it said, even if the text told them to format their PC! It will be interesting to see what middle ground they get to ;-)

v.Next is currently being piloted within MS IT on beta 1 with 50k machines. The TAP (Technology Adoption Program) has 14 other customers: 6 have more than 100k machines, 7 more than 10 primary sites, 6 more than 100 secondary sites, and 8 more than 100 distribution points (DPs) - so pretty big/complex environments.

They then did 3 demos of things that may or may not make it into the final product and got the audience to vote. I actually think all 3 should be in, but won't list them here just in case.

Key things to prepare for v.Next:
  1. Flatten your hierarchy
  2. Use AD sites and services for site boundaries
  3. Break up collections that contain users/computers
  4. Use BranchCache
  5. UNC paths for source content
  6. Use App CI - will help with state based apps and detection methods.
  7. Use DCM

Keynote - Detail

As Colin has said I took lots of Notes. I also took some pics of the empty stage, to give an idea of the size

These were from the 1/3 nearest the stage - I reckon about 6k attendees could be accommodated.


You could really see the impact the flight ban has had on attendance, as it was maybe three-fifths full at most. Hopefully there will be more attendees tomorrow now flight restrictions have eased.


Anyhoo back to the details...


Brad Anderson started off as the warmup for his boss Bob Muglia, Brad took us through some stats -
3 out of 4 attendees use SCCM, 80% of which are already on R2
2 out of 3 use SCOM, again 80% at R2
50% of SCOM users take advantage of its heterogeneous features to manage Unix/Linux
50% of attendees use System Center Virtual Machine Manager (SCVMM)
25% use App-V
10% are beta'ing System Center Service Manager


7 years ago MS first announced the Dynamic Systems Initiative, the first step on the path to Dynamic IT. Now they are making it a reality, and the vision will continue to evolve.


He talked about things like the Lab management tool in Visual Studio 2010 which allows you to deploy your own test lab using Hyper-V and SCVMM.

Brad talked about Opalis, a recent acquisition, which has an orchestration feature that automates moving (virtual) dev environments into production, with the whole environment available at once - no more multiple changes over time. (Opalis is something we may own but is outside the scope of Connex - something for EUT to investigate further, I think.)


Next up was a demo, and a pretty impressive one. There is a feature within Hyper-V which allows you to do a long-distance live migration. This would allow failover between, say, ISB and MTO, with no user impact, as the servers would migrate in a live state even over the huge distance.


Obviously MS are keen to push new parts of the System Center suite, they talked about the human workflow of change and how it can slow the process, System Center Service Manager (SCSM) can now do a change automation based on ITIL. System Center Data Protection Manager (SCDPM) has better functionality for backing up (Hyper-V) based virtual machines, down to individual files on VMs, not just a snapshot. Multi site clustering with Hyper-V and System Center products....and so on.


Actually they are making some significant improvements. I could well see it being time for Mars to assess Hyper-V, certainly for Dev/QA environments, as it is much cheaper than VMware and seems to be catching up in functionality while adding features VMware does not have.


Further areas of improvement will include more compliance management in SCSM and SCDPM. It's all about proving how they have delivered, and continue to deliver, on their vision of Dynamic IT.


So Microsoft asks, "What next?"


The Cloud.


All the attributes MS have defined as Dynamic IT apply to the cloud, which they defined as just-in-time provisioning and scaling of services on shared hardware.


Why Cloud? It accelerates the speed and lowers the cost of IT. There were brief definitions of Public/Private clouds (hosted/in-house) and Shared/Dedicated (shared with other customers/a service dedicated to you).


Microsoft is working to provide dedicated clouds with Azure in the future (it is shared only for now).

They are looking to deliver one platform, one application model and one management solution across customer premises, partner clouds and MS clouds.


There are a few key enablers -

Hardware Model - Windows Server now runs on 75% of all servers globally. MS now buy servers in 2,000-server containers; they just plug in power, network and water. This is 10x more efficient than provisioning individual servers or racks. They are sharing the learnings with hardware partners and expect to see smaller containers offered to end users in the future.


Application Model - This is a set of services delivered as part of the cloud, which reduces dev time and gives increased scalability, higher availability and greater flexibility. Again, a 10x improvement over current methodologies, to be faster to market. We need to understand that servers will fail; applications, however, should not - the service should continue. MS are developing a new modelling language, currently code-named 'M', which allows a developer to build apps based on a model rather than traditional methods.


Operating Model - They have learnt a lot from running Bing as a service with a small number of admins. They have taken this knowledge and built it into Azure and System Center to improve their products. They can now have one admin managing thousands of servers! They suggest that IT jobs in this sector will evolve to provide a higher level of service, faster delivery etc. The underlying operating model enables this. They have seen (you guessed it) a 10x reduction in the cost of operations.


New features coming -
SCVMM v.Next will have the ability to manage OS/apps that run across multiple machines (1 OS, multiple VMs - this I have not explained well, I'll try to find more info over the rest of the week). Applications are referred to as 'fabric layers'.


Service Designer feature - allows you to deploy new services based on your templates (a customer logs a call for more Oracle capacity, the admin clicks on 'deploy Oracle service' and the capacity is provisioned). Basically, you can draw the picture of your service in Visio and SCVMM will deploy it. SCVMM will also scale up/down as the load increases/decreases, as per your requirements. Great for end users, a nightmare for licensing compliance!
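To make the scale up/down behaviour concrete, here is a minimal sketch (in Python, purely illustrative - this is not SCVMM's actual API or logic) of the kind of threshold-based decision an autoscaler like the one described might make. The thresholds and instance limits are hypothetical:

```python
# Illustrative threshold-based autoscaling decision, of the kind a tool
# like SCVMM might apply to a deployed service. All names and numbers
# here are assumptions for the sketch, not real SCVMM behaviour.

def scale_decision(current_instances, avg_load,
                   scale_up_at=0.75, scale_down_at=0.25,
                   min_instances=1, max_instances=10):
    """Return a new instance count for a service tier.

    avg_load is the average utilisation across instances (0.0 - 1.0).
    """
    if avg_load > scale_up_at and current_instances < max_instances:
        return current_instances + 1   # under load: add capacity
    if avg_load < scale_down_at and current_instances > min_instances:
        return current_instances - 1   # idle: reclaim capacity
    return current_instances           # within bounds: no change


print(scale_decision(3, 0.90))  # heavy load -> scale up
print(scale_decision(3, 0.10))  # idle -> scale down
print(scale_decision(3, 0.50))  # steady -> unchanged
```

The licensing-compliance worry above is exactly why the bounds matter: without a hard `max_instances`, automatic scale-up could silently provision more licensed capacity than you have paid for.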


Server App-V lets multiple apps run on the one OS independently, while SCVMM manages the underlying application fabric.


Greater control of offline patching - remember, the app service must stay up; this gives more control and automation.


SQL Azure - running SQL as a service across six datacenters and thousands of servers; provisioning a new DB is as simple as clicking on a web page.


Finally, integrated monitoring between on-premise and cloud - a SCOM management pack for Azure is shipping later this year. The demo showed a diagram of the environment with hardware on site and in the cloud; a simulated problem in the cloud raised an alert via SCOM, allowing the admin to run a task to provision more capacity in the cloud. Again very impressive, but how do you manage the cost of this up/down scaling and the capacity required on standby? I think contracts will be very interesting!


My takeaways - we probably need to look at all the features of the products we have bought as part of Connex, not just focus on the immediate need (a common mistake in Mars and industry-wide, I think). There is much more to many of these tools that could allow for greater automation and much slicker operations with just a bit more upfront effort.


Secondly, we need to think more holistically and not just in our GIST silos; products EUT are using will be more than useful to other teams, and we need to ensure we highlight these to our colleagues (as we have done with SCCM and SCOM to Processing). This is probably a great example of where an Enterprise Architect function would be particularly useful - I think Chris Lane is going to be busy ;-)


I'll leave it here as I have another 4 sessions to blog, but you can find more info below.


For those that would like to watch today's keynote, it is now available here


Finally, tomorrow's keynote will be streamed live here from 8.30am Pacific.