The final session I attended was Deploying & Scaling OCS Group Chat services.
In summary, MS have acquired a product which provides a chatroom facility (similar to IRC), but adds security (AD based) and logging. Whilst it does require a separate server from the OCS messaging server, there is no additional application licensing required.
At present, it does require a separate client, but it coexists nicely with the OCS client, and there are plans to combine the two clients in future.
Some of the examples cited as potential benefit cases were project discussion rooms (because an archive of discussions is kept) and global support teams, whose members can come online and read the previous 18-24 hours' discussion to see which issues/topics were covered.
-------------------
This is my final blog post from TechEd 2008. See you in Berlin in November 2009!
Friday, November 7, 2008
TechEd 2008, Day 5, 14:45
I attended a session called "A case of the unexplained", presented by Mark Russinovich.
If you've worked with Windows at the technical level for more than a few years, you've probably heard of Mark. Or, if not, you've used software he's written. Anything from Sysinternals, and much of the stuff in the Resource Kits - he's had a hand in it somewhere.
The session today was demonstrating how to diagnose performance problems, application crashes and the dreaded blue screen of death. There's a useful toolset from MS SysInternals which (when used correctly) can help you identify which software package, even down to which DLL, is causing problems and why.
It's heavy techy stuff, and not something we'd expect lower level analysts to do. It also takes a good deal of time and patience. But if a problem occurs often enough, or is critical enough, there are steps we can take.
TechEd 2008, Day 5, 11:45
The session I just attended was called "Connecting your world". Whilst it was mainly aimed at consumer-grade users, it demonstrated a lot of the new features in Windows Live and Live Mesh, such as mobile blogging and photo tagging, and synchronising files via the cloud (to desktops, Windows Mobile devices, etc.).
They also showed some of the new photo gallery and manipulation tools that Microsoft Labs are producing. Photosynth is already in production, but they are intending to have photo stitching and High-Def stitching and viewing integrated into Vista very soon. If you want to know more about these, I will no doubt be demoing them to the Mars Photography club very soon.
The final part was a demo of the new features in Virtual Earth and WorldWide Telescope, which have both recently been upgraded. For VE, it's mainly the US data that has been upgraded.
TechEd, Day 5, 10:15
Just attended a session on Certificate Management in Exchange.
Not much to report - it was predominantly about SSL certificates when publishing Outlook Web Access or Exchange RPC-over-HTTPS. Having our own certificate authority certainly makes it easier!
Thursday, November 6, 2008
Microsoft TechEd, Day 4, 18:00
Head's starting to spin a little, but I just attended a session called "21st Century Networking: Time to throw out your medieval gateways".
It's an interesting take on the state of network design, and starts off by telling us what we know is true, deep down, but never tell anyone: Network Firewalls are useless.
Because so much traffic tunnels over other ports, or random ports, or malicious code comes in via valid network ports, having a port based firewall is not going to stop stuff getting in or out.
And, nowadays, the operating system itself is reasonably secure. Attacks are coming in via applications and running services, rather than against the OS.
The solution to this is, basically, to retreat: the network, even an internal LAN, should be considered a hostile place. Get used to it anyway, because IPv6 addresses are globally routable and tunnellable too.
So, to mitigate this, Microsoft recommend dividing client machines into two groups: managed and unmanaged.
Unmanaged clients are the PCs/devices beyond the control of the company. Home users, Internet cafes - ones not in the domain. To mitigate the risks from these, use two-factor authentication (smart cards were the recommended option) and have the user access the corporate servers via MS's Intelligent Application Gateway software. This gets installed in front of the application server, and provides an application-level firewall which can be modified based on the user's permissions. So, for example, if a user accesses a web page and there's data on the server that the user is not authorised to access, then the gateway filters that information out of the server's responses before allowing the server-to-client traffic through.
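To make the idea concrete, the behaviour described - an application-layer gateway stripping unauthorised data out of server responses based on the user's permissions - can be sketched roughly like this. (The names and permission rules below are my own invention for illustration; this is not IAG's actual API or configuration model.)

```python
# Rough sketch of permission-based response filtering, as an application-level
# gateway might apply it. All names here are illustrative, not IAG's real API.

# Which response fields each role is allowed to see (hypothetical rules)
PERMISSIONS = {
    "manager": {"name", "salary", "review"},
    "staff": {"name"},
}

def filter_response(record: dict, role: str) -> dict:
    """Strip any fields the user's role is not authorised to see
    before the server's response is passed back to the client."""
    allowed = PERMISSIONS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}

record = {"name": "Alice", "salary": 50000, "review": "Exceeds expectations"}
print(filter_response(record, "staff"))    # only the "name" field survives
print(filter_response(record, "manager"))  # the full record passes through
```

The server itself never has to know about the client's permissions; the gateway applies the policy on the way through, which is the point the speaker was making.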
Managed clients should have their own local firewall on by default, protecting the client from outside-in access. They should have an X.509 certificate and extensive IPsec policies delivered via Group Policy. Basically, the IPsec policy should contain the IPv6 address of every host in the corporate network (updated by GP every time they boot up), plus the IPv6 address of the corporate DNS server.
Using this method, the client can boot up anywhere in the world, in the company or from home. When the user tries to access a server, the corporate DNS server (which is public facing) provides the IP address. The IPSec policy then kicks in, requiring an encrypted tunnel between the client (using its own X.509 certificate, provided by the domain controller when the client machine joined the domain) and the server it is accessing (which has a certificate signed by the domain controller).
Totally invisible to the end user, no VPN required at all, and protected against man-in-the-middle attacks.
The servers, meanwhile, are hardened to only accept traffic from clients that is encrypted via IPsec, using a valid certificate signed by the domain controller. Therefore, only pre-authorised clients can access them, but from anywhere in the world and with no LAN-level firewall required.
As the speaker said: implementing this is 99% possible now with Windows software (the only piece missing is the ability to de-tunnel tunnelled IPv6-over-IPv4 connections, which you can do on Linux but which Windows won't have until early next year).
What is significantly harder is convincing bosses that you don't need a firewall anymore.
TechEd 2008, Day 4, 14:00
I attended a session on "Exchange 2007 Unified Messaging Component description and overview".
You'd be forgiven for thinking this was a dry and boring topic because... it was. That was 90 minutes of my life that I will never get back.
Microsoft TechEd 2008, Day 4, 11:30
My first session for today was on Co-existence and Migration with Exchange Online.
Most of the session was about provisioning new users and migration from local Exchange. As far as Domino migration goes, there are 3 main strategies:
1) Use IMAP, and only migrate mail - no calendars or contacts
2) Migrate from Domino to Exchange locally first (on a staging server), then migrate up to Online. The speaker cited a US company that did this with 1,000 users in one weekend. Friday night they did the migration from Domino to Exchange, then on the Saturday pushed them up to Online. On the Monday, the majority of users used Outlook Web Access while they did client deployments.
3) Partner with a 3rd party vendor. Apparently, our good friends at Quest are about to release a toolset to migrate from Domino directly to Exchange online.
Wednesday, November 5, 2008
Microsoft TechEd 2008, Day 3, 18:45
The final session of today was a security based one with Jesper Johansson. If you ever get a chance to attend one of his sessions, it's well worth your time.
The main points of the talk were that the nature of security threats has changed in a number of ways. There has been a vast reduction in "hobbyist" or "vanity" hackers, replaced with hackers who are motivated solely by money. This means that attacks are becoming more closely targeted, and less likely to draw attention.
The other effect of this is that attackers, spammers, etc. are business people - and they are prepared to go on the attack if their business model is threatened. This is what happened with Blue Security, a service which tracked spammers back to their source to see who authorised the spam and/or sold the product. The spammers not only performed a denial of service attack against Blue Security, but by using their own message tracking, they were able to determine which companies were using Blue's services, and attacked them in retribution, driving Blue out of business.
In addition, attacks are increasingly targeting the human element rather than the technology, encouraging the user to download software (Java apps, ActiveX, Flash, etc.) rather than directly breaking the operating system to implant malware.
The only solution to this is to educate users better (not more, but better) about how they can take responsibility for their own security, rather than relying on "someone else" to do it for them.
That's it for blogging today. I'm off to the Microsoft UK cocktail party, followed by the 1E drinks at a bar down the road. (As I'm representing Mat to them, I'll be sure to drink wine)
But, if I don't blog anything until about midday tomorrow, you'll know that it was a good night.
Microsoft TechEd 2008, Day 3, 16:45
Just stepped out of a session on integrating OCS 2007 with IP PABX systems. The demo was fairly limited (people build entire careers out of this stuff - how much can you do in 90 minutes?) but was demonstrating linking Cisco Call Manager to an OCS Mediation server.
Nice stuff, and given that OCS 2007 Release 2 supports dial-in conferencing too, deploying OCS could potentially provide all the services Mars needs: conferencing, video conferencing and voicemail too.
One of the suggestions floated was that for companies which already have a heavy investment in a PABX, you can replicate the environment and have a secondary dial plan with a prefix. So, for example:
I have extension 1151. If someone from a "standard desk phone" (or an external caller) were to call that number, it would ring on my desk. If they were to call 1151 from an OCS client, it would ring on OCS.
But... if they rang "91151" from a desk phone, it would bridge to the OCS client. When I go home at night, or travel OCB, I can forward my desk phone to "91151" and have the calls follow me.
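As a toy model (my own illustration, not anything shown in the session), the prefix rule amounts to a simple routing decision:

```python
# Toy model of the secondary dial plan described above (illustrative only).
# "9" as the OCS bridge prefix is the example from the post.
OCS_PREFIX = "9"

def route_call(dialled: str, from_ocs_client: bool = False) -> str:
    """Decide whether a dialled number lands on the PABX or on OCS."""
    if from_ocs_client:
        # Calls placed from an OCS client ring in OCS directly
        return f"OCS endpoint {dialled}"
    if dialled.startswith(OCS_PREFIX):
        # Strip the prefix and bridge the call across to the OCS client
        return f"OCS endpoint {dialled[len(OCS_PREFIX):]}"
    return f"PABX extension {dialled}"

print(route_call("1151"))         # rings the desk phone: PABX extension 1151
print(route_call("91151"))        # bridges to OCS: OCS endpoint 1151
print(route_call("1151", True))   # from an OCS client: OCS endpoint 1151
```

Forwarding the desk phone to "91151" then makes the PABX itself dial the prefixed number, so calls follow you to wherever the OCS client is signed in.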
Also note that, now that Windows Server 2008 R2 supports microphones/inbound audio over a Terminal Services link, it would be possible to publish OCS as a server-based application and have voice capability.
Microsoft TechEd 2008, Day 3, 14:45
Just come out of a session on Desktop Virtualisation Scenarios. Quite interesting, and a lot of it relates to the work that Stan is interested in regarding SDS roaming and things like that.
A few key points were:
- They discussed drive encryption again, both for laptops AND desktops - for desktops mainly because it mitigates the risk of hardware being stolen or improperly disposed of at end of life.
- Application virtualisation (via Microsoft's App-V - formerly SoftGrid) got a big push - particularly because it makes provisioning and re-imaging machines much faster. Only the thin OS needs to be pushed out, and the user can pull down just the applications they need as they use them.
- Offline Folders has been revamped a bit in Vista to improve the speed (I'd presume it might need SMB v2 as well). The key thing was that using Offline Folders in conjunction with App-V can help in two scenarios:
a) Laptop users can work offline or online transparently, but the data files are automatically sync'd to the server where they can be backed up. Using BitLocker to encrypt the drive mitigates the damage if the laptop is lost or stolen, but the data is backed up properly too with no end user action required, and is pushed down automatically to the replacement machine when they log in. (In this respect, Offline Folders might be a better solution than Sharepoint for H: drives)
b) A user can roam between a desktop, laptop and Terminal Services session / VDI desktop freely, and have all their applications and data files follow them transparently.
- Microsoft have a product called "Windows Fundamentals for Legacy PCs", which is a scaled-down version of XP Embedded providing a basic UI, Internet Explorer and the TS client - just enough to get to a terminal services session or VDI desktop. They demonstrated it using a 9-year-old laptop with 128MB of RAM providing a full Vista desktop.
Microsoft TechEd 2008, Day 3, 12:15
My brain is officially full. Lots of information from the Q&A session on Migrating Domino and Groupwise to Exchange. Not much on Groupwise, so fortunately most of the discussions were about Domino.
Firstly, the Microsoft Application Analysis tool is now deprecated, because the results were not always accurate and didn't provide good information. So, it's been dropped from the Transporter Suite and there are no plans to re-release it. MS recommends partnering with Binary Tree or Quest, who have better toolsets.
The Transporter Suite 2007 is apparently more reliable than the Exchange Connector 2003. The way they got that improvement was by dropping API-based mail transport and using SMTP to transport the mail! :-)
There are still a lot of issues for long-term co-existence. One specific one which causes problems is a recurring appointment with attendees on both sides of the connectors. Both sides will receive and process the message OK, but if the meeting owner updates the meeting, those changes won't flow across properly. (Apparently, the Binary Tree toolset has a fix for this issue too - MS doesn't.)
There are a lot of other issues which are in the presentation slides. Most of them are noted, but not fixed. Because of the problems, the trend nowadays is to migrate as quickly as possible - usually by moving the user across with an empty mailbox, or only one week's historical mail, then migrating the rest of the data soon after.
There are a lot of gotchas as far as migration and consolidation go. Too many to mention here, but don't forget to ask me about them when I get back.
Microsoft TechEd 2008, Day 3, 10:00
The first session of the day was Deploying and Migrating OCS Server 2007 Release 2.
There are a few feature changes between 2007 and Release 2 - which is at Release Candidate at the moment but due out in February - such as dial-in audio conferencing to an OCS voice chat, better Windows Mobile and BlackBerry integration and such. More details here: http://www.microsoft.com/Presspass/press/2008/oct08/10-14OCSR2PR.mspx
However, the architecture changes to support this mean that, like Exchange 2007, it requires 64 bit Windows under the hood. It also requires AD schema changes.
Tuesday, November 4, 2008
Microsoft TechEd 2008, Day 2, 18:00
Final session of the day was on "Upgrading to Exchange 2007". Sadly, it was primarily about upgrading FROM Exchange 2003, but it did raise a couple of significant tidbits worth knowing.
Firstly, Exchange 2007 apparently doesn't have any mail routing configuration - it uses the Active Directory Sites and Services configuration to determine which site is which, and follows the same replication topology that AD uses. This means that we'll need to consider this when designing the AD structure, and also lock down the topology against changes. If we went down the Exchange path, routing and replication changes would need stricter change controls involving both teams.
Secondly, Exchange cluster/failover changes require having the same operating system on both halves of the cluster. And, in-place upgrades of the OS are NOT supported for Exchange 2007 servers. In practical terms, this would mean that a centralised Exchange cluster built on Windows 2003 servers would face a lot of challenges when the time came to upgrade the OS to Server 2008. We'd basically need to build a new cluster, set up connectors and migrate the mailboxes individually. The user mailbox would be unavailable during the move.
So, to defer the pain, it would be easiest to deploy onto Windows 2008 servers, so we'd want a WST supported 2008 build first.
Microsoft TechEd 2008, Day 2, 15:30
Attended a session with Steve Riley, Senior Security Strategist with Microsoft Security. The session was called "Privacy: Who, What, Where?"
Most of the content covered was general in nature, and mostly covered risks associated with spyware, RFID chips, security breaches and such. The key message was that, in general, customers of a company don't seem to be aware of or concerned about information disclosure. As such, there is currently not much economic incentive for companies to take privacy and data security seriously. Often, it's cheaper to take the risk and pay government-imposed fines rather than do the right thing.
BitLocker, of course, rated a mention. Steve did say that now that BDE supports additional fixed disks and removable drives (as of Windows 7), there is little benefit in using both BitLocker and the Windows Encrypting File System - both mitigate against the same risks. Neither, though, will protect against documents being e-mailed or taken off a system using unencrypted devices.
One of the technologies to look at would be Windows Rights Management Server. Having a policy enforced by RMS would help manage the risk of a document "escaping" the network (or CTM.)
Microsoft TechEd 2008, Day 2, 12:30
The last session was on Exchange 2007 troubleshooting. Most of it was too techy to blog here, and primarily of interest only if we move to Exchange.
But they also covered off the general troubleshooting fundamentals, these being (and I'm paraphrasing here):
Know your stuff;
Have a baseline, and proactively monitor systems to check for changes;
Think of the implications before you make a change.
We all know these things, but it's still good to be reminded occasionally.
Oh, and the other thing is that most of Exchange 2007 - and Windows 7 - advanced administration involves scripting in PowerShell. So, it's time to learn yet another scripting language!
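For a flavour of what that looks like, here are a couple of Exchange 2007 Management Shell one-liners - the server and database names are my own placeholders, not anything from the session:

```powershell
# Exchange 2007 Management Shell examples (server/database names hypothetical)

# List the ten largest mailboxes on a given mailbox server
Get-MailboxStatistics -Server "MBX01" |
    Sort-Object TotalItemSize -Descending |
    Select-Object -First 10 DisplayName, TotalItemSize

# Move every mailbox from one database to another
Get-Mailbox -Database "DB1" | Move-Mailbox -TargetDatabase "DB2"
```

The pattern of piping objects from one cmdlet into the next is what makes it worth learning properly, rather than treating it as "yet another" batch language.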
Microsoft TechEd 2008, Day 2, 10:15
First session was about Windows 7. A few new things, but evolutionary not revolutionary (tick that one off!)
One key technology is "DirectAccess" - need to get more information about this but apparently this, when used in conjunction with Windows Server 2008 R2, will allow seamless and secure access to corporate networks without needing a VPN. I have my doubts on what their definition of "secure" is, but will research further while I'm here.
Bitlocker encryption is being extended to removable disks/USB drives, and can be enforced by group policy - you can prevent a user writing to a USB device unless it's BitLocker protected. Someone should tell the UK Government this. Another advantage, of course, is that recovery keys can be backed up to Active Directory for easy recovery in the event of a forgotten password.
They've also made the application controls (allowing only whitelisted applications) more flexible - it could still be a nightmare to implement the first time, but would help prevent users from self-installing apps down the track.
Monday, November 3, 2008
Microsoft TechEd 2008, Day 1, 19:00
Session MGT327 - System Center and the Desktop
This session revolved around desktop management using Microsoft System Center Operations Manager. There wasn't much new information here - new for me, but most of it is information Mat already got from 1E.
There were a few interesting tidbits to watch out for, though. They cited a survey saying users generally report only 10% of application or workstation crashes to the helpdesk, and only half of those (if that) ever get escalated past first level.
One of the features they're promoting is having the Dr Watson / Windows Error Reporting subsystem upload crash reports to a Windows share, where Operations Manager can analyse and report on them. This gives better visibility of where there may be faulty hardware or a buggy device driver, which can then be prioritised for repair or escalated to the vendor.
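Redirecting clients' error reports to an internal server is itself a policy setting. As I understand it (unverified - check the Windows Error Reporting group policy documentation), it boils down to a registry value along these lines:

```powershell
# Assumed policy key and value name for "corporate" Windows Error Reporting -
# treat these as illustrative, not authoritative; the server name is made up
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'CorporateWERServer' -Value 'werserver.example.local'
```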
They spoke a great deal about the system installers and software distribution too, and driver management has been given a lot of attention, both in system upgrades and slipstreaming them into new installs.
Application distribution now supports multicasting, which may make NMC happier. You can also schedule a maintenance period (on a user/group or site level), to better manage application distribution and patching. Wake on LAN is also supported, so updates can be downloaded to desktop machines overnight.
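Wake on LAN itself is simple enough to script: a "magic packet" is just six 0xFF bytes followed by the target's MAC address repeated sixteen times, broadcast over UDP. A minimal sketch (the function name and MAC address are my own examples):

```powershell
# Hypothetical helper: broadcast a Wake-on-LAN magic packet
function Send-WakeOnLan {
    param([string]$MacAddress)   # e.g. "00-1A-2B-3C-4D-5E"
    # Parse the MAC into six bytes (accepts "-" or ":" separators)
    $mac = $MacAddress -split '[:-]' | ForEach-Object { [byte]('0x' + $_) }
    # Magic packet: 6 x 0xFF, then the MAC repeated 16 times
    $packet = [byte[]]((@(0xFF) * 6) + ($mac * 16))
    $udp = New-Object System.Net.Sockets.UdpClient
    $udp.Connect([System.Net.IPAddress]::Broadcast, 9)   # UDP port 9 ("discard")
    [void]$udp.Send($packet, $packet.Length)
    $udp.Close()
}
```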
There's also a bit of work done with Intel's vPro chipset, which can allow SCOM to do remote hardware inventory while a machine is powered down - even to the point of changing BIOS settings or re-flashing the BIOS.
Other conference notes:
1) There are discounts on Microsoft Press books - 30% off, with an additional 5% if you buy 3 or more. So, if there's anything you want me to get, let me know.
2) I've noticed a reasonable number of people using netbooks here, predominantly the Asus EeePCs but a few others. I suppose it makes a lot of sense, given that they're lightweight and have good battery life.
Microsoft TechEd 2008, Day 1, 17:30
Attended session UNC205 - Exchange Online Administration and Management, which was basically an overview of the user management for Exchange online.
Interesting stuff, though it's not clear how well the website scales to a large company. One thing that did come across yet again is that everything is Active Directory centric, and using any of MS's cloud offerings requires allowing them to store an AD replica on their systems.
On the plus side, they do have a single sign-on tool that works with (and auto-configures) Outlook, Live Messenger and Sharepoint.
Oh, and I had a chocolate donut during the break. Is that too much detail? Karoona said that communication is really important....
Microsoft TechEd 2008, Day 1, 15:30 - Keynote speech by Brad Anderson

The keynote focussed on a number of areas, but the major ones were virtualisation, Operations Manager and cloud services.
They made quite a big deal about Hyper-V and Live Migration - mentions of VMware were noticeably absent from that part of the speech! Basically, there's not much in that space that we don't have already.
Upcoming, however, was application virtualisation, which did raise one or two interesting ideas. EUT (well, Mike) is already looking a little at virtualised applications for deployment purposes, but one of the ideas mentioned for forthcoming technology is to run virtualised server applications. The idea is that you can hot-migrate a running application from Windows server to Windows server, between physical and virtual. The key point is that you could migrate the application off, patch and/or reboot the server operating system, then migrate back, which might have interesting implications for server uptimes and SLAs.
System Center Virtual Machine Manager was the next major topic - VMware did rate a mention here, mainly because it can manage both VMware and MS environments, and can manage physical and virtual hosts, something that VirtualCenter can't do. It can also drill down to applications and services (primarily web services), and report availability across multiple servers too. Given the renewed push for Service catalog and application SLAs, this could be a useful reporting tool. But, naturally, you need to put in the grunt work to model the applications and dependencies first!
Cloud Services was the final major thing - going forward, MS is developing all their services such that they can be run locally or in the cloud (Microsoft's Cloud!), and migrated between. One of the demos they did showed moving 5 user mailboxes from Exchange running locally to Exchange Online - including content - with no end user reconfiguration or intervention. Naturally, they didn't talk about security or firewall ports required, but I'm sure the info is available somewhere.
Microsoft TechEd 2008, Day 1, 11:00

So far, so good.
Conference registration went smoothly and, yes, I did get a T-Shirt.
If you're reading this then the conference WiFi network is working. Blogging from my mobile today as there aren't many sessions and I wasn't sure what laptop charging facilities there were. (Not many, and most in use. Note to self - get extended run battery pack before next conference!)
Conference coffee is tolerable, but not great.
The keynote session is at 14:00 from Brad Anderson of MS' services division. I'll write more after that.