Over a month ago Christian Hammond (chipx86)<http://twitter.com/chipx86> released the first technology preview of WSX. This tool enables users to access the console of their shared virtual machines via a web browser without installing any plug-ins or web controls. The service renders an HTML5 web page that can connect to your Workstation hosts or ESXi servers (requires vSphere 5.0), enumerate the available virtual machines, and let you power them on and interact with the desktop.
The July tech preview brings many improvements:
* Improved Home Page
* Improved Server Page
* Big Honkin' Power Button
* Better Touch Input
* Mouse Wheels
* Better Retina Support
* SSL Support
* Easier Installation
* Smarter Defaults
* Performance Tweaks
* Bug Fixes
[image]<http://www.ntpro.nl/blog/uploads/wsx-3.png> [image] <http://www.ntpro.nl/blog/uploads/wsx-1.png> [image] <http://www.ntpro.nl/blog/uploads/wsx-2.png>
If you're interested and want to give it a try, just hop over to the ChipLog website<http://blog.chipx86.com/2012/07/24/vmware-wsx-july-tech-preview-release/> and find the latest news and downloads.
________________________________
Original Page: http://feedproxy.google.com/~r/Ntpronl/~3/RjvHMxI07Dc/2096-Tool-Update-VMware-WSX-July-Tech-Preview-Release.html
Friday, July 27, 2012
Site Recovery Manager survey… please help us out!
I just received an email from the Site Recovery Manager Product Management team. They created a new survey, and I was hoping each of you who is using, or will soon be purchasing, SRM could take the time to complete it. These types of surveys are very useful for Product Management when it comes to setting priorities for new features, identifying gaps, etc. Thanks!
We are conducting a survey about VMware vCenter Site Recovery Manager (SRM) to learn more about how people use our products. The survey will help us identify where we can improve the product to meet your needs and we would really appreciate getting your feedback.
The link to the survey is below, it typically takes less than 10 minutes to complete. http://www.surveymethods.com/EndUser.aspx?ECC8A4BDEDA6B9BAE7
by Duncan Epping
________________________________
Original Page: http://www.yellow-bricks.com/2012/07/27/site-recovery-manager-survey-please-help-us-out/
Tuesday, July 24, 2012
VMware: VMware vSphere Blog: vSphere HA isolation response… which to use when?
A while back I wrote this article about a split brain scenario<http://blogs.vmware.com/vsphere/2012/05/ha-split-brain-which-vm-prevails.html> with vSphere HA. Although we have multiple techniques to mitigate these scenarios, it is always better to prevent them. I had already blogged about this before, but I figured it wouldn't hurt to get this out again and elaborate on it a bit more.
First some basics…
What is an "Isolation Response"?
The isolation response refers to the action that vSphere HA takes when the heartbeat network is isolated. The heartbeat network is usually the management network of an ESXi host. When a host does not receive any heartbeats, it triggers the response after a certain number of seconds. So when exactly? Well, that depends on whether the host is a slave or a master. This is the timeline:
Isolation of a slave
* T0 – Isolation of the host (slave)
* T10s – Slave enters "election state"
* T25s – Slave elects itself as master
* T25s – Slave pings "isolation addresses"
* T30s – Slave declares itself isolated and "triggers" isolation response
Isolation of a master
* T0 – Isolation of the host (master)
* T0 – Master pings "isolation addresses"
* T5s – Master declares itself isolated and "triggers" isolation response
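The two timelines above can be captured in a small sketch. The function names are illustrative (not VMware APIs); the timing constants come straight from the lists above:

```python
# Sketch of the vSphere HA isolation-detection timelines described above.
# Times are seconds after the host stops receiving heartbeats.

def isolation_timeline(role):
    """Return the ordered (time, event) steps for a slave or master host."""
    if role == "slave":
        return [
            (0,  "isolation of the host (slave)"),
            (10, "slave enters election state"),
            (25, "slave elects itself as master"),
            (25, "slave pings isolation addresses"),
            (30, "slave declares itself isolated and triggers isolation response"),
        ]
    elif role == "master":
        return [
            (0, "isolation of the host (master)"),
            (0, "master pings isolation addresses"),
            (5, "master declares itself isolated and triggers isolation response"),
        ]
    raise ValueError("role must be 'slave' or 'master'")

def time_to_response(role):
    """Seconds between losing heartbeats and triggering the isolation response."""
    return isolation_timeline(role)[-1][0]
```

Note the asymmetry this model makes explicit: a slave spends 25 seconds on the election before it can even test the isolation addresses, so it triggers the response at T30s, while a master triggers it at T5s.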
What are my options?
Today there are three options for the isolation response. The response is the action the host takes for the virtual machines running on it once it has validated that it is isolated.
1. Power off – When a network isolation occurs all VMs are powered off. It is a hard stop.
2. Shut down – When a network isolation occurs all VMs running on that host are shut down via VMware Tools. If this is not successful within 5 minutes a "power off" will be executed.
3. Leave powered on – When a network isolation occurs on the host the state of the VMs remains unchanged.
Now that we know what the options are, which one should you use? Well, this depends on your environment. Are you using iSCSI/NAS? Do you have a converged network infrastructure? We've put the most common scenarios in a table.
* Datastore access likely, VM network access likely → Leave Powered On. The VM is running fine, so why power it off?
* Datastore access likely, VM network access unlikely → Either Leave Powered On or Shut Down. Choose Shut Down to allow HA to restart VMs on hosts that are not isolated and hence are likely to have access to storage.
* Datastore access unlikely, VM network access likely → Power Off. Use Power Off to avoid having two instances of the same VM on the VM network.
* Datastore access unlikely, VM network access unlikely → Leave Powered On or Power Off. Leave Powered On if the VM can recover from the network/datastore outage without being restarted; Power Off if it likely can't.
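The recommendations can be encoded as a small decision helper. This simply mirrors the table for illustration; it is not an official VMware algorithm, and the two-input model is a simplification of a real design review:

```python
# Illustrative helper encoding the isolation-response recommendations above.
# Inputs: whether the isolated host will likely retain access to the VM
# datastores, and to the VM network. Output: the recommended policy.

def recommended_isolation_response(datastore_access_likely, vm_network_likely):
    if datastore_access_likely and vm_network_likely:
        return "Leave Powered On"               # VM is running fine, keep it
    if datastore_access_likely and not vm_network_likely:
        return "Leave Powered On or Shut Down"  # Shut Down lets HA restart it elsewhere
    if not datastore_access_likely and vm_network_likely:
        return "Power Off"                      # avoid two instances on the VM network
    return "Leave Powered On or Power Off"      # depends on whether the VM can recover
```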
But why is it important? Well, just imagine you pick "Leave powered on" in a converged network environment with iSCSI storage: chances are fairly high that when the host management network is isolated, so are the virtual machine network and the storage for your virtual machine. In that case, having the virtual machine restarted will reduce the amount of "downtime" from an application/service perspective.
I hope this helps you make the right decision for the vSphere HA isolation response. Although it is just a small part of what vSphere HA does, it is important to understand the impact a wrong decision can have.
________________________________
Original Page: http://blogs.vmware.com/vsphere/2012/07/vsphere-ha-isolation-response-which-to-use-when.html
Arista & VMware Present: Enabling Multi-Tenancy in the Cloud and VM Farm - Eric Sloof
Join the VMware and Arista Chief Technology Officers for an in-depth engineering discussion and educational session on how to design and deploy multi-tenant infrastructures that enable new cloud services and greater customer isolation, and simplify scalable network architectures.
This session will dive into technologies such as:
* VXLAN, NVGRE and STT (Stateless Transport Tunneling – an IETF draft), discussing the advantages and disadvantages of each and when to use them
* Network architectures to support scalable cloud deployments
* How to integrate network virtualization and provisioning into a common framework and service catalog
http://www.aristanetworks.com/en/vmware-july-26-webinar
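To make the first bullet more concrete, here is a minimal sketch of what VXLAN actually adds on the wire: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), placed inside a UDP datagram (destination port 4789) that wraps the original Ethernet frame. This follows the RFC 7348 layout and is for illustration only:

```python
import struct

# VXLAN header per RFC 7348: 1 flags byte (I bit set means the VNI is
# valid), 3 reserved bytes, a 24-bit VNI, and 1 more reserved byte.
VXLAN_FLAGS_VALID_VNI = 0x08

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags in the top byte, rest reserved. Word 2: VNI << 8.
    return struct.pack("!I", VXLAN_FLAGS_VALID_VNI << 24) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5000)  # segment/tenant ID 5000
```

The 24-bit VNI is the core of the multi-tenancy story: it allows roughly 16 million isolated segments versus the 4094 usable VLAN IDs, which is why VXLAN (and the similar NVGRE/STT encapsulations) keep coming up in cloud-scale designs.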
VMware: VMware SMB Blog: SMB Pressures and Challenges Solved with Virtualization and Cloud Initiatives
Guest Post by Mark Bowker, ESG Senior Analyst
I'm always impressed when I see IT organizations that are strapped for resources and operating in firefighting mode take the initiative to turn around and successfully create a positive impact on the business as a whole. I'll admit that when I was in IT, I had a few moments of brilliance, but it was so hard to break away from the daily tasks of simply maintaining the investments we had on hand. Virtualization and cloud computing are certainly poised to enable amazing success for IT organizations that are ready to learn and embrace a better way of computing and delivering IT services.
ESG recently spoke with a few SMB IT organizations and published a white paper<http://www.vmware.com/files/pdf/smb/ESG_Exec_Summary_VMware_SMB.pdf?src=blog> on the current and planned success of virtualization and cloud initiatives, the success they are having with transforming desktop and application delivery, and how cloud is becoming a preferred consumption model. The information we captured was fascinating and ranged from efficiencies in blocking and tackling:
"Having one view is very important—when administrators have to switch over to different management consoles to access systems, that is really hard. Now we can do faster deployments and day-to-day tasks."
—Infrastructure Architect, IT services firm
…to truly changing the way IT thinks:
"I think most small businesses think of virtualization and think of one thing: How can I get more server density and more applications on the same server so I can get more out of it? It has taken us a while to grow and think about virtualization not just in terms of server density, but also about how it creates an abstraction between the application and the hardware. Now we are experimenting to see where else in the company we can gain some benefit from using virtual technologies."
—Vice President of IT and System Operations, financial services firm
While upkeep of the current infrastructure will always be necessary, an increasing percentage of new-project spending driven by IT leaders, as shown in the figure below, indicates an organization in which technology becomes a tool to implement strategy, rather than simply a tactic to keep the status quo intact. This is certainly the case with server virtualization, and it highlights the potential for cloud computing and desktop virtualization in SMBs.
[http://blogs.vmware.com/.a/6a00d8341c328153ef0167689d3cdc970b-600wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0167689d3cdc970b-pi>
So even if you have started to virtualize and explored desktop delivery and cloud consumption models, keep your foot pinned on the accelerator. Possibilities abound and your peers are racing ahead, discovering ways to simplify their responsibilities, amplify IT's value to the core business at hand, and have fun doing it.
- Mark Bowker
ESG Senior Analyst Mark Bowker focuses on all things related to virtualization and cloud computing. Mark researches cloud and virtualization technologies and evaluates the impact the solutions have (or will have) on IT strategy and the broader marketplace. His other research areas include data center management, application workload deployment in next-generation data centers, and the external influences driving adoption of data center technologies. Prior to joining ESG, Mark ran the IT organization for a business consulting and technology services company. A Microsoft Certified Systems Engineer, Mark is experienced in designing, implementing, and expanding network and system infrastructure for global organizations.
blogs.vmware.com <http://blogs.vmware.com/smb/2012/07/esg_smbvirtualizationandcloud-success.html>
________________________________
Original Page: http://blogs.vmware.com/smb/2012/07/esg_smbvirtualizationandcloud-success.html
Monday, July 23, 2012
Understanding VXLAN and the value prop in just 4 minutes…
Understanding VXLAN and the value prop in just 4 minutes…<http://www.yellow-bricks.com/2012/07/23/understanding-vxlan-and-the-value-prop-in-just-4-minutes/>
In this video, VXLAN is explained in clear, understandable language in just four minutes. We need more videos like these: fast and easy to understand.
________________________________
Original Page: http://www.yellow-bricks.com/2012/07/23/understanding-vxlan-and-the-value-prop-in-just-4-minutes/
EMC VNX Inyo: Hello World!
On Monday last week, the VNX Operating Environment code-named "Inyo" was released and made available for download on Powerlink for EMC Partners and Customers. As is customary, there's a slight delay before it becomes the default software installed at the factory (which happens mid-August).
This is a BIG release – with loads of goodies for everyone! I did a preview (including some demos) in the webcast, which is recorded and available here<http://virtualgeek.typepad.com/virtual_geek/2012/06/vnx-engineering-update-and-cx4vaai-vsphere-scoop.html>. While I would recommend reading the release notes, here are my "Top 10" high notes:
1. Mixed RAID in Virtual Pools. This is a material improvement in overall capacity efficiency (about 30-40% for most customers).
2. Virtual Pool rebalancing. This is a huge efficiency AND "ease of use" improvement. The ability to easily and non-disruptively add IOPS in the form of Flash, or capacity in the form of NL-SAS, means that migrations can be avoided, and it's easier than ever to "start small" and grow as needed.
3. Advanced Snaps. I can hear EMC customers and Partners saying "finally". It's a fact that our block snapshots in the VNX family needed work. They now support hundreds of writeable snaps per volume, snaps of snaps, and tens of thousands of snaps in total. They still aren't perfect in my opinion (what is?), but we think we've made a massive improvement. I'm VERY eager for customer feedback. Ditto on the overall Thin Provisioning improvements. Would LOVE to hear customer feedback. We do a lot of testing on this, but inevitably, customers are the truest test.
4. VAAI NFS VM-level (aka file) snapshots, but this time with snaps of snaps – and what VMware calls "fast clone" (i.e. a deferred snap, not the file-level copy which existed in vSphere 5 and Franklin). This can be leveraged by future vSphere releases to accelerate View Linked Clone use cases and also vCloud Director. There are several folks at VMware and EMC playing with this now – more on performance impact later (I've learnt enough from the vSphere 4.1 VAAI experience to wait until we have loads of data before proclaiming victory :-).
5. vSphere API for Storage Awareness (VASA) – built in, and with NFS support. We've had VASA support from the moment of launch, but it depended on the use of solutions enabler which increased solution complexity and also limited VASA support for block only. With Inyo, VASA is built into the platform – simplifying things – and also supports NAS.
6. VAAI XCOPY improvements. I talked about this on this webcast (highly recommended viewing for CX4 and VNX customers) – in Inyo, an optimized code path ("Direct Movement") is used for many, many more XCOPY scenarios.
7. VAAI Thin Provisioning Reclaim improvements. In future vSphere releases (and also in Windows Server 2012) Thin Provisioning UNMAP is used ever more extensively. These internal optimizations make TP reclaim work better.
8. "Flash 1st". While this is perhaps how FAST VP on VNX should have worked out of the gate – it's good to optimize. In Inyo – all IOs land on the top tier first, and migrate down as needed.
9. FAST Cache improvements. Everything we can do to remove "guardrails" around use cases simplifies things for our customers. Inyo contains some code optimizations here.
10. Some future vSphere optimizations around multipathing (I will talk more about this at VMworld at the end of August – hope you've registered!).
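The "Flash 1st" policy in item 8 is easy to picture with a toy model: new slices land on the top (Flash) tier first and spill down to lower tiers as the top tier fills. The tier names, capacities, and function below are made up for illustration; this is not EMC's actual FAST VP algorithm:

```python
# Toy "Flash 1st" placement: allocate each new slice on the highest tier
# with free capacity, falling through to the next tier when full.

def place_slices(slices, tiers):
    """tiers: ordered (name, free_slices) pairs, top tier first.
    Returns a slice -> tier-name mapping."""
    placement = {}
    free = dict(tiers)
    order = [name for name, _ in tiers]
    for s in slices:
        for name in order:          # try the top tier first
            if free[name] > 0:
                placement[s] = name
                free[name] -= 1
                break
        else:
            raise RuntimeError("pool is full")
    return placement

tiers = [("flash", 2), ("sas", 3), ("nl-sas", 10)]
result = place_slices(["a", "b", "c", "d"], tiers)
# the first two slices land on flash; the rest spill to the SAS tier
```

In the real feature, FAST VP then relocates cold slices downward over time, so the Flash tier stays available for the hottest (and newest) data.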
So… What's next? Here's a hint… The VNX OE (used by the family) releases are codenamed after mountain ranges.
The previous release was "Franklin" – the Franklin Mountains are 7,192ft/2,192m high and look like this:
[http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177438aef9e970d-pi]<http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177438aef90970d-pi>
The current release is "Inyo" – the Inyo Mountains are 11,123ft/3,390m high and look like this:
[http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177438aefc7970d-pi]<http://virtualgeek.typepad.com/.a/6a00e552e53bd28833017616a4c231970c-pi>
The next release is codenamed "Rockies" – the Rockies are 14,440ft/4,400m high and look like this:
[http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177438aeff1970d-pi]<http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177438aefd4970d-pi>
So – as big a release as Inyo is relative to Franklin – the Rockies release is much, much bigger :-) More on that later – and also more on Isilon Mavericks which is getting close too….
virtualgeek.typepad.com <http://virtualgeek.typepad.com/virtual_geek/2012/07/emc-vnx-inyo-hello-world.html>
________________________________
Original Page: http://virtualgeek.typepad.com/virtual_geek/2012/07/emc-vnx-inyo-hello-world.html
The New Exchange
By The Exchange Team on July 23, 2012
Last Monday, at an event<http://www.microsoft.com/en-us/news/presskits/office/liveevent.aspx> in San Francisco, Steve Ballmer introduced the new, modern Office to the world. An exciting set of capabilities was showcased, including Windows 8 integration, Office as a subscription, and an enhanced social story.
Exchange is one of the cornerstones of communication and collaboration in Office. Over the past few years, we have seen significant changes in the way people communicate – a multitude of devices, an explosion of information, complex compliance requirements, social networks, and a multi-generational workforce. This world of communication challenges has been accompanied by a major shift towards cloud services.
The Exchange team has been hard at work in building a product and service that helps to address these challenges and better prepare our customers for the future of communications and productivity. We are excited to announce an important milestone on this journey – the preview of the next version of Exchange is now available!
With Exchange 2010, we redesigned the product with low-cost large mailboxes and cloud services in mind. We then extended this vision through Office 365 where tens of thousands of organizations with millions of users have accompanied us on this journey to the cloud. Now, customers can look forward to the new release of Exchange which offers a wide variety of exciting benefits:
* Remain in control, online and on-premises, by tailoring your solution based on your unique needs and ensuring your communications are always available on your terms.
* Keep the organization safe by protecting business communications and sensitive information in order to meet internal and regulatory compliance requirements.
* Increase productivity by helping users manage increasing volumes of communications across multiple devices.
As of last week, the new version of Office, including Exchange and Office 365, has been made available to customers. I would encourage everyone to download the preview version of Exchange Server 2013<http://technet.microsoft.com/en-us/evalcenter/hh973395.aspx?wt.mc_id=TEC_116_1_33> and try out the service preview of Office 365 Enterprise<https://portal.microsoftonline.com/Signup/MainSignup15.aspx?OfferId=D214930B-46C2-4FD2-B7F9-EC134993F34A&dl=ENTERPRISEPACK_B_PILOT&pc=O365-Preview-2012&ali=1>. As with all pre-release versions, please use them for evaluation, not for production.
Here are some of the great benefits you get with the next release of Exchange:
1. Reduced costs by optimizing for next generation of hardware
Exchange can now support disks of up to 8TB by reducing database IOPS by more than 50% and optimizing for multiple databases per volume, which increases aggregate disk utilization while maintaining reasonable database sizes. Ever-growing memory capacity is used to improve search query performance and further reduce IOPS. All this allows you and your end users to have larger mailboxes at lower costs.
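A back-of-the-envelope calculation shows why these two optimizations matter together. All the numbers below are hypothetical, chosen only to illustrate the arithmetic; they are not Microsoft's sizing figures:

```python
# Illustration 1: halving per-mailbox IOPS doubles the mailboxes a disk
# of fixed random-I/O capability can serve.
disk_iops = 60                 # assumed capability of one large NL-SAS disk
old_iops_per_mailbox = 0.10    # assumed pre-optimization profile
new_iops_per_mailbox = old_iops_per_mailbox * 0.5  # ">50% reduction" claim

mailboxes_before = disk_iops / old_iops_per_mailbox
mailboxes_after = disk_iops / new_iops_per_mailbox

# Illustration 2: one 2TB database on an 8TB volume strands 75% of the
# capacity; four databases per volume use it fully, without any single
# database growing beyond a manageable size.
volume_tb, db_tb = 8, 2
utilization_one_db = db_tb / volume_tb
utilization_four_dbs = 4 * db_tb / volume_tb
```

The point is that large, cheap disks are IOPS-poor relative to their capacity, so the IOPS reduction and the multiple-databases-per-volume support are what make 8TB disks practical rather than merely supported.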
2. Significantly reduced operational overhead for high availability
DAG management is simplified via automatic DAG network configuration, enhancements to DAG management cmdlets, support for multiple databases per disk, and enhancements to lagged copies. Auto-recovery capabilities – inherently built into DAGs – are now extended to the rest of Exchange and all protocols. Client-initiated, automatic recovery allows you to reduce recovery time for site failures from hours to under a minute.
3. Decrease the amount of time spent managing your system while maintaining control
Exchange now provides a single, easy-to-use, Web-based administration interface – the Exchange Administration Center (EAC). Role based access control (RBAC) empowers your helpdesk and specialist users to perform specific tasks which are surfaced appropriately in the EAC – without requiring full administrative permissions. This streamlined and intuitive experience helps you manage Exchange efficiently, delegate tasks, and focus on driving your business forward.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/0474.Ex2013Prev1.png]
Figure 1: The Exchange Administration Center (EAC) in Exchange 2013
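As an illustration of the delegation model described above, a management role can be assigned to a security group from the Exchange Management Shell; the group and assignment names below are hypothetical examples, not names from this post:

```shell
# Exchange Management Shell (PowerShell) sketch - names are illustrative.
# Grant members of a "Helpdesk" security group only the rights needed to
# manage recipients, without full administrative permissions:
New-ManagementRoleAssignment -Name "Helpdesk-MailRecipients" `
    -SecurityGroup "Helpdesk" `
    -Role "Mail Recipients"

# Review which role assignments apply to that group:
Get-ManagementRoleAssignment -RoleAssignee "Helpdesk" | Format-Table Name, Role
```

The EAC then surfaces only the tasks those role assignments permit.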
4. Automatically protect Exchange availability from surges in traffic
Exchange now offers easy-to-administer controls to protect against unexpected surges in traffic. System work that is not interactive is automatically deferred to non-peak hours in order to preserve the end-user experience and higher-priority tasks. This improved overall system throughput reduces costs by saving you from planning capacity for those infrequent, unexpected peaks.
5. Cloud on your terms
Exchange provides you tools to move to the cloud on your terms – whether that's onboarding to the cloud overnight or easily managing a hybrid deployment with mailboxes on-premises and online to meet your business needs. Provide your end users with a seamless experience including sharing calendars and scheduling meetings between on-premises and online users and have minimal user disruption when user mailboxes are smoothly moved across environments. Remain in control in the cloud by testing out upcoming enhancements via previews.
6. Automatically protect your email from malware
Exchange now offers built in basic anti-malware protection. Administrators can configure and manage their protection settings right from within the Exchange Administration Center. Integrated reporting provides visibility into emerging trends. This capability can be turned off, replaced, or paired with premium services such as Exchange Online Protection for layered protection.
7. Protect your sensitive data and inform users of internal compliance policies with Data Loss Prevention (DLP) capabilities
Keep your organization safe from users accidentally sharing sensitive information with unauthorized people. The new Exchange DLP features identify, monitor, and protect sensitive data through deep content analysis. Exchange offers built-in DLP policies based on regulatory standards such as PII and PCI, and is extensible to support other policies important to your business. New Policy Tips in Outlook 2013 inform users about policy violations as content is being created and about how information should be handled according to organizational standards.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/3073.Ex2013Prev2_2D00_sml.png]
Figure 2: Protect your sensitive data with Data Loss Prevention (DLP) capabilities
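As a rough sketch of how the built-in policies mentioned above might be enabled from the Exchange Management Shell (the policy name, template name, and mode below are assumptions for illustration; verify the available templates on your own system):

```shell
# Exchange Management Shell sketch - names are illustrative, not from this post.
# List the DLP policy templates that ship with Exchange 2013:
Get-DlpPolicyTemplate | Format-Table Name

# Create a policy from a template in audit mode first, so users see
# Policy Tips while mail flow is not yet blocked:
New-DlpPolicy -Name "Payment Card Policy" -Template "PCI-DSS" -Mode AuditAndNotify
```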
8. Allow compliance officers to run In-Place eDiscovery across Exchange, SharePoint, and Lync – from a single interface
The ability to immutably preserve and discover data across your entire organization is essential to ensuring internal and regulatory compliance. Allow your compliance officers to autonomously use the new eDiscovery Center to identify, hold, and analyze your organization's data from Exchange, SharePoint, and Lync. And, the data always remains in-place, so you never have to manage a separate store. With the eDiscovery Center, you can reduce the cost of managing complex compliance needs, while ensuring you are prepared for the unexpected.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/0207.Ex2013Prev3.png]
Figure 3: Run In-Place eDiscovery across Exchange, SharePoint and Lync from a single interface
9. Allow users to naturally work together – while compliance is applied behind the scenes
Site Mailboxes bring Exchange emails and SharePoint documents together. Like a filing cabinet, they provide a place to file project emails and documents and can only be seen by project members. Document storage, co-authoring, and versioning is provided by SharePoint while messaging is handled by Exchange – with a complete user experience within Outlook 2013. Compliance policies are applied at the site mailbox level and are transparent to the users – thus preserving their productivity.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/2402.Ex2013Prev4_2D00_sml.png]
Figure 4: Site Mailboxes bring Exchange emails and SharePoint documents together
10. Modern public folders provide a great way of managing and storing shared conversations and discussions
Public folders are now available in Exchange Online. Both on-premises and online, public folders provide the same capabilities customers are already familiar with – and more: they now share the same storage, indexing, and high-availability capabilities as regular mailboxes, and public folder content can now be found via end-user search.
11. Give your users an intuitive, gorgeous, touch-optimized experience on all screens
Your end users will get more done from anywhere with a clean and uncluttered experience. Users can now take advantage of the fresh, easy, and intuitive Windows 8 style experience across Outlook and OWA. OWA user experience scales beautifully for any form factor and size – PC or slate or phone – and has a modern user experience voice with great support for touch and motion. OWA now offers three different UI layouts optimized for desktop, slate, and phone browsers.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/7444.Ex2013Prev5.jpg]
Figure 5: An intuitive, gorgeous, touch-optimized experience on all screens
12. Offline support in OWA allows your users to be productive when offline or on intermittently connected networks
You can now launch OWA in the browser and start working even if there is no network connectivity. Your emails and actions are automatically synchronized the next time connectivity is restored. This allows your users to be productive and have a great OWA experience even from remote locations with slow or intermittently connected networks or no network connection at all.
13. Bring all of your contacts together and automatically keep them up-to-date
People's professional networks span many different places. In Office 365, your users can import contact information from LinkedIn (and other networks in the future) so that they have all of their information in one place. Exchange will even find the same person across your personal contacts, GAL, and other networks and consolidate their information into one contact card, avoiding duplication and multiple contact cards with different information.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/2318.Ex2013Prev6.png]
Figure 6: Bring all your contacts together from Exchange's GAL, your personal contacts and other networks
14. Modern people search experience lets you quickly find the right person
People search experience is consistent everywhere – from people hub to nickname cache when composing an email. Search spans across all of your people – personal contacts, GAL, networks. Search results are relevance based and contain rich results – photos, phone number, location, etc.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/3302.Ex2013Prev7.png]
Figure 7: Quickly find the right person across your personal contacts, GAL and networks
15. Updated canvas makes calendar more useful for everyone
Like Outlook, OWA now supports simple entry of reminders and to-do's by typing right on the calendar. Users get quick, glance-able day and item "peeks". New views for day, week, and month – like the "month + agenda" (or "Mogenda") view – makes it really easy to manage your time.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/6433.Ex2013Prev8_2D00_sml.png]
Figure 8: Manage your time easily with the new views for day, week, and month
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/0361.Ex2013Prev9.png]
Figure 9: Calendar item "peek" shows useful information
16. Customize Outlook and OWA easily by integrating apps from the Office marketplace
Help your users be more productive via 3rd party apps for Outlook adding contextual information and functionality to emails and calendar. Apps for Outlook are easy to develop using the new cloud-based extensibility model. The same apps work across Outlook 2013 and OWA – including on OWA's slate and phone optimized layouts. Users and Exchange administrators can easily discover and install apps via the Office marketplace. You can control which apps different end users can use.
[http://blogs.technet.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-postimages/8345.Ex2013Prev10_2D00_sml.jpg]
Figure 10: Customize Outlook and OWA with 3rd party apps from the Office marketplace
This is the first of a series of blog posts which will cover the next release of Exchange. In future posts, we will cover the full set of capabilities, including all of the features mentioned above, in more detail.
To get fully up-to-speed on the next release of Exchange:
* Download the preview version of Exchange Server 2013<http://technet.microsoft.com/en-us/evalcenter/hh973395.aspx?wt.mc_id=TEC_116_1_33>.
* Try the new Exchange Online in the Office 365 Enterprise Preview<https://portal.microsoftonline.com/Signup/MainSignup15.aspx?OfferId=D214930B-46C2-4FD2-B7F9-EC134993F34A&dl=ENTERPRISEPACK_B_PILOT&pc=O365-Preview-2012&ali=1>.
* Follow the Exchange Team Blog<http://blogs.technet.com/b/exchange/>.
* Attend the Microsoft Exchange Conference (MEC)<http://www.mecisback.com/> in September. Go deep on Exchange 2013 with industry experts and the Exchange engineering team. It's been a decade since the last MEC, and we've got some big surprises in store for our community!
As always, we welcome your comments and feedback. We've also gone live with the Exchange Server 2013 Forum<http://social.technet.microsoft.com/Forums/en-US/exchangeserverpreview/threads> and will monitor it regularly to collect your feedback.
Thanks so much for your interest in Exchange, and we hope you find the next version of the product as exciting and innovative as we do. The entire team looks forward to your feedback!
Rajesh Jha
Corporate Vice President
Exchange
TechNet Blogs <http://blogs.technet.com/b/exchange/archive/2012/07/23/the-new-exchange.aspx> | by The Exchange Team on July 23, 2012
________________________________
Original Page: http://blogs.technet.com/b/exchange/archive/2012/07/23/the-new-exchange.aspx
Wednesday, July 18, 2012
VMware: VMware vSphere Blog: ESXi host connected to multiple storage array - is it supported?
The primary aim of this post is to state categorically that VMware supports multiple storage arrays presenting targets and LUNs to a single ESXi host. This statement also includes arrays from multiple vendors. We run with this configuration all the time in our labs, and I know very many of our customers who also have multiple arrays presenting devices to their ESX/ESXi hosts. The issue is that we do not appear to call this out in any of our documentation, although many of our guides and KB articles allude to it.
Some caution must be shown however.
1. If you have an SATP (Storage Array Type Plugin) that is used by multiple arrays on the same ESXi host, great care must be taken if you decide to change the default PSP (Path Selection Policy) for that SATP, as the change will apply to all of those arrays - kb.vmware.com/kb/1017760<http://kb.vmware.com/kb/1017760>
2. Some storage arrays make recommendations on queue depth and other settings. Note that these are typically global settings, so making a change for one array will impact the queue depth to any other arrays presenting LUNs to that ESXi host - kb.vmware.com/kb/1267<http://kb.vmware.com/kb/1267>
3. Another recommendation I would make, and I believe this is in our training materials, is to use single-initiator/single-target zoning when zoning ESXi hosts to FC arrays. This prevents fabric-related events occurring on one array from impacting any other array.
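The SATP/PSP caution in point 1 can be checked from the ESXi shell before making any change; a hedged sketch using esxcli (vSphere 5.x syntax), where the SATP name and device ID below are placeholders for illustration:

```shell
# ESXi 5.x shell sketch - SATP name and device ID are placeholder examples.
# See which SATPs are present and their current default PSPs:
esxcli storage nmp satp list

# Changing the default PSP for an SATP affects EVERY array claimed by that
# SATP on this host (the caution in point 1 above):
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# A narrower alternative: override the PSP for a single device only
# (substitute the real naa. identifier of your LUN):
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_FIXED
```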
With these considerations taken into account, having multiple storage arrays attached to the same ESXi host or hosts is completely supported. I'm going to see if I can get something into our official documentation about this.
________________________________
Original Page: http://blogs.vmware.com/vsphere/2012/07/esxi-host-connected-to-multiple-storage-array-is-it-supported.html
Tuesday, July 17, 2012
VMware: VMware vSphere Blog: Running SRM Server Commands With Specific Credentials
When you run a script on the SRM server during a failover, the credentials under which the script executes are those of the SRM service. For example, if you left the defaults when installing SRM, the SRM service likely runs as "Local System".
In order to change this credential for your script execution, the solution has always pretty much been to change the ID of the SRM service. This has some benefits and some drawbacks. The benefit is you can create a specific account for the service and tightly control its permissions, to avoid scripts running amok and clobbering things if you forget to close a bracket in the script or something like that. The drawback is that your entire SRM service has to run as this ID which can lead to further problems in terms of authentication, privileges to execute other things, and if you do the ID wrong you can accidentally shut down your SRM service.
Alex Fontana, one of our brilliant Solutions Architects at VMware and co-author of a book on Virtualizing Microsoft Tier 1 Applications<http://www.amazon.com/Virtualizing-Microsoft-Applications-VMware-vSphere/dp/0470563605> came up with an excellent solution that avoids these pitfalls.
The solution is to run a command on the SRM server that executes a scheduled task instead of a script. The scheduled task is what then calls the script under its own credentials.
With this mechanism you can leave the credentials of the SRM service alone, and set permissions for each task that runs the script independently, giving you the freedom to run any script you want under any security context you want.
The process is to use schtasks as the command that runs on the SRM server, and have it call the task you have created with its permissions set as the userid you want.
First, create the command you want to execute as a scheduled task, within Server Manager under Configuration -> Task Scheduler:
* Right-click on the Task Scheduler Library and choose "Create Task"
[http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f2196970d-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f2196970d-popup>
* Set the user account you want the task to run as
* Set the "Run with highest privileges" checkmark. This will allow the command to run without requiring approval if you're using an admin level account.
* Please note, if you're using UAC, make sure "User Account Control: Admin Approval Mode for the Built-in Administrator account" is disabled, or change "User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode" to allow elevation without prompting. Or turn off UAC. :)
* Under the "Actions" of the scheduled task browse to the script you want to execute.
[http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f225d970d-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f225d970d-popup>
* Personally I deselect any of the "Conditions" and ensure the "Settings" tab includes "Allow task to be run on demand", but the details are of course specific to your use case.
Ultimately you will end up with a task that looks something like this:
[http://blogs.vmware.com/.a/6a00d8341c328153ef016768942d42970b-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef016768942d42970b-popup>
Next, you create the command to execute on the SRM server.
* At whichever step of your recovery plan you want the command to run, add the step. For example I might want to touch some log files with date stamps or whatever when a particular VM powers on, so I would add a post power on step to that VM.
* Select the type as "Command on SRM server"
* Name it appropriately
* The content will be the execution of the scheduled task which is done by calling schtasks as follows:
schtasks /run /TN ""
It should look similar to the following. Make sure you put in quotes exactly what you named the task you created in the last step, otherwise the task will not be found.
[http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f24a4970d-800wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0177436f24a4970d-popup>
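As a concrete sketch of what goes into that field: the task name "SRM-PostPowerOn-TouchLogs" below is purely hypothetical; substitute exactly the name you gave the task in Task Scheduler. The real invocation runs on the Windows SRM host, so this POSIX-shell snippet only builds and prints the command line so you can check the quoting:

```shell
# Build the schtasks invocation that SRM executes on the Windows host.
# "SRM-PostPowerOn-TouchLogs" is a hypothetical task name; it must match
# the task created in Task Scheduler exactly, and it must be in quotes.
TASK_NAME="SRM-PostPowerOn-TouchLogs"
printf 'schtasks /run /TN "%s"\n' "$TASK_NAME"
```

If the quoted name differs from the task name by even one character, schtasks reports that the task cannot be found, which matches the warning above.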
The net result of this is that your SRM instance can run under any ID that is appropriate for your service, and you can have each script/command run with its own unique ID independently of the SRM service.
Enjoy your scripting, and many thanks to Alex Fontana for sharing his interesting work!
Posted by Ken Werneburg Tech Marketing
Twitter @vmKen
________________________________
Original Page: http://blogs.vmware.com/vsphere/2012/07/srm-scheduled-task-commands.html
Friday, July 13, 2012
Thursday, July 12, 2012
Before You Go: Saskatoon VMUG Meeting
Don't forget the SK VMUG tomorrow :)
This is a friendly reminder that you are registered to attend the Saskatoon VMUG meeting taking place this Friday, July 13, 2012.
Visit the Event Details Page for the full agenda, directions to the event and the most current information available. For questions, please contact memberservices@vmug.com.
VMware: VMware SMB Blog: VMware in the Midmarket – Gartner Magic Quadrant for x86 Server Virtualization Infrastructure
Recently, Gartner Inc. published the 2012 Magic Quadrant for x86 Server Virtualization Infrastructure, positioning VMware in the Leaders Quadrant.*
Today, small and midsize businesses (SMBs) represent one of the fastest-growing segments of our customer base, with the majority of SMBs worldwide choosing VMware as their virtualization provider. In the last three years, we have dramatically increased the number of customers we serve in the SMB segment by introducing solutions and services aimed specifically at the needs of our SMB customers.
And we continue to innovate on behalf of our customers… Less than a year ago, we announced the general availability of VMware vSphere 5, which delivered more than 200 new features and enhancements that help simplify the lives of our customers while delivering quick and tangible value to their organizations. Server virtualization with VMware vSphere provides real-world advantages including reduced costs, increased operational efficiencies, simplified and automated IT management, and enhanced disaster recovery options. These are all the building blocks that help SMBs transform their IT infrastructure and become 'Cloud Ready'.
These advancements provide tangible business benefits to firms like Myron F. Steves and Company<http://www.vmware.com/solutions/company-size/smb/myron-steves.html?src=blog>. At Myron Steves, a Houston-based insurance wholesaler, the IT team has deployed virtualization and cloud solutions from VMware to help ensure that the company's 200 employees can respond to customers should disaster strike.
With its VMware virtualized architecture, Myron Steves can now reliably fail over to backup servers within hours instead of days, and the company has reduced IT costs significantly:
* Eliminated $400,000 in annual costs for third-party disaster recovery service
* Saved $200,000 in yearly payroll costs in the IT department
* Reduced maintenance costs by $150,000 per year
* Deployed 100+ virtual desktops to enable employees to work from anywhere
When asked about the impact VMware has had on their business, Tim Moudry, Associate Director of IT for Myron Steves said, "With VMware vSphere and vCenter Site Recovery Manager, we know we can switch our business over to our backup datacenter anytime – and be up and running within a few hours. And it costs a fraction of what we paid for the third-party disaster recovery service we used before."
These benefits can extend to almost all SMB IT teams. So, rest assured we are not stopping here. Innovation on behalf of our customers – small and large – is what drives us.
Do you have a story about IT transformation leading to business results for your company? Let us know. We would love to profile your story and success.
Brandon Sweeney
Vice President, Mid-Market and Small Business Customer Segment
*Gartner, Inc., Magic Quadrant for x86 Server Virtualization Infrastructure, Thomas J. Bittman, et al, June 11, 2012.
Magic Quadrant for x86 Server Virtualization Infrastructure
[http://blogs.vmware.com/.a/6a00d8341c328153ef0177434c0aac970d-800wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0177434c0aac970d-popup>
About the Magic Quadrant
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
________________________________
Original Page: http://blogs.vmware.com/smb/2012/07/gartner-magic-quadrant.html
VMware Labs presents its latest fling - View Controlled Recompose Script
This script performs a Controlled Recompose of a VMware View Linked Clone Pool of Virtual Desktops. It first identifies a free desktop and recomposes it to create the first Replica Desktop. Note: if this first recomposition fails, the script aborts on the assumption that the creation of the Replica VM also failed.
After the recomposition of the first desktop, the script recomposes a specified number of additional free desktops to create a supply of recomposed systems. These desktops will be available for re-connecting users when the script next recomposes the remaining desktops in the pool, directing View to force logoff active users after the warning period specified in View Manager. An optional extra recompose can be run against the pool as the final step to provide a second attempt to recompose any desktops that may have failed.
During operation the script will abort after a specified number of timed out recompositions in a row (default 3). It will also immediately abort if it detects a View Composer error. The script can be configured to send Email Alerts to notify Administrators of both failed and successful script operations. The script runs by default in interactive mode, prompting for required settings. It can also be run in unattended mode to support scheduled, automated maintenance.
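The abort rule in that last paragraph is easy to picture. The fling itself is a script for View, so the sketch below is only an illustration of the "abort after N timed-out recompositions in a row" counter with made-up result data; none of the variable names come from the fling:

```shell
# Sketch of the consecutive-timeout abort rule. MAX_TIMEOUTS mirrors the
# fling's documented default of 3; the result list here is fake test data.
MAX_TIMEOUTS=3
consecutive=0
for result in ok timeout ok timeout timeout timeout ok; do
  if [ "$result" = "timeout" ]; then
    consecutive=$((consecutive + 1))
    if [ "$consecutive" -ge "$MAX_TIMEOUTS" ]; then
      echo "aborting: $consecutive consecutive timeouts"
      break
    fi
  else
    consecutive=0   # a successful recompose resets the streak
  fi
done
```

Note that a single success anywhere in the run resets the counter, so only an unbroken streak of timeouts triggers the abort.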
________________________________
Original Page: http://feedproxy.google.com/~r/Ntpronl/~3/YKtwD_ZJjNc/2085-VMware-Labs-presents-its-latest-fling-View-Controlled-Recompose-Script.html
Wednesday, July 11, 2012
Free e-learning course - VMware vCenter Operations Manager Fundamentals
This e-learning course<http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=det&id_course=132265> covers how to install and configure vCenter Operations Manager as well as how to use its many robust features.
VMware vCenter Operations Manager is an automated operations management solution that provides integrated performance, capacity, and configuration management for highly virtualized and cloud infrastructure. Deep VMware vSphere integration provides the most comprehensive management of VMware environments. VMware vCenter Operations Manager is purpose-built for VMware administrators to more effectively manage the performance of their VMware environments as they move to the private cloud.
[http://www.ntpro.nl/blog/uploads/vcops.png]
The course<http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=det&id_course=132265> consists of five modules:
1. Technical Overview of vCenter Operations Manager covers the vCenter Operations Manager 5.0 vApp architecture and resource requirements, the vCenter Operations Manager 5.0 vApp installation considerations, and introduces you to the major and minor badges.
2. Installing and Configuring vCenter Operations Manager discusses how to install and configure vCenter Operations Manager.
3. Using the Dashboards and Badges explains the main function of the major and minor badges, how to interpret the badge results, and how to configure thresholds and notifications.
4. Operations and Planning describes how to use the Operations tab and the Planning tab.
5. Working with Smart Alerts and Reports covers how to configure and use smart alerts, how heat maps are used, and how to work with reports.
________________________________
Original Page: http://feedproxy.google.com/~r/Ntpronl/~3/-mtEIaBW-QM/2084-Free-e-learning-course-VMware-vCenter-Operations-Manager-Fundamentals.html
Mike Fegan on vCenter Operations Manager
The demand on today's IT department is daunting. More and more small and midsize businesses (SMBs) are adopting the "do more with less" philosophy. Because of this, IT professionals are finding themselves wearing more hats than ever before. Furthermore, with a smaller IT staff it can become extremely difficult to be proactive, or even to track down an issue that is affecting production.
Virtualization has helped with consolidation of the server infrastructure, but now you may be faced with managing a large number of virtual machines (VMs). So, how are SMBs with a small IT staff expected to handle these new challenges?
Tool Belt With Tangible Benefits
vCenter Operations Manager has the tools to support the under-staffed IT department. Instead of tracking down log files and manually measuring metrics, vCenter Operations Manager does this all for you, allowing you to focus on deploying resources to more proactive, strategic solutions and planning. Have an application performance issue? Identify it immediately with the "Health Badge". Health is measured on a scale of 1-100, with 1 being bad and 100 being good (pretty easy, huh?). You can see from this image that the overall health of our environment is OK. It's not quite 100, so if you dig just a little deeper you can tell that this particular cluster is bound by memory.
[http://blogs.vmware.com/.a/6a00d8341c328153ef01761649dbf5970c-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef01761649dbf5970c-pi>
Concerned about your capacity? From a single view you can quickly determine how long you can continue to run or how many new VMs you can deploy with your current hardware resources before having to add new servers. You can see in the example below that based on current usage we have more than a year before we need to add hardware resources. Additionally, with our current capacity we can add approximately 460 VMs.
[http://blogs.vmware.com/.a/6a00d8341c328153ef0176164ed39d970c-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef0176164ed39d970c-pi>
Now, let's just say your "Time Remaining" is less than a year and "Capacity Remaining" shows that you can only add 5-15 new VMs. What do you do? There's always the option of asking for budget to purchase more hardware, or you can be a hero and look into reclaiming capacity from overprovisioned VMs. As you can see below, we have a lot of reclaimable capacity! We've provisioned way more vCPUs and vRAM than we need for these VMs. With this information at your fingertips you can quickly identify inefficiencies and increase your consolidation ratio, making better use of your existing hardware infrastructure.
[http://blogs.vmware.com/.a/6a00d8341c328153ef01761649e016970c-500wi]<http://blogs.vmware.com/.a/6a00d8341c328153ef01761649e016970c-pi>
Simple Install to Boot!
vCenter Operations Manager is a snap to install. It's delivered as an .OVF template. From your vSphere client, simply choose "Deploy OVF Template"; follow the simple wizard and the analytics server starts analyzing your environment immediately.
In summary, vCenter Operations Manager is the tool that allows smaller IT departments to "do more with less." You're already virtualized; by cost-efficiently adding this tool you can quickly identify operational issues, minimize the time it takes to troubleshoot an issue, plan for the future, significantly increase your consolidation ratio, and free your IT team to focus on more strategic projects and end-user support.
Want to learn more about vCenter Operations Manager?
* vCenter Operations Introduction Video<http://bit.ly/LcTS24>
* VMware vCenter Operations Manager Getting Started Guide<http://bit.ly/OVMSai>
________________________________
Original Page: http://blogs.vmware.com/smb/2012/07/mikefegan_vcenterops.html
Monday, July 9, 2012
Everything You Need to Know About Exchange Backups* - Part 3
In Part 1<http://blogs.technet.com/b/exchange/archive/2012/06/04/everything-you-need-to-know-about-exchange-backups-part-1.aspx> and Part 2<http://blogs.technet.com/b/exchange/archive/2012/06/14/everything-you-need-to-know-about-exchange-backups-part-2.aspx> of this series we looked at the fundamentals of Exchange backups using VSS, and the flow of an active DAG database backup.
In Part 3 we break down how a passive DAG database copy undergoes a full backup. The Exchange Writer responsible for passive copy backups doesn't run in the Information Store service, but rather as part of the MS Exchange Replication service. Among other functions, this service coordinates the backup process between the passive copy node and the active copy server. As with the backup of an active database described in Part 2, this post describes the backup of a passive database copy of DB1, hosted on server ADA-MBX1. The active mounted database copy is on ADA-MBX2, and again, a non-persistent copy-on-write (COW) snapshot is utilized by the backup solution:
(please click thumbnails for full size version of graphics in this post)
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/2350.image_5F00_thumb_5F00_40949099.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0537.image_5F00_416CF683.png>
The first steps to back up a passive database copy are about the same as for an active one. The backup application gets the metadata for DB1 from the Exchange Writer, but again, the writer is running in the MS Exchange Replication Service. A new writer instance GUID is generated which will persist throughout the job, as with an active database backup.
Event 2021 indicates that the backup application, or VSS requestor, has engaged the Exchange Writer. It will appear numerous times throughout the backup as different components are read from metadata, such as log and database file locations.
Events 2110 and 2023 indicate that the backup application has requested a particular set of components to back up, and the backup type.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/1680.image_5F00_thumb_5F00_7EADAB4A.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/2262.image_5F00_71B3CB39.png>
The replication service for the passive copy's server signals the active copy server that a backup is in progress. Events 910 and 210 on the active copy server, as well as 960 on the passive copy server, signify two things: first, they establish which server is backing up a passive copy of the database; second, the STORE service on the active copy server has marked the database with "backup in progress" in memory and acknowledges that the surrogate backup will proceed. Once this occurs it is not possible to back up the database again until either the current surrogate backup completes, or the "backup in progress" status is otherwise cleared.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/3817.image_5F00_thumb_5F00_20B60711.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/5305.image_5F00_16D115A6.png>
Events 2025 and 2027 are generated when the replication writer prevents the replication service from writing logs copied from the active copy server to the local disk. Replay of logs also stops, thereby keeping the contents of the database files unchanged. At this point writes of data for the database getting backed up are "frozen". VSS can now create the snapshots in shadow storage for each disk specified in the metadata.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/8117.image_5F00_thumb_5F00_65822B45.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/1261.image_5F00_14B416DD.png>
VSS creates snapshots of disks D: and E:. Once these complete it signals the Exchange Writer, which in turn allows the replication service to resume log copy and replay. Events 2029 and 2035 are generated when the "thaw" is completed and normal disk writes are allowed to continue.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/4478.image_5F00_thumb_5F00_63652C7C.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0184.image_5F00_2B92E859.png>
Once the snapshots are created the backup application can copy blocks of data through VSS, which transfers blocks of data from shadow storage if they've been preserved due to a change, or from the actual disk volume if they haven't. The replication service writer waits for the signal that the transfer of data is complete. This flow of data is represented by the purple arrows, which in this case indicates data getting copied out of the snapshots in storage, through I/O of the Exchange server, and on to the backup server.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/8508.image_5F00_thumb_5F00_01633A71.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0675.image_5F00_107A194B.png>
When the files necessary for backing up DB1 are safely copied to backup media, the backup application signals VSS that the job is complete. VSS in turn signals the replication writer, and Exchange generates events 963 and 2046 on the passive copy server. The replication service then signals the Information Store service on the active copy server that the job is done, and that log truncation can proceed if all necessary conditions are met. The active copy node generates events 913 and 213 signaling that the surrogate backup is done, and that the database header will be updated with the date and time of the backup.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/5554.image_5F00_thumb_5F00_664A6B62.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0184.image_5F00_678F0441.png>
Events 2033 and 2037 signal the end of the backup. The active copy node flushes and rolls the current transaction log containing database header updates. That log is then shipped and made eligible for replay according to schedule so that the passive database copy is marked with the new header information at the earliest available time. Log truncation also proceeds if possible. In this case the snapshots are destroyed, and normal operations continue.
For more on the subject of this series here are some more great references:
Volume Shadow Copy Service
http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
Exchange VSS Writers
http://msdn.microsoft.com/en-us/library/bb204080.aspx
Overview of Processing a Backup Under VSS
http://msdn.microsoft.com/en-us/library/aa384589(VS.85).aspx
Backup Sequence Diagrams
http://msdn.microsoft.com/en-us/library/aa579076(v=exchg.140)
Troubleshooting the Volume Shadow Copy Service
http://technet.microsoft.com/en-us/library/ff597980(EXCHG.80).aspx
Jesse Tedoff
TechNet Blogs [cid:/images/orig-link.png] <http://blogs.technet.com/b/exchange/archive/2012/07/09/everything-you-need-to-know-about-exchange-backups-part-3.aspx> |by The Exchange Team on July 9, 2012
◆
________________________________
Original Page: http://blogs.technet.com/b/exchange/archive/2012/07/09/everything-you-need-to-know-about-exchange-backups-part-3.aspx
In Part 3 we break down how a passive DAG database copy undergoes a full backup. The Exchange Writer responsible for passive copy backups doesn't run in the Information Store Service, but rather as part of the MS Exchange Replication Service. Among other functions, this service coordinates the backup process between the passive copy node and the active copy server. Similar to the backup of an active database described in Part 2, this post describes the backup of a passive database copy of DB1, hosted on server ADA-MBX1. The active mounted database copy is on ADA-MBX2, and again, a non-persistent copy-on-write (COW) snapshot is utilized by the backup solution:
(please click thumbnails for full size version of graphics in this post)
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/2350.image_5F00_thumb_5F00_40949099.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0537.image_5F00_416CF683.png>
The first steps to back up a passive database copy are about the same as for an active one. The backup application gets the metadata for DB1 from the Exchange Writer, but again, the writer is running in the MS Exchange Replication Service. A new writer instance GUID is generated which will persist throughout the job, as with an active database backup.
Event 2021 indicates that the backup application, or VSS requestor, has engaged the Exchange Writer. It will appear numerous times throughout the backup as different components are read from metadata, such as log and database file locations.
Events 2110 and 2023 indicate that the backup application has requested a particular set of components to back up, and the backup type.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/1680.image_5F00_thumb_5F00_7EADAB4A.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/2262.image_5F00_71B3CB39.png>
The replication service on the passive copy's server signals the active copy server that a backup is in progress. Events 910 and 210 on the active copy server, as well as 960 on the passive copy server, signify two things: first, they establish which server is backing up a passive copy of the database; second, the STORE service on the active copy server has marked the database with "backup in progress" in memory and acknowledges that the surrogate backup will proceed. Once this occurs it is not possible to back up the database again until either the current surrogate backup completes, or the "backup in progress" status is otherwise cleared.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/3817.image_5F00_thumb_5F00_20B60711.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/5305.image_5F00_16D115A6.png>
Events 2025 and 2027 are generated when the replication writer prevents the replication service from writing logs copied from the active copy server to the local disk. Replay of logs also stops, thereby keeping the contents of the database files unchanged. At this point writes of data for the database getting backed up are "frozen". VSS can now create the snapshots in shadow storage for each disk specified in the metadata.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/8117.image_5F00_thumb_5F00_65822B45.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/1261.image_5F00_14B416DD.png>
VSS creates snapshots of disks D: and E:. Once these complete it signals the Exchange Writer, which in turn allows the replication service to resume log copy and replay. Events 2029 and 2035 are generated when the "thaw" is completed and normal disk writes are allowed to continue.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/4478.image_5F00_thumb_5F00_63652C7C.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0184.image_5F00_2B92E859.png>
Once the snapshots are created the backup application can copy data through VSS, which transfers blocks from shadow storage if they've been preserved due to a change, or from the actual disk volume if they haven't. The replication service writer waits for the signal that the transfer of data is complete. This flow of data is represented by the purple arrows, which in this case indicate data being copied out of the snapshots in storage, through the I/O of the Exchange server, and on to the backup server.
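That "shadow storage if changed, live volume if not" read path is the essence of a copy-on-write snapshot. A minimal sketch (illustrative Python, not the actual VSS implementation):

```python
def read_block(lba, shadow_store, live_volume):
    # Copy-on-write read path: if the original block was preserved in
    # shadow storage (because it changed after the freeze), serve the
    # frozen copy from there; otherwise the live volume still holds the
    # contents as of the snapshot, so read it directly.
    return shadow_store.get(lba, live_volume[lba])

# Block 1 changed after the snapshot, so its pre-change copy sits in
# shadow storage; block 2 never changed.
shadow = {1: b"original"}
live = {1: b"modified", 2: b"unchanged"}
print(read_block(1, shadow, live))  # b'original'
print(read_block(2, shadow, live))  # b'unchanged'
```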
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/8508.image_5F00_thumb_5F00_01633A71.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0675.image_5F00_107A194B.png>
When the files necessary for backing up DB1 are safely copied to backup media, the backup application signals VSS that the job is complete. VSS in turn signals the replication writer, and Exchange generates events 963 and 2046 on the passive copy server. The replication service then signals the Information Store service on the active copy server that the job is done, and that log truncation can proceed if all necessary conditions are met. The active copy node generates events 913 and 213 signaling that the surrogate backup is done, and that the database header will be updated with the date and time of the backup.
[http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/5554.image_5F00_thumb_5F00_664A6B62.png]<http://blogs.technet.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-31-06-metablogapi/0184.image_5F00_678F0441.png>
Events 2033 and 2037 signal the end of the backup. The active copy node flushes and rolls the current transaction log containing database header updates. That log is then shipped and made eligible for replay according to schedule so that the passive database copy is marked with the new header information at the earliest available time. Log truncation also proceeds if possible. In this case the snapshots are destroyed, and normal operations continue.
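Pulling the whole sequence together, the ordering constraints are the important part: the freeze must happen before the snapshot, and the thaw before the block copy. The toy timeline below uses the event IDs from this post, but the function and labels are purely illustrative, not real Exchange or VSS APIs:

```python
def surrogate_backup():
    # Each entry: (component, step, related event IDs from this post).
    timeline = [
        ("active",    "backup in progress",            (910, 210)),
        ("passive",   "surrogate backup acknowledged", (960,)),
        ("passive",   "freeze log copy/replay",        (2025, 2027)),
        ("vss",       "snapshot D: and E:",            ()),
        ("passive",   "thaw log copy/replay",          (2029, 2035)),
        ("requestor", "copy blocks via VSS",           ()),
        ("passive",   "backup complete",               (963, 2046)),
        ("active",    "header update / truncation",    (913, 213)),
        ("passive",   "writer done, snapshots gone",   (2033, 2037)),
    ]
    return timeline

steps = [step for _, step, _ in surrogate_backup()]
# The freeze must precede the snapshot, and the thaw must precede the copy.
assert steps.index("freeze log copy/replay") < steps.index("snapshot D: and E:")
assert steps.index("thaw log copy/replay") < steps.index("copy blocks via VSS")
```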
For more on the subject of this series here are some more great references:
Volume Shadow Copy Service
http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
Exchange VSS Writers
http://msdn.microsoft.com/en-us/library/bb204080.aspx
Overview of Processing a Backup Under VSS
http://msdn.microsoft.com/en-us/library/aa384589(VS.85).aspx
Backup Sequence Diagrams
http://msdn.microsoft.com/en-us/library/aa579076(v=exchg.140)
Troubleshooting the Volume Shadow Copy Service
http://technet.microsoft.com/en-us/library/ff597980(EXCHG.80).aspx
Jesse Tedoff
TechNet Blogs <http://blogs.technet.com/b/exchange/archive/2012/07/09/everything-you-need-to-know-about-exchange-backups-part-3.aspx> | by The Exchange Team on July 9, 2012
◆
________________________________
Original Page: http://blogs.technet.com/b/exchange/archive/2012/07/09/everything-you-need-to-know-about-exchange-backups-part-3.aspx
Saturday, July 7, 2012
RTFM Education » Blog Archive » Windows XP, IDE and vSphere5
One of the slightly more irritating things about running Windows XP on vSphere is the default virtual disk controller type. If you create a clean/new instance of Windows XP, it will default to using an IDE controller. If you use the "Typical" wizard when creating a VM you don't even see this – the option to select the controller is hidden… If you use the "Custom" option you will see the default is IDE. This happens despite being asked to select a SCSI controller type in previous dialog boxes. So watch out for Mr Next, Next, Next, looking out your office window while hitting the [ENTER] key…
[http://www.mikelaverick.com/wp-content/uploads/2012/03/Screen-Shot-2012-03-22-at-12.44.17.png]<http://www.mikelaverick.com/wp-content/uploads/2012/03/Screen-Shot-2012-03-22-at-12.44.17.png>
Note: Many thanks to Brian Dewar<https://twitter.com/#!/BrianDewar> for helping on the screen grab front, and double-checking this against vSphere4 (which I no longer run) and vSphere5 which I do.
The trouble is I don't feel IDE is a good choice in a production environment for two reasons. Firstly, you cannot increase the size of an IDE virtual disk from anywhere within the GUI (that's true if the VM is created in vSphere4 and then gets moved into a vSphere5 environment). I know this from hard experience. Back in the day when there was no such thing as a thin-provisioned disk, and I lacked large amounts of physical disk space, I got in the habit of making my VMDKs quite small. Also, before VAAI capabilities were introduced it helped with the cloning process. So you guessed it: my Windows XP SP2 VM had insufficient disk space to take SP3 (which is a requirement for the View Agent in View 5.1).
Secondly, performance is sub-par when you use IDE disks inside Windows XP – and with Windows XP used in a VDI environment, storage performance is one of the major scalability issues. This was brought to my attention by some testing<http://www.vmdamentals.com/?p=1060> done by Erik Zandboer<https://twitter.com/#!/erikzandboer> on his vmdamentals.com<http://www.vmdamentals.com/?p=1060> website. It took me back to the days when this debate used to come up on my courses – I always recommended using LSILogic consistently for all systems (including Windows 2000, which incidentally defaults to BusLogic).
VMware has a KB article<http://kb.vmware.com/kb/1016192> (KB1016192) which outlines some of the limitations surrounding IDE, and a guide on converting an IDE drive to SCSI. Personally, I just ended up blatting my IDE Windows XP and starting again from scratch – it wasn't a complicated build, just a base install used for doing "Captures & Builds" for ThinApps.
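For reference, the conversion the KB describes boils down to re-pointing the disk at a SCSI controller in the VM's .vmx file and fixing the adapter type in the VMDK descriptor. The snippet below is a sketch from memory (the file name is a placeholder), so verify the exact keys against KB1016192 before touching a production VM:

```
# .vmx – detach the disk from IDE and attach it to an LSI Logic SCSI controller
ide0:0.present = "FALSE"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "WindowsXP.vmdk"

# VMDK descriptor – the adapter type must match the new controller
ddb.adapterType = "lsilogic"
```

Remember the guest still needs the LSI Logic driver installed before the switch, or XP will blue-screen at boot because it can't see its system disk.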
So why did VMware introduce this IDE option when it performs so poorly? Well, I guess one of the problems folks had with virtualizing Windows XP back in the ESX 2.x days was the fact that Windows XP didn't have either the BusLogic or the LSILogic drivers built in to the XP media. Folks like me had to hunt down the drivers from LSI Logic's site, get them into a floppy disk image and then use F6 during the boot from the Windows XP CD to provide them during the install routine.
So this whole story was a trip down memory lane to 2003, when we had to do this crazy kind of stuff just to get an OS loaded – on Twitter I called this "jumping into my TARDIS". I'm figuring that this is why VMware changed the default for Windows XP in later versions of vSphere. Personally, I think this is a bad decision. They should have stuck with the less friendly approach – after all, it's a one-off PITA compared to potentially creating hundreds of Windows XP instances for a VDI project on IDE.
I had an amusing discussion with Erik Zandboer who put me on to the performance issue with IDE. We were joking about how we would explain this to customers. He came up with:
Q. Can we use IDE?
A. Don't
Q. Can you elaborate?
A. Yes… Don't Ever?
My take was this:
First rule of IDE, never use IDE
Second Rule of IDE, NEVER use IDE
Third Rule of IDE, NEVER USE IDE….
Of course, all this is rather moot really. Windows XP is dead, isn't it? At least in a VDI context. Aren't we all meant to be using "Surface" devices by Tuesday of next week? Erm, I think not…
________________________________
Original Page: http://www.rtfm-ed.co.uk/2012/07/06/windows-xp-ide-and-vsphere5/
vSphere on NFS Design Considerations Presentation - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers
This presentation is one that I gave at the New Mexico, New York City, and Seattle VMUG conferences (this specific deck came from the Seattle conference, as you can tell by the Twitter handle on the first slide). The topic is design considerations for running vSphere on NFS. This isn't an attempt to bash NFS, but rather to educate users on the things to avoid if you're going to build a rock-solid NFS infrastructure for your VMware vSphere environment. I hope that someone finds it useful.
Your questions, thoughts, corrections, or clarifications (always courteous, please!) are welcome in the comments below.
Original Post:
http://blog.scottlowe.org/2012/07/03/vsphere-on-nfs-design-considerations-presentation/
Wednesday, July 4, 2012
Raspberry Pi Thin Client for VMware View 5 - Eric Sloof
The Raspberry Pi is a $25 credit-card sized computer that plugs into your TV and a keyboard. It's a capable little PC which can be used for many of the things that your desktop PC does, like spreadsheets, word processing and games. It also plays high-definition video. The VMware View Client 5.0 now works on the Raspberry Pi; SSL security options work, as does authentication via RSA. The PCoIP protocol doesn't seem to work at the moment, just RDP.
Original Page: http://feedproxy.google.com/~r/Ntpronl/~3/zt8cnQLHcEg/2081-Raspberry-Pi-Thin-Client-for-VMware-View-5.html
Monday, July 2, 2012
VMware Labs presents its latest fling - Guest Reclaim
Guest Reclaim reclaims dead space from NTFS volumes hosted on a thin-provisioned SCSI disk. The tool can also reclaim space from full disks and partitions, thereby wiping the file systems on them. As the tool deals with active data, please take all precautionary measures: understand the SCSI UNMAP framework and back up important data.
Features
* Reclaim space from Simple FAT/NTFS volumes
* Works on WindowsXP to Windows7
* Can reclaim space from flat partitions and flat disks
* Can work in virtual as well as physical machines
What is Dead Space Reclamation?
Deleting files frees up space on the file system volume. This freed space sticks with the LUN/Disk, until it is released and reclaimed by the underlying storage layer. Free space reclamation allows the lower level storage layer (for example a storage array, or any hypervisor) to repurpose the freed space for some other storage allocation request.
For example:
* A storage array that supports thin provisioning can repurpose the reclaimed space to satisfy allocation requests for some other thin provisioned LUN within the same array.
* A hypervisor file system can repurpose the reclaimed freed space from one virtual disk for satisfying the allocation needs of some other virtual disk within the same data store.
* GuestReclaim allows transparent reclamation of dead space from NTFS volumes.
http://labs.vmware.com/flings/guest-reclaim
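The mechanics are easier to see in a toy model. The sketch below (illustrative Python, not GuestReclaim's actual implementation) shows why deleting files in the guest doesn't shrink a thin disk by itself: the file system frees the blocks in its own metadata, but the backing store only shrinks once something explicitly unmaps them:

```python
class ThinDisk:
    """Toy thin-provisioned disk: only written LBAs consume backing store."""
    def __init__(self):
        self.backing = {}

    def write(self, lba, data):
        self.backing[lba] = data

    def unmap(self, lbas):
        # SCSI UNMAP: the storage layer releases the backing store for
        # these LBAs so it can be repurposed elsewhere.
        for lba in lbas:
            self.backing.pop(lba, None)

    def consumed(self):
        return len(self.backing)


class GuestFS:
    """Toy guest file system sitting on top of the thin disk."""
    def __init__(self, disk):
        self.disk = disk
        self.files = {}

    def create(self, name, lbas):
        self.files[name] = lbas
        for lba in lbas:
            self.disk.write(lba, b"x")

    def delete(self, name):
        # Deletion only marks the blocks free in file-system metadata;
        # the layer below has no idea they are dead.
        return self.files.pop(name)

    def reclaim(self, lbas):
        # Conceptually what GuestReclaim does: tell the disk which
        # file-system-free blocks it may release.
        self.disk.unmap(lbas)


disk = ThinDisk()
fs = GuestFS(disk)
fs.create("a.txt", [0, 1, 2, 3])
fs.create("b.txt", [4, 5])
print(disk.consumed())   # 6: both files consume backing store
freed = fs.delete("a.txt")
print(disk.consumed())   # still 6: deletion alone reclaims nothing
fs.reclaim(freed)
print(disk.consumed())   # 2: only b.txt's blocks remain allocated
```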