Wednesday, December 16, 2009

Exchange 2010 – Enterprise Client Access Licenses

Some customers have asked what an Enterprise CAL in Exchange 2010 grants you compared to the Standard CAL. It is important to know that Exchange CALs are additive (this was also true in Exchange 2007), so an Enterprise CAL is not a "covers all" - you need both the Standard and the Enterprise CAL.

The most complete licensing comparison on Exchange 2010 is here:

And from the CAL chart there, we can see the detailed parts that are granted with an Enterprise CAL.

So let's detail these.

Advanced ActiveSync Policies
Within Organization Configuration, Client Access, Exchange ActiveSync Mailbox Policies, any change from the defaults on the Device, Device Applications, or Other tabs requires an Enterprise CAL.

You can see in these screenshots that pretty much anywhere Enterprise CALs are being used, there is an icon and a reminder.

Premium Journaling

If you have ever used an archiving product, you have probably used standard journaling. This is where every email sent or received by a mailbox on a particular database is also copied to a single mailbox. Typically, the third-party archive product then picked up those emails and wrote them elsewhere. Premium journaling is under Organization Configuration, Hub Transport, Journal Rules. When you go to create a new journal rule, you see the same Enterprise CAL notification.
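As a sketch of how a premium journal rule could also be created from the Exchange Management Shell (the group and mailbox addresses here are hypothetical), something like this should work:

```powershell
# Journal all mail sent to or from a (hypothetical) Executives group
# into a dedicated journaling mailbox. Scoped, per-recipient rules like
# this are premium journaling; standard per-database journaling is not.
New-JournalRule -Name "Executive Journaling" `
    -Recipient executives@domain.local `
    -JournalEmailAddress "journal@domain.local" `
    -Scope Global -Enabled $true
```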

Unified Messaging

If you enable UM for a user, you need an Enterprise CAL.
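For reference, enabling UM is typically done from the shell like this (the user, policy name, and extension below are hypothetical):

```powershell
# Enabling a user for Unified Messaging - this is the action that
# consumes an Enterprise CAL for that user.
Enable-UMMailbox -Identity "chris" `
    -UMMailboxPolicy "Default Policy" `
    -Extensions 12345
```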

Retention Policies

There are two different Managed Folder types: Default and Custom. Default folders are your Calendar, Contacts, Inbox, Drafts, Sent Items, Tasks, etc. Custom is anything you want to create and deploy to your users beyond those. When you create a new Custom folder policy, you see the Enterprise CAL notification.
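A quick sketch of creating a custom managed folder from the shell (the folder names are hypothetical) - it's creating custom folders, not the defaults, that triggers the Enterprise CAL requirement:

```powershell
# Create a custom managed folder that can later be deployed to
# mailboxes via a managed folder mailbox policy.
New-ManagedFolder -Name "LegalHold" -FolderName "Legal Hold"
```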

Integrated Archive

Integrated Archive mailbox is new for Exchange 2010. When you attempt to enable archive for a mailbox, you get the Enterprise CAL notification shown below.
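Enabling the archive is a one-liner in the shell (mailbox name hypothetical):

```powershell
# Add the personal (integrated) archive to an existing mailbox -
# this is the point where the Enterprise CAL is required.
Enable-Mailbox -Identity "chris" -Archive
```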

Multi-mailbox search and legal hold

This is the Discovery Management role within RBAC (Role Based Access Control), which can be controlled via the ECP (Exchange Control Panel). This one does NOT give an Enterprise CAL notification when you add a user to the role group.
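Since there is no notification in the GUI, it's worth knowing the shell equivalent for granting this role (the user name is hypothetical):

```powershell
# Add a user to the Discovery Management role group so they can run
# multi-mailbox searches - note no Enterprise CAL warning is shown here.
Add-RoleGroupMember "Discovery Management" -Member "chris"
```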

IPC, Transport Decryption, etc

This is a multifaceted one that is also not highlighted as requiring an Enterprise CAL when you configure it. I will defer to TechNet for descriptions of each of these features, as they are neatly collected here:

Wednesday, November 18, 2009

Data Protection Manager: A look at 2007 and 2010 Beta

In Microsoft's world, where every service a business could ever need is run by a Microsoft application, storage is dirt cheap, all your Servers run modern operating systems, and the only files users lose are recoverable with VSS "Previous Versions," DPM is a great fit. You could keep a month of file, Exchange, and SQL backups, replicate them to a second DPM server offsite, and call your DR plan complete.

However, in today's world our infrastructures are often more complicated. We might have virtual machines running on (gasp!) VMware, old Windows servers, non-Windows servers, and budget limitations. But we'll get into that later...

If you are interested in DPM as a backup solution, there are some definite major improvements in DPM 2010 over DPM 2007. This list is not comprehensive, but a selection of the features with the biggest improvements.

Auto-heal features
In DPM 2007, anytime a job failed, an alert was logged in the management console and no more jobs would run until the error was resolved - usually manually. This was extremely time consuming. Consistency checks and recovery points would often fail because the partition DPM allocated for that server's backup filled up, and the partition had to be manually resized by "modifying disk allocation" in DPM. Now there are auto-heal features, so DPM can re-run failed consistency checks and recovery points automatically, as well as automatically resize partitions as they fill.

Continue on Failure
In prior DPM versions if a backup came across a file it could not backup for any reason, such as permission denied errors or a corrupt file, the entire backup would fail and you would have no recoverable data until the issue with that single file was resolved. DPM 2010 will now continue to back up other files, skipping the one with the problem, and log those files that are skipped at the end of the backup.

Backup Engine Failure Auto-restart
In earlier DPM versions the backup agent service would periodically crash on some servers. DPM would then fail the backup and raise a critical alert in the management console. Now DPM will try and restart the backup service and the job itself before logging errors. Another administrative timesaver.

End User Recovery for SQL
As an add-on to regular end user recovery for files, DPM 2010 Beta now touts a similar model for SQL databases, which will allow DPM administrators to give certain users access to restore SQL databases themselves. This user role can be added through powershell.

Exchange 2010
Aside from Windows Server Backup, DPM is the first backup product on the market to support Exchange 2010 out of the box.

Do I have you excited yet? Now onto the limitations:

  • No compression or deduplication of backups on disk, which causes the need for large amounts of disk space.
  • No long-term disk storage; long-term protection must be to tape. If you have any compliance regulations to adhere to, you can forget a disk-only backup solution with DPM. Because DPM is based on VSS, which has a limit of 64 shadow copies, you cannot store more than 64 backups per volume. If you need to keep more than that, you have to back up to tape. Note that this limit does not apply to application data like SQL and Exchange.
  • Lack of encryption options. DPM 2007 allowed encryption of data at rest on disk only through the Windows Encrypting File System (EFS), with little documentation on it. DPM 2010 Beta documentation doesn't mention encryption at all, but I imagine it hasn't changed. Tape encryption is certificate-based only. You had better back up your certificates!
  • No support for legacy operating systems. DPM 2010 can only back up Windows 2003 and later.
  • No support for non-Windows operating systems
  • Virtualization support. DPM offers some excellent features such as granular VM file restores from .VHD snapshots, but only if you are running Hyper-V. There is no snapshot support for VMware or other virtualization technologies.
  • Clunky use of Windows disk management for backup sizing. When you set up backups for the first time, DPM estimates for you how much space you will need based on your retention and the size of the files you select for backup. Then it allocates this space from the DPM disk pool by creating a disk partition based on the estimated size. A separate partition is created for each volume you want to back up. So if one server has 4 drives, C:, D:, E:, and F:, there will be 4 partitions created. When a partition is filled, it has to be resized before more backups can take place for that protected volume. Here is a view of disk management on a DPM server. Note that there is a partition for every volume. Messy, to say the least:

  • Manual failover switching. You have to manually fail over each volume you are backing up in the case that your primary DPM server fails and you need to switch to the secondary. And there is no multi-selecting volumes, so make sure you add this time into your DR plan if you have a lot of volumes to protect.

The reliability enhancements in DPM 2010 now make DPM a worthy purchase for a small shop with a limited budget for software and a small amount of data. It's hard to argue with the low pricing of DPM licensing.

But if you have a lot of data, the storage costs quickly get out of hand. DPM estimated 221 GB of storage space to keep 104 GB of data backed up for 30 days, and chances are the actual storage space ends up being more than that. If you choose to use DPM, make sure your storage model is scalable.

Tuesday, November 17, 2009

Exchange 2010, Outlook Mobile 6.1 and Text (SMS) Messaging

One of the new Client Access role features of Exchange 2010 is SMS messaging. The first thing to know about this... Exchange did not learn to speak SMS. Exchange doesn't dial a modem. Exchange doesn't do SMS, per se. Exchange does do ActiveSync, and the ActiveSync and Windows Mobile teams made this possible. ActiveSync actually sends/reads/synchronizes text messages to your phone. So when a text is sent, it's sent from your phone because ActiveSync told it to!

First, let's talk environment: Exchange 2010 RTM on Windows 2008 R2, with Mailbox and CAS both 2010. The mobile device is running Windows Mobile 6.1 - this feature requires a Windows Mobile 6.1 or better device. No iPhones or BlackBerry devices have this functionality.

Install Outlook Mobile 6.1 on your WM 6.1+ device - Download from Microsoft at:

Thanks to Mike here for this link:

Configure ActiveSync to your Exchange 2010 CAS server(s), and the next time you go into text messages, your device will prompt you asking if you want to sync texts with Outlook. When you accept this, you will get an email like this one:

The link for this is:

When you log into OWA (or Outlook 2010 when available) you can send texts to contacts from OWA:

Exchange uses Activesync to instruct your device to text on your behalf.

When a reply is received on your phone, the next ActiveSync sync (aka, when you get an email) will pull that text into your inbox:

Users can disable/turn off/edit this feature in OWA options:

Of course, this can be disabled entirely for all users of a CAS server using:

Get-OwaVirtualDirectory -Server SERVER | Set-OwaVirtualDirectory -TextMessagingEnabled:$false

Or this can be disabled per user using new Exchange 2010 OWA Mailbox Policies!
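As a sketch of the per-user approach (the policy name here is hypothetical): create an OWA mailbox policy with text messaging turned off, then assign it to the user.

```powershell
# Create a policy with text messaging disabled...
New-OwaMailboxPolicy -Name "NoSMS"
Set-OwaMailboxPolicy -Identity "NoSMS" -TextMessagingEnabled $false

# ...and assign it to an individual mailbox.
Set-CASMailbox -Identity "chris" -OwaMailboxPolicy "NoSMS"
```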

Wednesday, November 11, 2009

Implementing integrated OCS in Exchange 2010

UPDATED on 8/31/2010 for Exchange 2010 SP1 here!

This entry is to show you how to integrate OCS 2007 R2 into your Exchange 2010 OWA experience. This is based on the following Technet article:

First, download and extract the OCS 2007 R2 Web Trust Tool. Running and installing it will only extract these additional files. Each of these will need to be installed on each CAS server in your environment that you are enabling OCS messaging on. Remember, there is no right-click Run as Administrator for MSIs - so run from an elevated command prompt if needed!
  • Install the vc_redistx64
  • Install UCMAredist.msi
  • Install CWAOWASSP.msi

On your Exchange 2010 CAS server(s), edit C:\Program Files\Microsoft\Exchange\V14\ClientAccess\Owa\web.config and look for the IMPoolName field. Update the web.config file as follows:

Field                       Insert Value From      Example
IMPoolName                  FQDN of OCS R2 pool    ocsr2pool.domain.local
IMCertificateIssuer         DN of issuer           CN=DigiCert Global CA, OU=, O=DigiCert Inc, C=US
IMCertificateSerialNumber   Serial number          01 F9 4E 46 AA 3C 4C 9E BD 8F 2C (include spaces between octets!)
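To illustrate, the edited entries in web.config end up looking something like this (the values shown are the example values above - your pool FQDN, issuer DN, and serial number will differ):

```xml
<appSettings>
  <add key="IMPoolName" value="ocsr2pool.domain.local" />
  <add key="IMCertificateIssuer" value="CN=DigiCert Global CA, OU=, O=DigiCert Inc, C=US" />
  <add key="IMCertificateSerialNumber" value="01 F9 4E 46 AA 3C 4C 9E BD 8F 2C" />
</appSettings>
```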

Look for this:

And based on this (where thumbprint is the certificate your CAS server uses for IIS)
Get-ExchangeCertificate -Thumbprint BJBHDS78FG6D8GFYH49SDF34TH9 | ft Issuer, SerialNumber, subject

Change to this:

The "subject" gives us the common name that we use in a bit to configure OCS.

Additionally, if your Issuer contains special characters, you need to escape them, as they will break your web.config file and cause generic IIS errors. Simply removing those characters instead will produce application event log errors saying the certificate was not found in your certificate store.

Since the web.config is an XML file, you need to use XML character escapes:

&quot;   "   (double) quotation mark
&apos;   '   apostrophe
&lt;     <   less-than sign
&gt;     >   greater-than sign
So if your SSL provider's issuer field causes you a problem here, this should help you work around it.

In Powershell, configure OCS:
Get-OWAVirtualDirectory -server SERVER | set-owaVirtualDirectory -InstantMessagingType 1

(The above line *did* say -InstantMessagingType OCS, but RTM documentation says 1 for OCS - thanks to Brian Day for this!)

Restart IIS (IISreset is fine)

On your OCS R2 pool server, under the server properties of your pool, on the Host Authorization tab, you need to add the Client Access server. This can be FQDN or IP. If you use FQDN, OCS will additionally authenticate the FQDN against the certificate names - the FQDN here has to match the "subject" we found above (NOTE: not the whole string, just the FQDN common name given in the subject). Additionally, you can choose to use FQDN and then use a hosts file to ensure that OCS is communicating with the correct server/IP.

Now I am able to log into OWA 2010 and get the light CWA client as well:

Upper right allows me to see and update my presence, as well as see how many IM conversations I have active and switch between them as well.

Wednesday, October 28, 2009

Exchange 2010 - Recovery Scenario #2 - Recover from a DAG member loss

In this scenario, I have a three-server DAG, and I use Windows Server Backup to back up my Exchange 2010 active database. On the server with the active copy, I hit the virtual power button. The Exchange services fail over to another server in the DAG right away.

The Microsoft documentation:

Recover a DAG member Exchange Server

Remove the copy of the Database in a DAG:

This will warn that it cannot communicate with the server. That is expected.

Then, you can remove the server from the DAG:
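The shell versions of these two cleanup steps look roughly like this (the server, database, and DAG names are hypothetical):

```powershell
# Remove the database copy that lived on the dead server...
Remove-MailboxDatabaseCopy -Identity "DB1\DEADSERVER"

# ...then evict the dead server from the DAG. -ConfigurationOnly is
# needed because the server itself can no longer be contacted.
Remove-DatabaseAvailabilityGroupServer -Identity "DAG1" `
    -MailboxServer "DEADSERVER" -ConfigurationOnly
```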

Reinstall Windows 2008 R2 from DVD (remember, DAG requires Enterprise!)

Reset computer account in Domain (Right click Reset in AD Users and Computers)

Name and IP the server, confirm the date/time is correct (since in a DAG, I also needed to IP my DAG network)

Install Exchange Pre-Requisites

Install Exchange 2010 using:

setup /m:recoverserver

If you skipped the DAG removal steps above, setup will fail with:

Once setup succeeds, you need to reboot the server (at this point, I would also patch as needed - at the time of this writing 2008 R2 and Exchange 2010 RC have no additional patches)

Since I have a DAG, I am able to re-add the server to the database availability group and allow the database to reseed. If you were in a single-server environment, this is where your backup would come into play. This might be scenario #3.

Add the recovered server back to the DAG

Add a mailbox database copy to the recovered server.
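In the shell, those two steps are roughly (names hypothetical):

```powershell
# Rejoin the recovered server to the DAG...
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EXCH3"

# ...and add a passive copy of the database, which kicks off the reseed.
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "EXCH3"
```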

Assuming you have a real DAG, you might have 300 GB of data to reseed when adding a copy; this is where Windows Server Backup may play a part. You may be able to restore the Exchange data to an alternate location and, before you add the database copy, move the restored EDB to the folder path for the database. This would let you skip time-consuming reseeding, as long as the restored EDB was the most recent backup taken of the database.

Optionally, you can activate the database on the recovered server as well.

Tuesday, October 27, 2009

Exchange 2010 - Recovery Scenario #1 - Mailbox or items

In this post I wrote about how you can now backup Exchange 2007 SP2 and Exchange 2010 mailbox databases with Windows Server backup.

Since then, word of Exchange 2010's release has come, and with that, the questions of "when will my backup vendor provide updates that are compatible with Exchange 2010?" and "what can I do in the meantime?"

One of the easiest solutions is using Windows Server Backup, and then allowing your existing backup product to do file-level backups of that data. The question once this is in place, of course, is: how do I recover from that?

So I intend to cover three scenarios:

  1. Single item or mailbox recovery - an accidental delete, assuming you are past (or misconfigured) deleted item and deleted mailbox retention.
  2. Loss of a DAG member - How to recover from losing a single member of a DAG.
  3. Entire Server recovery - Building/site failure, need to return to service and restore data from backups. (this will be single server from backup)

Additionally, I am using a database that is in a DAG for this, but am writing it as if it was standalone, as the #2 and #3 scenarios would be addressed by the Database Availability Group.

So, on to scenario #1 - I have disabled my mailbox in the EMC, and run the below powershell to force the database clean:

Clean-MailboxDatabase geodb1

Now I see my mailbox under "Disconnected Mailbox." In a normal scenario, this is what Exchange 2003 and up has offered, where I could right-click my mailbox and choose to re-connect it to my user account:

Of course, I want to go to backups, so I reset my database to have a 0-day deleted mailbox retention, refreshed this screen, and my mailbox was no more!

Do note, the settings in the above screenshot are NOT recommended. Default is 30 days, and I recommend leaving it there or higher!

Next, we must recover the data "to an alternate location" using Windows Server Backup
Choose your Backup Date, then your recovery type should be "Applications"

Choose Exchange:

(I included the show details, which is the store GUID)

I chose here to recover to another location (Note: c:\RDB1 is NOT where my RDB's EDB/logs/anything are)

Do note that "this option will copy just the application data" - there are additional steps after this!

Finally, launch the recovery.
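For reference, the same application-level restore can be scripted with wbadmin (the version identifier below is a placeholder - run `wbadmin get versions` to find yours):

```powershell
# List available backup versions, then restore the Exchange application
# data to an alternate location rather than its original path.
wbadmin get versions
wbadmin start recovery -version:10/27/2009-21:00 -itemType:App `
    -items:Exchange -recoveryTarget:C:\RDB1
```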

Once completed, you will have the file structure of the database in the path specified:

Now that we have our data files, the recovery is similar to Exchange 2007 SCR's database portability.

Run ESEUTIL /R from the log file directory

Then we can run:
Eseutil /mh geodb1.edb

And determine the DB is healthy:

Next few steps edited on 11/3/2009 for missing content:

Now we can create our new Recovery mailbox database using:
new-MailboxDatabase -Recovery -Name rdb1 -Server exch2010 -EDBFilePath "c:\rdb1\rdb1.edb" -LogFolderPath "c:\rdb1\"

Then we need to allow restores:
Set-MailboxDatabase rdb1 -AllowFileRestore:$true

Copy the EDB file to the EDBFilePath of the RDB1 database, rename it appropriately, and then it should mount successfully. (NOTE: The logs didn't need to be copied since ESEUTIL /R already replayed them into the EDB; however, if you do copy them into place, Exchange will see they are replayed and move on.)

Once mounted, we can use

get-MailboxStatistics -database rdb1

to see that the data is there:

Now, in the Exchange documentation, it states that:

Restore-Mailbox -Identity chris -RecoveryDatabase rdb1

would recover the data into the mailbox. The problem is, we don't have a mailbox with that GUID any more. If I re-enable a new mailbox for chris, he will get a new mailbox GUID.

Enable-Mailbox chris
Restore-Mailbox chris -RecoveryDatabase rdb1

I get:

This makes sense - it cannot match GUIDs, so it stops. More on this in a second.

However, you are able to run a recovery operation (similar to Export-Mailbox in Exchange 2007)

Restore-Mailbox -RecoveryMailbox chris -Identity chris -RecoveryDatabase rdb1 -TargetFolder "Recovery"

And the results, all of the content in a subfolder named "Recovery"

I attempted a few other things to see if I could restore directly into the mailbox, but had no luck.

Important to note - if I were recovering for a user who missed their deleted item retention time, I could use Restore-Mailbox to filter by subject, dates, folders and more. Because I mail-disabled the user, I am not able to restore directly.

Friday, October 16, 2009

OCS Voice Ignite Training - Registered!


We have been trying for most of the year to get me a seat at one of these, and instead, I think we got two seats so I will get to tag along with one of our Cisco Voice guys as well (this should be helpful in backfilling voice knowledge for me!)

Pretty stoked to get there. Irving, TX in February!


Wednesday, October 14, 2009

Installing Exchange 2010 quickly using PowerShell

In Exchange 2007, I typically used ServerManagerCmd.exe to quickly deploy required Exchange 2007 parts. In Exchange 2010, when I ran ServerManagerCmd, I got the warning that:

Servermanagercmd.exe is deprecated, and is not guaranteed to be supported in future releases of Windows. We recommend that you use the Windows PowerShell cmdlets that are available for Server Manager.

This is replaced by Powershell commands:

When you run those, you will get an error; you need to run this first:
Import-Module ServerManager

So let's see how fast we can make this go! I am installing for ALL roles. If you need to split out roles, you should read the MS documentation at:

You can of course use the ServerManagerCmd -i command they give you, but knowing it's deprecated, and that I will be doing this 40 times next year, I wanted to know the new way. So here it is!

Install 2008 R2 off a CD/ISO
Set computer time, networking, machine name and domain
Install the AD tools using Powershell
Add-WindowsFeature RSAT-ADDS
Upon reboot, launch PowerShell as Administrator and copy paste the below (again, this is for MBX, HT, CAS on a single server, check the link above for more detailed pre-requisite planning)
Add-WindowsFeature Web-Metabase, Web-Lgcy-Mgmt-Console, Web-Server, Web-ISAPI-Ext, Web-Basic-Auth, Web-ASP, Web-Digest-Auth, Web-Windows-Auth, Web-Dyn-Compression, Web-Net-Ext, RPC-over-HTTP-proxy, AS-NET-Framework, NET-HTTP-Activation

Set the TCP .net sharing service to automatic startup
Set-service NetTcpPortSharing -startuptype automatic

Optional - download and install the x64 version of the Microsoft Office Filter Pack (this allows Office attachment content to be searched and indexed).

And from the Exchange 2010 install directory: .\setup.com /mode:install /roles:mb,ht,ca

Now, if you take this and extend it to using PowerShell's remote capabilities, you can prep a BUNCH of 2010 servers quickly!
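For example, here is a rough sketch of pushing the prerequisites to several servers at once (the server names are hypothetical, and PowerShell remoting must already be enabled on the targets):

```powershell
# Install the Exchange 2010 prerequisites on multiple servers in one shot.
Invoke-Command -ComputerName EX01, EX02, EX03 -ScriptBlock {
    Import-Module ServerManager
    Add-WindowsFeature Web-Metabase, Web-Lgcy-Mgmt-Console, Web-Server,
        Web-ISAPI-Ext, Web-Basic-Auth, Web-ASP, Web-Digest-Auth,
        Web-Windows-Auth, Web-Dyn-Compression, Web-Net-Ext,
        RPC-over-HTTP-proxy, AS-NET-Framework, NET-HTTP-Activation
    Set-Service NetTcpPortSharing -StartupType Automatic
}
```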

Tuesday, October 13, 2009

Exchange 2010 - What is an arbitration mailbox?

If you found this by searching, you most likely found out about arbitration mailboxes the way most people will: by discovering they accidentally deleted them, or by finding out that you need to move, disable, or remove them in order to delete a database, uninstall Exchange 2010, or remove the Mailbox role.

From TechNet:
"Arbitration mailboxes are used for managing approval workflow. For example, an arbitration mailbox is used for handling moderated recipients and distribution group membership approval."

This is part of the Moderated Transport features that are new in Exchange 2010.

A lot more information about using arbitration mailboxes can be found here: Understanding Moderated Transport

In short, arbitration mailboxes are where messages awaiting moderation are stored, as well as information about moderator decisions are kept.

Now, back to the two most common immediate needs around arbitration mailboxes.

I deleted my arbitration accounts from my AD
This isn't really all that bad. I did it the first time I installed Exchange 2010 and had a panic moment before I found the fixes. Put simply, you need to rerun the AD preparation steps from the 2010 media: setup /PrepareSchema, setup /PrepareAD, setup /PrepareDomain.

Only /PrepareAD is required to recreate these accounts, but I left the other steps in here as well just for documentation's sake.

I am trying to remove Exchange 2010, or a database, or the Mailbox role and am being told there are arbitration mailboxes preventing me from continuing
This is also not too bad. When you try to remove the first DB in Exchange 2010, there are a few arbitration mailboxes that will prevent database deletion. You have the choice of moving, removing, or mail-disabling these mailboxes. Since you cannot see these in the Exchange Management Console, you need to launch the Exchange Management Shell (EMS):

Get-Mailbox -Arbitration

This will list the arbitration mailboxes. To narrow it down to a specific database, you can edit this to:

Get-Mailbox -Arbitration -Database DB1

If you are used to PowerShell cmdlets in Exchange 2007, one big change to recall here is that specifying servername\databasename won't work anymore. This is one of the reasons why the database names need to be unique to the organization - so you don't have to specify servers anymore!

Once you have your "get" command returning the correct list of mailboxes, it's time to move, disable or remove them. Disabling the last arbitration mailbox is not allowed, so I recommend moving them as the first preference here.

Get-Mailbox -Arbitration -Database db1 | New-MoveRequest -TargetDatabase db2

Get-Mailbox -Arbitration -Database db1 | Disable-Mailbox -Arbitration

Get-Mailbox -Arbitration -Database db1 | Remove-Mailbox -Arbitration -RemoveLastArbitrationMailboxAllowed

If there is enough interest a little later, I may do a write up on using the arbitration mailboxes, but at this point there is still a lot of other Exchange 2010 things to learn and figure out!

Monday, October 12, 2009

Virtualized DC issue with time synchronization

This was a pretty simple mistake, but it took me a while to figure out. We were noticing everything on the domain was 10 minutes behind other devices, and in troubleshooting, I configured my PDC emulator to sync with an external source using NTP. It did, but seconds later the time would revert back.

Of course, the issue here was that the DC was virtualized, and Hyper-V time synchronization was taking precedence, syncing to the Hyper-V server's local time, which had fallen out of sync. The fix was either disabling time synchronization on my DC or enabling NTP pool synchronization on my Hyper-V server. I chose the latter, and moments later all machines had the correct time.
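For reference, configuring the Hyper-V host to sync from an external NTP pool looks roughly like this (run in an elevated prompt; the pool addresses are just examples):

```powershell
# Point the Windows Time service at external NTP servers and mark this
# machine as a reliable time source, then restart the service and resync.
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time
net start w32time
w32tm /resync
```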

Credit where credit is due:

Thursday, October 08, 2009

Exchange 2007 SCR How to - Part 2

This is a continuation from Part 1 where we configured the SCR replication.

Failing over to the target SCR server

In this example, I am not having an ACTUAL failure; I am choosing to dismount the DB on the source server. I typically also check "do not mount the database store on startup," just so that if I do get to stopping/starting any services later, I don't accidentally remount the DB that I am attempting to fail over. Once I successfully fail over, I delete the now "old" SG and database from the EMC, as well as the EDB file and transaction logs that were associated with them. I like to keep out-of-date data tidy like this. If you disagree, at the minimum you should move these files into a well-labeled folder so you know exactly what and when the files were from, so that 6 months later (or wherever your comfort level is) you can do housecleaning in an educated manner.

In order to ensure we have live data, I sent myself an email just before dismounting the database. You can see behind it the Get-StorageGroupCopyStatus output showing the status as Healthy, with the hard-coded 50-log replay lag.

First, we dismount the active database (in a real failure, something did this for you)

Dismount-Database OECU-EXCH1\ExecDB

Once the source database is dismounted, we can begin the SCR activation process. The first step here is to create a new SG and a new DB. DO NOT USE the folders/files/paths of your SCR data! For example, create a new SG named "RecoverySG" and a new DB named "RecoveryDB" and have their paths be new, unused folders and paths (this is the part I mentioned above that is not clear enough in the TechNet article). Run PowerShell as admin so it can do file operations!

New-StorageGroup -Name RecoverySG -LogFolderPath 'D:\Exchange Logs\RecoverySG' -SystemFolderPath 'D:\Exchange Logs\RecoverySG'

New-MailboxDatabase -name RecoveryDB -StorageGroup RecoverySG -EdbFilePath 'D:\Exchange Databases\RecoveryDB\REcoveryDB.edb'

Notice again - those are NEW and empty paths and files. If you attempt to use your SCR data at this point, you will have undesirable results! Now we run the restore command. This checks the status of the log shipping and will attempt to copy missing log files if needed. It also disables the original SCR and makes the database viable for mounting.

Restore-StorageGroupCopy "EXCHANGESOURCE\ExecSG" -standbymachine EXCHANGETARGET

Now is a good time for a quick reminder on what SCR is and how it does log shipping. SCR essentially just copies log files to the target, and when a backup occurs, the target will also replay those log files. So if we skip the next step, we risk bringing up a database that is essentially only as up to date as the last backup. If that backup was taken just before this test, it may not be a big deal, or may not be noticed (especially in a lab where you don't have live mail flow, etc.). So now we need to run ESEUTIL /R to replay the log files. This is done by running eseutil from the location of the log files, like so:

eseutil /r E00

The /r Exx argument is the log prefix for that database's logs (you can check by looking in the log folder directory for that storage group).

This should replay the logs and bring the database to a clean shut down state. You can confirm this by running eseutil /mh and specify the EDB file. The database state should be Clean Shutdown.

The below command updates the new RecoverySG's paths to match the paths of our SCR database. The -ConfigurationOnly flag tells it NOT to move the existing files, but to just change the configuration.

Move-StorageGroupPath "EXCHANGETARGET\RecoverySG" -SystemFolderPath "D:\Exchange Logs\ExecSG" -LogFolderPath "D:\Exchange Logs\ExecSG" -ConfigurationOnly

Now we need to do the same thing for the Database - point the "new" DB at our "recovered" data.

Move-DatabasePath "EXCHANGETARGET\RecoveryDB" -EdbFilePath "D:\Exchange Databases\ExecDB.edb" -ConfigurationOnly

Now we need to set the database to allow file restore. This is what will allow this database to be mounted.

Set-MailboxDatabase "EXCHANGETARGET\RecoveryDB" -AllowFileRestore:$true

If you skip the above step, when you attempt to mount the database, you will receive an error that appears to be permissions related.

Mount-Database "EXCHANGETARGET\RecoveryDB"

This is where most admins breathe a sigh of relief, but we aren't done - we need to move users to this DB. Well, not really move them. What this really does is update these users' AD objects so their Exchange server and homeMDB attributes point at their new location in the RecoveryDB.

Get-Mailbox -Database "EXCHANGESOURCE\ExecDB" | where {$_.ObjectClass -NotMatch '(SystemAttendantMailbox|ExOleDbSystemMailbox)'} | Move-Mailbox -ConfigurationOnly -TargetDatabase "EXCHANGETARGET\RecoveryDB"

The "where" clause in the middle of this prevents you from moving the system mailboxes that are unique to each DB. When you run this command - assuming your SCR target is in a different AD site - keep in mind you will need to sync AD before users in the main site start coming back online. You can trigger this in AD Sites and Services or various other ways, or just wait for replication to occur. At this point the users and database should all be online, and client access to their data should be restored. If any HT servers in your organization held mail for these users during the outage, it will deliver now. I recommend using OWA to test data, as you may also have client connectivity issues to troubleshoot with Outlook or Outlook Anywhere.

We can see the test email that I sent just prior to dismounting the database on the SCR source, so we know the logs replayed correctly. If your data is "out of date," then the thing that most likely did not work is your ESEUTIL /R log replay. You may be able to dismount the database and replay the logs again, but if you leave the DB mounted for a while and new messages flow in, your log sequence will likely be broken. If that happens, you will likely need to restore your DB from the previous night and then attempt to replay the SCR log files again.

Reseeding back to the original source server

Reseeding back is the exact same process, but with your source and target flip-flopped: you re-seed the "live" data that is on your DR server back to your main server. First, clean up the old DB/SG and the files/folders under it on your target server. Then, you can choose to rename or modify any of the DB or SG names or paths to your liking (this can be skipped if wanted, but needs to be done before you configure SCR). Then, you can repeat the creation of an SCR replica and reseed the data back. Once the data is seeded and healthy on the target, you repeat the failover process to "fail back." Once you fail back, clean up all the SG/DB paths and names once again on the DR server. Don't forget to recreate the SCR seed to your DR location afterward!

Tuesday, October 06, 2009

Exchange 2007 SCR How to - Part 1

All of this document is based on the database portability offered in Exchange 2007 SP1, known as Standby Continuous Replication, or SCR. Microsoft's article is here: and the item I felt was most overlooked in that document is this bit:
I will get into more details on this below.
Before getting started

Storage group and database paths must match on the source and target. So D:\Exchange Databases\ for the EDBFilePath must be valid on both servers. Because of this, I recommend creating the folder/path structure on the source and target as you go, and naming everything really smartly so you know what logs you are looking at in Explorer. I typically use the two cmdlets below to make sure I have the info I need:
Get-StorageGroup -server EXCHANGESOURCE | ft name,logfolderpath,systemfolderpath
Get-MailboxDatabase -server EXCHANGESOURCE | ft name,edbfilepath
The System Folder path is recommended to be in the same folder as the log file path, both for uniformity and to reduce the risk of overlooking it in this step - otherwise you are left typing out the default location of C:\Program Files\Microsoft\Exchange Server\Mailbox\Storage Group.
Also, only one database per storage group is supported for SCR log shipping to work and replay logs on the SCR target.
I recommend putting each .edb file into a separate sub folder as well because you later (in eseutil) need to specify the database directory (not path to EDB) to replay logs, and it’s a little intimidating if you have 4-5 edb files in the same directory.
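A hypothetical layout that satisfies the path-matching rule might look like the following - create the identical structure on both the source and the target:

```powershell
# One sub-folder per database, so eseutil's database directory is unambiguous:
#   D:\Exchange Databases\ExecSG\ExecDB\ExecDB.edb
#   D:\Exchange Logs\ExecSG\           (logs + system/checkpoint files together)
New-Item -ItemType Directory -Path "D:\Exchange Databases\ExecSG\ExecDB"
New-Item -ItemType Directory -Path "D:\Exchange Logs\ExecSG"
```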
Seeding the databases
Enable-StorageGroupCopy -StandbyMachine EXCHANGETARGET -Identity "EXCHANGESOURCE\ExecSG" -ReplayLagTime 0.1:0:00 -TruncationLagTime 0.2:0:0

Those day/time values are in day.hour:minutes:seconds format - whether you write one or two digits for the day, hour, or minute values does not matter.

ReplayLagTime is the time the target server will wait before replaying a log file into the EDB. Above, it is set to 1 hour. If not specified, the default is 24 hours. While that may work, it can mean replaying a lot of logs, so a lower setting is preferred. There is also a hard-coded lag of 50 log files, which means you will always see a ReplayQueueLength of at least 50 when running Get-StorageGroupCopyStatus.
TruncationLagTime is the time the SCR target will delay deleting a replayed log. This is helpful if you ever have to restore from backup: the replayed log files from an SCR source can shorten the gap between the backup and the moment of failure. The Microsoft default for this is 0, however.
You may receive this warning:
WARNING: ExecSG copy is enabled but seeding is needed for the copy. The storage group copy is temporarily suspended. Please resume the copy after seeding the database.
Get-StorageGroupCopyStatus -StandbyMachine EXCHANGETARGET
Will show the SCR replication status, including copy queue length and a suspended status because the DB is not there yet. If not suspended, suspend with:
Suspend-StorageGroupCopy -Identity "EXCHANGESOURCE\ExecSG" -StandbyMachine EXCHANGETARGET
Now, on EXCHANGETARGET, we can seed the database.
Run EMS as administrator, or these will error with "Access to the path (edbfilelocation)\temp-seeding is denied"
Update-StorageGroupCopy -Identity "EXCHANGESOURCE\Executive Staff Storage Group" -StandbyMachine EXCHANGETARGET
This will then seed the data:

If you receive the following error:
Database Seeding Error: Error returned from an ESE function call (0xc7ff1004), error code (0x0).
You need to enable and allow Windows Powershell as a program in Windows firewall.
Once the seeding is completed, the suspend operation should automatically resume. If it does not, you can manually do this with:
Resume-StorageGroupCopy -Identity "EXCHANGESOURCE\ExecSG" -StandbyMachine EXCHANGETARGET
Confirming that the Database seed is healthy
From the SCR source:
Get-StorageGroupCopyStatus -StandbyMachine EXCHANGETARGET
From the SCR target:
Get-StorageGroupCopyStatus -server EXCHANGESOURCE -StandbyMachine EXCHANGETARGET
This outputs something like:
Name    SummaryCopyStatus  CopyQueueLength  ReplayQueueLength  LastInspectedLogTime
ExecSG  Healthy            0                1187               10/6/2009
Obviously, "Healthy" is what you want to see here. If there are NotConfigured, they are either not configured, OR you left the -standbymachine off! If you have errors, check your application event logs, ensure the folder structure is correct and read the next step below.
CopyQueueLength is the number of transaction logs waiting to be shipped. If this number is commonly growing, your WAN connection may not have sufficient bandwidth.
ReplayQueueLength is the number of logs in the SCR target's log directory waiting to be replayed. This number will increase continually until a full backup is taken on the SCR source, at which point the SCR target "replays" these logs and commits them to the EDB on the target server. It is important to know there is a hard coded lag of 50 log files that cannot be changed.
LastInspectedLogTime shows the date and time of the last log inspected on the SCR target. The time usually shows as "…" in the default PowerShell table output, so run something like:
Get-StorageGroupCopyStatus -StandbyMachine EXCHANGETARGET | ft name, LastInspectedLogTime
Additionally, from the SCR target, you can run Test-ReplicationHealth to troubleshoot any issues with SCR. This cmdlet does not work from the source server; it errors that LCR (local continuous replication) is not configured. It also accepts a -Verbose switch, which displays a lot more detail.
Continue reading Part 2 which includes failover and failback as well.

Thursday, September 17, 2009

Creating an RPC directory on an additional IIS7 Web site for Exchange 2007 Outlook Anywhere

Usually when I get stumped and then find the fix, I blog about it. It's a really good way to capture information that I myself was unable to find (and you would be surprised how often I personally refer back to my own blog entries).

Today, I had a pretty interesting challenge. A customer had an internal FQDN that they did not own on the Internet, and could not get an SSL cert issued for their AD FQDN name on their SAN certificate. Now, I have not run into this issue in a while - the last time was on a Windows 2003 (IIS6) server - this time was a Windows 2008 (IIS7) configuration.

The actual process of splitting the certificates is pretty well known and well documented. We used the "Default Web Site" for internal CAS and secured it with an Enterprise CA-signed certificate. I created an "External OWA" site for external CAS and assigned the DigiCert SAN cert to that site. I should also note that the External OWA site listened on a different LAN IP that was ONLY used for the NAT entry, with only TCP/443 (HTTPS) allowed to it.

Using Powershell, I was able to run (not exact commands, some of these will prompt you for additional required attributes)
  • New-OWAVirtualDirectory -WebSiteName "External OWA"
  • New-ActivesyncVirtualDirectory -WebSiteName "External OWA"
  • New-OABVirtualDirectory -WebSiteName "External OWA"
  • New-WebServicesVirtualDirectory -WebSiteName "External OWA"

This got me 95% of the way to a working CAS server. The only issue: I was missing the /rpc and /rpcwithcert virtual directories that Outlook Anywhere (RPC over HTTPS) relies upon. There is no PowerShell cmdlet for these, as they are not really an Exchange component.

Now, the last time I had to split internal/external was on Windows 2003, where you could back up the virtual directory in question to an XML file and then import it on the other site. This is no longer an option in IIS7.

I also admit, I do not know IIS7 very well, and I attempted to manually recreate the directories by investigating settings and mimicking them. I got pretty close, but the Exchange Remote Connectivity Analyzer was still reporting issues.

Google and Bing really didn't turn up much (rpc, iis7, windows 2008, and exchange 2007 are all pretty common search terms).

I eventually found this blog entry by Saurabh Singh about RPC over HTTP, written in the context of a TS Gateway issue he ran into.

And awesomely - it WORKED.

So, opening up the applicationhost.config file, I was able to build the virtual directories and all their settings identically on a second, non-default web site. I ran an IISreset, re-enabled Outlook Anywhere, and everything worked!
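For illustration, the relevant piece of applicationhost.config ends up looking something like this - the site name, site ID, and application pool are examples from my lab rather than a recipe, though the RpcProxy physical path is the standard one:

```xml
<site name="External OWA" id="2">
  <!-- bindings, OWA/EAS/OAB/EWS applications, etc. omitted -->
  <application path="/Rpc" applicationPool="DefaultAppPool">
    <virtualDirectory path="/" physicalPath="%windir%\System32\RpcProxy" />
  </application>
  <application path="/RpcWithCert" applicationPool="DefaultAppPool">
    <virtualDirectory path="/" physicalPath="%windir%\System32\RpcProxy" />
  </application>
</site>
```

Copy the authentication and SSL settings from the working /rpc and /rpcwithcert entries under the Default Web Site rather than inventing your own.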

Tuesday, August 25, 2009

Best Practices for Active Directory Schema changes

Part of my job is to extend AD Schemas to support new versions for products like Exchange and OCS, and this is part of what I do prior to Schema changes for customers as well as internally.

First off, a quick review of the AD schema and the function it performs. The schema defines the structure of the "database" that AD resides in, so when we say things like "extending the schema" we mean the same thing any SQL DBA would mean - we are adding additional object classes and attributes to AD. These additions give new product features a place to store their settings in Active Directory.

Some of the recent Schema extensions you will see:

  • Exchange 2007 SP2 requires schema extension.
  • Exchange 2010 requires schema extension.
  • OCS 2007 R1 or R2 require schema extension.

Additionally, while not an extension, these best practices also apply before raising your forest or domain functional levels.

Step One - Determine your Schema Master FSMO role holder

  1. On any domain controller, click Start, click Run, type Ntdsutil in the Open box, and then click OK.
  2. Type roles, and then press ENTER.
  3. Type connections, and then press ENTER.
  4. Type connect to server <servername>, where <servername> is the name of the server you want to use, and then press ENTER.
  5. Type q to return to the fsmo maintenance prompt.
  6. At the fsmo maintenance: prompt, type select operation target, and then press ENTER.
  7. At the select operation target: prompt, type list roles for connected server, and then press ENTER.
  8. This will display all 5 FSMO roles. The server holding the Schema role is the one we need to back up.
  9. Type q 3 times to exit the Ntdsutil prompt.
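As a quicker cross-check - assuming the tool is available on your server, as it is by default on Windows 2008 domain controllers - netdom can list all five role holders in one shot:

```powershell
netdom query fsmo
```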

Step Two - Ensure you have your DSRM password

  1. Even if the DSRM password is known, most of the time it has not been changed in a long while and is due for a reset.
  2. Follow instructions to reset DSRM password from KB322672
  3. This allows your backup to be restored authoritatively if you ever need to. If this password is not correct, your backup may not be usable.

Step Three - Take a system state backup (or two)

  1. I recommend taking a backup with ntbackup.exe (Windows 2003) or Windows Server Backup (Windows 2008) if you are more comfortable with Microsoft restore procedures.
  2. I recommend taking another backup using whatever third party vendor product you typically use, if you are more comfortable with their restore procedures.
  3. I usually recommend taking BOTH of the above for the Schema Master FSMO role holder.
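On Windows 2008, a one-off system state backup of the Schema Master can be taken from an elevated prompt like this - the target drive letter is just an example:

```powershell
# Requires the Windows Server Backup command-line tools feature to be installed
wbadmin start systemstatebackup -backupTarget:E:
```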

While I have YET to run into any issues or problems with Schema extensions, if I ever did, I know I want a really good backup or two!

Backing up Exchange 2010 and Exchange 2007 SP2 using Windows Server Backup

2/17/2010 Update

Hi there - this was written in August 2009, 3 months before Exchange 2010 released, and these screenshots are Windows 2008, not Windows 2008 R2. We are now almost 3 months PAST RTM of Exchange 2010, and other backup solution options are starting to appear. I am trying to keep a running list of these here:

Now, on to the main article:

In both Exchange 2007 SP2 on Windows 2008 and Exchange 2010, Microsoft has enabled Windows Server Backup to allow VSS backups of the Exchange database. I hope to shed some light on how to configure these backups, for both one off backups as well as scheduled daily backups.

First off - how to take a ONE off backup.

Launch Windows Server Backup on your mailbox role, and click the "Backup Once…" action.

Since no schedule exists at this point, "Different Options" is the only choice.

Only full server or custom is a choice. I am OK with Full server, but I will go with Custom for this.

Now we can see that there is not much granularity to the selections. Since this is VSS based, it has to be by disk. You can also see there is no "Information Store" to choose. Select ALL volumes that host Exchange data (this is why I am OK with Full Server above).

For location - if I choose local drive, only the DVD is an option. You cannot back up a drive to itself. (This is the one downside of VSS in my opinion; it was nice to be able to exclude e:\backups and save the backups there.)

Specify location:

If you specify a location already used, you will receive this message.

This is nice because unlike old scheduled ntbackup.exe BKF files, we won't have an ever-growing backup set that is not being watched.

Something to note here - when you run this as a scheduled job, the target needs to be a local disk; a network share will not suffice. I have used an iSCSI SAN device for this.

This is an IMPORTANT step. If you choose "Copy," your transaction logs won't be flushed, and your databases will not register as backed up.
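The same choice exists on the command line: wbadmin takes a VSS copy backup unless you pass -vssFull. Drive letters below are examples:

```powershell
# -vssFull flushes the Exchange transaction logs and marks the DBs as backed up;
# omit it and you get a copy backup that does neither.
wbadmin start backup -backupTarget:E: -include:C:,D: -vssFull -quiet
```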

Confirm the settings (screen not pictured), then click Backup, and you can watch the progress of the VSS backup. Here the shadow copy is produced.

Exchange 2010 consistency check being run:

This backup process flushed the transaction logs in Exchange 2010 and marked the databases housed there as backed up. The backup set is larger than my actual Exchange data, since I am backing up all the binaries on C: every time.

This is a GREAT tool and I am very glad Microsoft listened to the need for an included backup utility.