Friday, January 30, 2009

Implementing a Windows 2008 R2 domain controller

Since I already have a home 2008 domain with some production work in it, I opted to install a new forest and domain for the beta code, so that if I did break anything or needed to rip the R2 beta out, it wouldn't affect other services. Once this OS releases, I will definitely be migrating to it at home and will review the 2008 to 2008 R2 upgrade process in more detail.

I ran Dcpromo from Start-Run; it churned for a short while, but that might be because I allocated hardly any RAM to this box :)

Advanced Mode… I like the sound of that!

And we get this glorious warning about security.

After reading up, there is a workaround if you do experience any of the symptoms of this. So I will press on:

I went with a simple and easy to remember name, that I likely will never use again on this blog, unless I do additional 2008 R2 features on this domain.

The wizard then checks DNS and NetBIOS for existing names that would conflict, then prompts you for the NetBIOS name (yes, still NetBIOS, but a fair number of things still rely on it!)

Now, the Forest Functional Level (FFL)

I of course chose the R2!

Time to delve into the reviewer's guide for what features are unlocked with the 2008 R2 Forest Functional Level. It contains everything 2008 did, plus the AD Recycle Bin, which, when enabled, lets you restore objects without stopping AD and doing a directory restore.

For more on the AD Recycle bin, check out:

Of course, choosing the R2 Forest functional level means I would also be doing the 2008 R2 domain functional level. This includes all previous DFL features, plus Authentication Mechanism Assurance. You can read more about this feature here:

Once past the FFL screen, we are asked which other DC options to install. Being the first DC in a new forest, I cannot choose much here. I do wonder why I am even given the option to not install DNS.

I had set a static IPv4 address, but left IPv6 using dynamic addressing, so I got a warning:

I chose to ignore this for now and clicked Yes.

Accepted the default storage locations, and then set my DSRM password:

I was *really* hoping that "advanced" would let me choose a different site name than "Default-First-Site-Name" this time around. Oh well. I think if it were an option here, far fewer installations would be stuck with the default.

A quick review of my settings, and we are OFF:

I really like the reboot-on-completion checkbox, so I checked it. This is really nice to have, especially if you were bringing up new DCs en masse for a larger existing domain and didn't want to have to keep checking on it.

Once rebooted, I see a new administrative console, the "Active Directory Administrative Center"

My stars, this looks very different!

Oh wait, there's what looks more familiar..

Next post will cover USING the recycle bin!

Saturday, January 24, 2009

OCS 2007 R2 Deployment, Part 3

Here, we have already prepared Active Directory and installed OCS 2007 R2, and we are now ready to configure the server and certificate.

I removed the default here and used my "external" domain name. And yes, my home domain name really is that generic.

I highly recommend automatic logon:

Choose which domains you will support automatic logon for. I chose both, even though I only plan on one being the main one.

I skipped this for now; I will revisit it if and when I do an Edge server (I plan/hope to, but unless I get a new network connection and firewall, it might not be fully functional).

Review my settings, and click next!

And it's done! I can choose to review the log, but will skip that for brevity.

Now, an important note… this is the internal pool, so an internally signed SSL cert is fine.

However, the variety of options provided is a NICE change from R1.

Now here is the important part. The name here is the "friendly name", but I still like to use a valid DN here. The MS default is OCS2007R2 (the machine name).

I skipped my org name and department screens, but here is where MS got smart and made the auto-fill work a lot better:

Now, keep in mind, OCS 2007 R2 is going to want split-brain DNS. Internally, the SIP domain and pool names need to point to the pool server. If and when I move to having an edge server, I will need those same names to point externally to the edge interface, and I will need the SSL cert there to carry the same names. Keep in mind that internally, additional SAN names are free; externally, you want to limit them if possible. If I want to do federation, the external cert will need to be a third-party trusted certificate. For just remote user access, you can get away with an internally signed cert as long as the remote machine and user are both domain members, so the internally signed cert is trusted. If you expect home machines and Office Communicator Mobile R2, you want a third-party certificate.

Yes, my two person network has a Root CA.

Final review before committing:

I chose to assign the cert immediately, why make you wait for another blog post?

Viewing the cert:

And we are DONE!

Tune in next time as we configure the Web Components Server Cert and Verify our config both with the wizard and with some person to person OCS communications!

Friday, January 23, 2009

Setting up TS Gateway

TS Gateway on Windows 2008 is a solution that allows you to connect to resources on a remote terminal server without using a VPN connection. It connects a client to the remote resource over port 443 and can be used in conjunction with TS Web Access or TS RemoteApp. Traffic is encrypted using TLS 1.0. There are three ways to deploy TS Gateway: for use with Network Access Protection (NAP), with ISA Server, or by itself. I will address the NAP scenario here.

Step 1: Install the TS Gateway role service. From server manager, click "add roles" and add the terminal services role. On the "select role services" screen, select TS Gateway. Allow Server Manager to install the additional required role services as well (RPC over HTTP, IIS 7, NPS).

Step 2: Configuring Certificates in TS Gateway. Once you have added the appropriate role services, you will need to obtain a certificate for use with TS Gateway. The certificate can be self-signed, or you can use certutil to create a certificate request for a third-party certification authority. If you choose a third-party certificate, you'll want to make sure the vendor participates in the Microsoft Root Certificate Program so that the certificate is automatically trusted by clients.

With the self-signed certificate, each client computer connecting to the terminal server will need to add the certificate to the trusted root certification authorities store for their user account, either manually or through group policy.

The common name of the certificate should match the external DNS name of the TS Gateway server.
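If you go the certificate-request route, the request can be described in an INF file. Here is a minimal sketch only; `tsg.example.com` is a placeholder for your external DNS name, and the exact fields may vary with your CA:

```
; request.inf - minimal certificate request sketch for TS Gateway
; The CN must match the external DNS name clients will use
[NewRequest]
Subject = "CN=tsg.example.com"
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE           ; store the key under the computer account
RequestType = PKCS10

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1        ; Server Authentication
```

You would then generate the request with something like "certreq -new request.inf request.req" and submit the .req file to your certification authority.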

Once you have your certificate, install it in the personal store for the computer account on the TS Gateway server. Now open TS Gateway Manager from Administrative Tools, right-click the server name in the right-hand pane, and go to Properties. On the SSL Certificate tab, select an existing certificate and point it to the location of your new cert.

Step 3: TS-RAP and TS-CAP policies. Before clients can connect using TS Gateway, you must set up two policies: Terminal Services Connection Authorization Policies (TS-CAPs) define who is allowed to connect to a TS Gateway server. You can specify either local or Active Directory user groups who are allowed (or denied) access to terminal services, and decide which devices can be redirected when connecting to TS Gateway. You can also specify what authentication method you want the client to use – password or smartcard.

Terminal Services Resource Authorization Policies (TS-RAPs) identify which network resources users can connect to using the TS Gateway server. You can create TS-Gateway managed computer groups, or use Active Directory defined user groups to create a TS-RAP policy.

You will be prompted to create at least one TS-RAP and TS-CAP policy as part of the initial TS Gateway configuration. Creating TS-RAP and TS-CAP policies:

  1. Enter a name for the TS-CAP policy

  2. Choose the authentication method you want clients to use to connect, then add allowed user groups or even computer groups that are allowed to connect to the server.

  3. Choose which devices are allowed to redirect from the client:

  4. Review the summary and click Finish (here I am creating both a TS-CAP and a TS-RAP, so I don't have a finish option yet.)

    Creating a TS-RAP:

  5. Enter a name for the TS-RAP policy:

  6. Choose which user groups the TS-RAP will apply to:

  7. Here you can specify which resources clients are allowed to connect to using either active directory security groups (computer objects), or TS-Gateway managed computer groups.

  8. Choose which port RDP should run on. I will leave the default (remember, TS Gateway operates over port 443 on the internet. 3389 only needs to be open internally)

  9. Review the configuration and click Finish.

****The policies work just like the old IAS policies in 2003 – order matters!****

Step 4: Configuring the Client for Connection to TS Gateway

First, you must ensure that you have purchased a trusted third-party root certificate, or that you have installed the self-signed certificate either manually or through group policy into the Trusted Root Certification Authority store for the client's user account.

Also, the client must be running at least Windows XP SP3 or Windows Vista – make sure you have at least RDP 6.1.

Now, the RDP client should be able to automatically detect the TS Gateway settings, but for me, it takes longer to connect every time the RDP client has to search for the settings, so I would rather specify them manually on the "Advanced" tab of the Remote Desktop Client:

  10. Open the client, expand options, and go to the Advanced tab. Click "Settings" under the Connect From Anywhere section

  11. For server name, enter the same name used on the common name of the TS Gateway certificate (also the DNS name of the TS Gateway server):

  12. Select the computer you want to connect to, and off you go!
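The same gateway settings can also be pinned in a saved .rdp file so you don't have to click through the dialog each time. A sketch only; the host names below are placeholders for your own gateway and target machine:

```
full address:s:internalhost.example.local
gatewayhostname:s:tsg.example.com
gatewayusagemethod:i:1
gatewaycredentialssource:i:0
gatewayprofileusagemethod:i:1
```

Here gatewayusagemethod 1 means "always use the gateway", credentials source 0 means password authentication, and profile usage 1 tells the client to use these explicit settings rather than auto-detecting.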

Tuesday, January 20, 2009

Exchange Named properties limits and risk mitigation

Huge thanks to my co-worker, Matt Bastek (I'll link an email or URL when he gets me one) for a lot of the legwork on this one in hunting it down. I merely played the role of reporter here.

Lately, we have heard a few customers reporting Event ID 9667 in their Application event log, stating a failure to create a named property, and reporting that some users were no longer able to receive email. The NDR sent to another internal user was:

#550 5.2.0 STOREDRV.Deliver: The Microsoft Exchange Information Store service reported an error. The following information should help identify the cause of this error: "MapiExceptionNamedPropsQuotaExceeded:16.18969:3D010000, 17.27161:00000000AC000000000000000000000000000000, 255.23226:00000000, 255.27962:7A000000, 255.27962:56000000, 255.17082:00090480, 0.16993:80030400, 4.21921:00090480, 255.27962:FA000000, 255.1494:00000000, 255.26426:56000000, 4.6363:0F010480, 2.31229:00000000, 4.6363:0F010480, 2.22787:00000000, 2.22787:00000000, 2.22787:00000000, 4.5415:00090480, 4.7867:00090480, 4.4475:00090480, 4.4603:00090480, 4.5323:00090480, 255.1750:00090480, 0.26849:00090480, 255.21817:00090480, 0.24529:00090480, 4.18385:00090480". ##

We investigated and found that there is a named-properties table that gets written to with additional headers from any internet message. You can add these headers yourself, but often it is third-party applications that add them while scanning, routing, or stamping emails.

Here is the Microsoft article we found on the issue:

Typically these are harmless and do not pose a risk; many applications rely on them.

The issue we found with mail delivery stopping was due to the named-properties table hitting its maximum limit as defined in the registry. This is 8192 rows by default, but is user configurable via the DWORD value "NonMAPI Named Props Quota" at HKLM\System\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\STORENAME-GUID\. Based on our findings, this has a hard limit of 32767.

From here, we learn the default maximum header size is 64kb.

64kb works out to ABOUT 2300 unique named properties. Knowing this, I sent a 61k message, hoping to fly under the radar. The email never arrived and sat in the queue. I downed the VM and bumped its RAM from 1024 MB to 2500 MB at this point.
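Putting those numbers together, here is a back-of-the-envelope sketch in Python. The per-header byte figure is just the average implied by the 64kb / ~2300 numbers above, and the "10 new headers per message" rate is an arbitrary example, not a measured value:

```python
# Rough arithmetic on the named-properties quota, using the figures above.
# Assumption: the ~28 bytes per unique header name is simply 64kb / ~2300;
# real header names vary in length.

DEFAULT_QUOTA = 8192        # default "NonMAPI Named Props Quota" rows
HARD_LIMIT = 32767          # registry hard limit
MAX_HEADER_BYTES = 64 * 1024

avg_header_bytes = MAX_HEADER_BYTES // 2300   # ~28 bytes per header name

def messages_to_exhaust(quota, new_headers_per_message):
    """Messages needed if every header in every message is brand new."""
    return -(-quota // new_headers_per_message)  # ceiling division

# e.g. a scanner that stamps 10 previously unseen headers per message:
print(avg_header_bytes)
print(messages_to_exhaust(DEFAULT_QUOTA, 10))
print(messages_to_exhaust(HARD_LIMIT, 10))
```

In practice a scanner reuses the same header names, so only the first message costs any rows; the worst case above requires every header name to be unique, which is exactly what my test emails below were built to do.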

In order to monitor the named properties rows, you need to enable the perfmon dll for Exchange.

Then you can view/monitor your row count:

The below screen was the result of a single email on a test domain with 10 new headers.

I sent approximately 10k of random x-headers for this jump from 1200 to 1602

Here is the email - looks harmless to ANY user:

Unless you look at the headers:

So I sent a few emails to get the job done.
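My test messages were nothing fancy. Here is a sketch of how you could build one; the addresses, header prefix, and count are all arbitrary placeholders, and the key point is that each previously unseen header name costs the receiving store one named-property row:

```python
import uuid
from email.message import EmailMessage

def build_test_message(header_count=100):
    """Build a harmless-looking message carrying many unique X-headers.

    Each header name the receiving Exchange store has never seen before
    consumes one row in its named-properties table.
    """
    msg = EmailMessage()
    msg["From"] = "sender@example.test"      # placeholder addresses
    msg["To"] = "victim@example.test"
    msg["Subject"] = "Totally normal email"
    msg.set_content("Looks harmless to ANY user.")
    for _ in range(header_count):
        # uuid4 hex gives header names the store has never seen
        msg[f"X-Test-{uuid.uuid4().hex}"] = "1"
    return msg

msg = build_test_message(10)
# Sending is the same as for any other message, e.g. via smtplib:
# import smtplib
# with smtplib.SMTP("smtp.example.test") as s:
#     s.send_message(msg)
```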

Now, it was about here I noticed I didn't get all my emails! So I checked the queue viewer and found that mail was queuing up.

OK, great, maybe there was another layer of protection here. After this I started trying smaller snippets of headers: smaller gains spread across more emails, but I still only sent a handful.

Now, this is a VM with 2GB of RAM, so a memory issue isn't shocking. After waiting and submitting a few smaller emails as well, the messages started flowing and they all went through:

And finally, I sent the final blow. All told, I only sent 25 emails for a total of around 120kb of data.

And what happened?

Not much, unfortunately.

In the event log, a warning, then a failure:

After that, I tried sending:

OWA to OWA internally - worked.
SMTP in without additional headers - worked.
SMTP in with additional headers - worked. Errors as shown above, but no non delivery of messages.

No service is really denied. All aspects of email continue to work, but your event log begins to flood with those 9667 event IDs. These events are apparently pretty harmless.

Possible Risks:

  • If your named-properties table were full and you tried to add a new SMTP-based service relying on new X-headers, it might not function correctly.
  • If you had existing applications relying on named properties (antivirus, antispam, SharePoint, OCS, etc.), these might have issues functioning correctly.
  • Your application event log could fill with the warnings and subsequent errors.
  • Eventually, you could begin to have email denied to recipients in that database. While I was not able to reproduce this (this is a zero-load Exchange server), the entire project here started because a customer running Exchange 2007 SP1 RU4 complained about this.

According to Microsoft, there are two fixes.

  1. Increase the registry entry at HKLM\System\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\STORENAME-GUID\ (effective until you hit the hard limit of 32767)
  2. Create a new DB and move mailboxes to the new store. A time-costly option for little gain other than quiet event logs.
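For option 1, the change can be captured in a .reg file. A sketch only: SERVERNAME and STORENAME-GUID are the same placeholders used in the path above and must be replaced with your own store's values, and 0x4000 (16384, double the default) is just an example value below the 32767 hard limit:

```
Windows Registry Editor Version 5.00

; Replace SERVERNAME and STORENAME-GUID with your own store's values.
; 16384 (0x4000) doubles the default 8192; the hard ceiling is 32767.
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\STORENAME-GUID]
"NonMAPI Named Props Quota"=dword:00004000
```

As I understand it, the Information Store typically needs to be restarted for the new quota to take effect.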

Details on both of these are here:

Overall, I am not sure if this is really a risk, and if it is, I would love to know how.

Please, let me know.