Wednesday, November 27, 2013

Remove active filter drivers that could interfere with CSV operations

One of my cluster nodes previously had DPM installed on it. As a result, any Cluster Shared Volume (CSV) would go into redirected mode whenever it was moved to this node. In the cluster warning, I saw:
Cluster Shared Volume 'Volume1' ('CSV Volume name') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes. 
Active filter drivers found: SIS (HSM) 
I searched TechNet for the DPM 2012 article on how to remove its SIS filter. Unfortunately, that page hasn't been updated for WS2012, which no longer ships ocsetup.exe. Its replacement is DISM /Online. To remove the SIS filter:
Dism /online /Disable-Feature /FeatureName:SIS-Limited
Reboot the system after removal, and the CSV will no longer be redirected.
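To confirm which filter drivers are still attached before and after the removal, the built-in fltmc utility can list them from an elevated prompt; a quick check (SIS should no longer appear after the reboot):

```shell
:: List all registered minifilter drivers on this node.
fltmc filters

:: Show which volumes each filter instance is attached to.
fltmc instances
```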

Tuesday, November 26, 2013

Moving Roaming User Profiles files to another Server using WSMT

I have tried using "robocopy" and even Windows Backup to transfer roaming user profiles (RUP). There were always files that I couldn't move while preserving file permissions and ownership. In WS2012, there is a built-in Windows Server Migration Tools (WSMT) feature to facilitate the file transfer (including server roles).

Here are the steps in summary (click this link for full Technet guide):

Step 1: Install WSMT
Install the WSMT feature using Server Manager on the target WS2012 server.
Step 2: Register and deploy WSMT
Start an elevated command prompt from the WSMT tool. Create a deployment folder that is accessible to the source server. Depending on the OS of the source server, an example command for WS2012 (WSMT installs to "C:\Windows\System32\ServerMigrationTools\" by default) would be:
SmigDeploy.exe /package /architecture amd64 /os WS12 /path [deployment folder path]
Copy the deployment folder to the source computer, then register and deploy it there by running .\Smigdeploy.exe from that folder.
Step 3: Move local users and groups
To preserve local permissions, you may need to move the local users and groups first. To export from the source computer:
Export-SmigServerSetting -User All -Group -Path [storepath\UsersGroups] -Verbose
To import into the target server:
Import-SmigServerSetting -User All -Group -Path [storepath\UsersGroups] -Verbose
Step 4: Start file moving
Before moving, permit both UDP 7000 and TCP 7000 on the Windows Firewall. On the target computer, start the receiver:
Receive-SmigServerData 
To begin file transferring, run the following WSMT cmdlet on the source computer 
Send-SmigServerData -ComputerName [DestinationServer] -SourcePath d:\users -DestinationPath d:\shares\users -Recurse -Include All -Force 
Watch for any errors in the progress updates on the command prompt.
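The firewall openings in Step 4 can be scripted rather than clicked through; a minimal sketch using the NetSecurity cmdlets available on WS2012 (the rule display names are my own):

```shell
# Allow WSMT data transfer on TCP and UDP port 7000 (run on both servers).
New-NetFirewallRule -DisplayName "WSMT TCP 7000" -Direction Inbound -Protocol TCP -LocalPort 7000 -Action Allow
New-NetFirewallRule -DisplayName "WSMT UDP 7000" -Direction Inbound -Protocol UDP -LocalPort 7000 -Action Allow
```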

Tuesday, November 12, 2013

802.1x with MAC based authentication

For end devices that are 802.1x compliant, RADIUS authentication on them would be performed using either username/password or certificate. What about devices that aren't 802.1x compliant, such as network printers? The next best authentication on them would be MAC based.

MAC-based authentication isn't as secure, as MAC addresses can be easily spoofed. Cisco calls this "MAC Authentication Bypass" (MAB), while Microsoft calls it "MAC Address Authorization".

How can we make Cisco MAB work with a Microsoft NPS server?

Step 1: Enable "mab" on every switch port
On Cisco switches, assuming that the usual dot1x configuration is already in place, you'll just need to add the command "mab" on every 802.1x-enabled switch port connecting to end devices.

Step 2: Add new MAC-based connection request policy
On the Microsoft NPS server, add a new connection request policy and enable PAP authentication. This new PAP policy should be placed after the main 802.1x policy, so that 802.1x-compliant devices are authenticated in the more secure way first. Since Cisco switches uniquely identify MAB requests by setting Attribute 6 (Service-Type) to 10 (Call-Check) in the MAB Access-Request message, add this condition to the MAC connection request policy.

Step 3: Tell the authenticating server to use Calling-Station-ID as MAC-based user name
Set the User Identity Attribute registry value to 31 on the NPS server. This registry value location is: HKLM\SYSTEM\CurrentControlSet\Services\RemoteAccess\Policy. If it doesn't exist, create a new DWORD value.
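The registry change above can be applied from an elevated prompt; a sketch using the value name and location described:

```shell
:: Tell NPS to use RADIUS attribute 31 (Calling-Station-ID) as the user name
:: for MAC address authorization. Restart the NPS service afterwards.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\RemoteAccess\Policy" /v "User Identity Attribute" /t REG_DWORD /d 31 /f
```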

Step 4: Add a new AD user account for each MAC device
The new user account must be named (all lower case, with no spaces or dashes) exactly as the MAC address of each connecting non-802.1x device, e.g. in aa00bb11cc22 format. Its password must also be set to the same MAC address string. Creating such accounts might therefore fail due to the domain's password complexity policy. The good news is that we can use a Fine-Grained Password Policy to overcome it.
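Creating one account per device can be scripted; a minimal sketch with the ActiveDirectory PowerShell module (the OU path and MAC value are placeholders):

```shell
# Create one AD account per non-802.1x device, named after its MAC address.
# The password is the same string, so a Fine-Grained Password Policy that
# relaxes complexity must apply to these accounts.
Import-Module ActiveDirectory
$mac = "aa00bb11cc22"   # placeholder MAC: lower case, no separators
New-ADUser -Name $mac -SamAccountName $mac `
    -Path "OU=MAB Devices,DC=example,DC=com" `
    -AccountPassword (ConvertTo-SecureString $mac -AsPlainText -Force) `
    -PasswordNeverExpires $true -Enabled $true
```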

Step 5: Test it
Connect a non-802.1x device and test. Observe the outcome on the event viewer of the NPS server. Take note of any errors and troubleshoot accordingly.

Wednesday, November 6, 2013

Setting up SQL AlwaysOn cluster for SCVMM 2012

SQL AlwaysOn is a new feature in MS SQL 2012 that supports SQL clustering without the need for shared storage. The primary node hosts the database file on its local storage and syncs it with the standby copy on the secondary node. Since SCVMM 2012 supports this feature for HA, I followed this Technet blog for guidance.

I installed SQL Server as standalone on each node using default values. The objective is to create an Availability Group Listener object for VMM to connect to the new database. I managed to follow through until the last step, where replication of the test database failed.

After searching through the Internet, I realized that the default settings are not suitable for SQL HA.

1) Always use domain-based managed service account for SQL instance (don't leave it to Network service).

2) Ensure the service account has access right to the end-point listener.

  • On the SQL Management console, find out the names of the mirroring endpoints:
  SELECT name, role_desc, state_desc FROM sys.database_mirroring_endpoints
  • Grant the service account CONNECT rights to this endpoint:
  GRANT CONNECT ON ENDPOINT::{Mirroring_Endpoint} TO [Adomain\Otheruser]

3) After creating the new availability group successfully, check that the TestDB synchronization state on both nodes is "synchronized" (not "synchronizing" or "not synchronized"). Right-click on the AG and click "Show Dashboard".

4) If they are not synchronized, click on "Properties" of the AG and change all availability modes to "Synchronous Commit". Then test the failover manually.
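The synchronization state can also be checked from the command line instead of the dashboard; a sketch using sqlcmd against the AlwaysOn DMVs (server name is a placeholder):

```shell
# Query per-replica database sync state on each node.
# Expect synchronization_state_desc = SYNCHRONIZED on both replicas.
sqlcmd -S localhost -Q "SELECT database_id, synchronization_state_desc, synchronization_health_desc FROM sys.dm_hadr_database_replica_states"
```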

Thursday, October 31, 2013

Single Sign On for RD Web Access (WS2012)

By default, form-based authentication is enabled on the Web Access portal for VDI. I was googling around on how to do SSO authentication. Most sites advise editing the Web.config file of the RD Web host, which didn't work well. Finally, I found one post that works. Here's the extract:

OK, here are my results so far.
1) You should not edit the web.config file manually. Using comment symbols corrupts this file, so IIS cannot interpret it properly (this is the cause of the 'HTTP 500 Internal server error' message). Instead, you should use the IIS Management Console to do the task.
Start this console and go to Sites -> Default Web Site -> RDWeb -> Pages (left-click on 'Pages' in the left column). In the right part of the console, under the 'IIS' section, double-click the 'Authentication' icon. Disable both the Anonymous and Forms authentication methods, and enable 'Windows Authentication'.
If you try to access the web interface now, you'll get a popup window which asks for your login and password. This is the expected behavior.
2) On the endpoint (user PC) set Internet Explorer options to allow pass-through authentication. It could be done via IE settings for each user personally, but if you have many users you should use group policy:
* Add your Desktop Broker server to the Trusted Sites zone: go to User/Computer Configuration -> Administrative Templates -> Windows Components -> Internet Explorer -> Internet Control Panel -> Security. Open the 'Site to Zone assignment list' setting, enable it, and map the Broker server FQDN to zone 2.
* Enable automatic logon: go to User/Computer Configuration -> Administrative Templates -> Windows Components -> Internet Explorer -> Internet Control Panel -> Security -> Trusted Sites Zone. Open the 'Logon options' setting, enable it, and make sure that the following option is selected in the drop-down list: 'Automatic logon with current username and password'.
3) In addition, the actions mentioned above should be executed (I repeat the description here for readers of the thread to have the full list):
* Enable SSO on the RDS clients.
 ---- In the group policy applied to the RDS client, under Administrative Templates -> System -> Credentials Delegation, enable the policies "Allow Delegating Default Credentials" and "Allow Delegating Default Credentials with NTLM-only Server Authentication".
 ---- Setting both to "termsrv/*" allows delegation for all terminal servers; you may also specify the server FQDN.
* Open the RDWeb page. Before clicking a pool name make sure the below check box is checked: 'I am using a private computer that complies with my organization's security policy.'
After that, single sign-on works nicely if I access the client PC by entering my login and password manually. However, if I log in to the workstation using a smart card, I can still access the web interface seamlessly, but after I click on a pool name, the RDP client asks for a login and password (or smart card PIN). I tried to enable the Kerberos authentication provider in Windows Authentication in IIS, but it did not change the situation.
I begin to wonder whether the task has a solution at all. I've found the following article: http://blogs.msdn.com/b/rds/archive/2007/04/19/how-to-enable-single-sign-on-for-my-terminal-server-connections.aspx It says there that 'Single Sign-on only works with Passwords. Does not work with Smartcards'. The article was last modified four years ago. Is this statement still valid?

Thursday, October 17, 2013

3rd Party Software Patching with Solarwinds Patch Manager

Microsoft did an excellent job by providing a highly scalable and relatively simple software patch management system called "Windows Server Update Services", or WSUS in short. However, only Microsoft updates are supported, and 3rd party software patching for Windows OS has always proved challenging. You need another Microsoft product called System Center Updates Publisher (SCUP), which in turn has to be deployed in conjunction with System Center Configuration Manager (SCCM) - another complex product. You probably just need something simpler to add on to your existing WSUS infrastructure for patching the outdated Sun Java or Adobe Flash in your environment.

I came across Solarwinds Patch Manager, which claims to support 3rd party patching with just a WSUS server - no SCUP or SCCM needed. Hence, I decided to give its free 30-day eval a try. Since it claims to be so easy, I tried to install it without reading the manual. Soon after, I realized that's just bait to get you started, and I had to go back and read the admin guide from scratch.

I won't repeat the entire setup procedure. I would just summarize the steps:

  1. Install Patch Manager and follow the wizard.
  2. Add some service accounts that have admin rights over your WSUS servers to the credential ring.
  3. Add the existing WSUS servers and other AD information into Patch Manager using the "Managed Resource Enterprise Configuration Wizard".
  4. For signing 3rd party patches, generate a WSUS self-signed publishing certificate using the Server Publishing Setup Wizard. The security folks may not like it - I tried using a code-signing PFX from the enterprise PKI, but it simply refused to accept it.
  5. Since the digital signature certificate is self-signed, it has to be distributed to the "Trusted Root CA" store of all update clients. You can distribute the self-signed cert using either the "Client Publishing Wizard" in Solarwinds or Group Policy.
  6. Remember to enable "Allow signed updates from an intranet Microsoft update service location" under "Windows Components/Windows Update" in the GPO settings. Otherwise, the update clients would accept only Microsoft-signed updates (the default).
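Under the hood, that GPO setting maps to a registry value on the clients; a sketch for spot-checking a single machine (use the GPO in production rather than editing registries by hand):

```shell
:: "Allow signed updates from an intranet Microsoft update service location"
:: corresponds to AcceptTrustedPublisherCerts = 1 under the WindowsUpdate policy key.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v AcceptTrustedPublisherCerts /t REG_DWORD /d 1 /f
```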

For 3rd party software patching, here are the high level steps (all within the Solarwinds Patch Manager console):

  1. Download the 3rd party patches - usually in exe format.
  2. Convert and transform the file into a *.cab file, which is then signed with the WSUS self-signed cert.
  3. Publish the updates to the WSUS server.

There is also a demo video guide on it.

Now, I tried updating the outdated Java on my first update client. Excitedly, I saw this when I clicked "Check for updates" in the security control panel. I thought it was going to succeed.

Alas, there was error 0x800b0004 during installation. Going to "WindowsUpdate.log" in the "C:\Windows" folder, I saw the following entries:
Validating signature for C:\Windows\SoftwareDistribution\Download\898736be7c675a750734920f38c55636\66b811b903ecd87fef17e4dc58d2aaa52688917b:
Misc Microsoft signed: No
Trusted Publisher: No
WARNING: Digital Signatures on file C:\Windows\SoftwareDistribution\Download\898736be7c675a750734920f38c55636\66b811b903ecd87fef17e4dc58d2aaa52688917b are not trusted: Error 0x800b0004

Using the Certificates MMC, I verified that the self-signed WSUS certificate was in the Trusted Root CA store. So why did Windows refuse to trust the digital signature? I looked at the error again and noticed that the digital signature was not from a Trusted Publisher.

I manually added the self-signed cert to the "Trusted Publishers" store on the update client. After retrying the update, the success message appeared:
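Adding the certificate to that store can also be done from the command line instead of the MMC; a sketch assuming the cert was first exported to a file named wsus-publisher.cer (the filename is a placeholder):

```shell
:: Import the WSUS self-signed publishing certificate into the local machine's
:: Trusted Publishers store (it must also remain in the Trusted Root CA store).
certutil -addstore TrustedPublisher wsus-publisher.cer
```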

Wednesday, September 18, 2013