SharePoint Three-Tier Network Zoning Architecture

This topic came up because some organizations have policies requiring SharePoint to comply with a "three-tier architecture": WFE, APP and DB servers must sit in three separate network zones, and the WFEs CANNOT connect to the databases directly! It is common to see SharePoint farm servers placed in three zones, but most of the time the WFEs are still allowed to connect to the DB server, which is not acceptable to organizations that have the "three-tier" policy in place.

In some projects I have seen a proxy server implemented in the APP zone to bridge the WFEs and the DBs. It works, but that proxy can easily become a bottleneck in larger implementations. So far the most common approach is to place one or more reverse proxies in the Web zone as the web layer, and shift the SharePoint WFEs into the App zone together with the App servers.

{update April 2018} What I propose in this post is an alternative using SharePoint's own technologies. However, based on practical experience, this approach has remained more theoretical than practical; so far there is no production implementation. {/update April 2018}

This alternative approach leverages SharePoint's Request Management feature. With it, you could say SharePoint actually supports the three-tier requirement. How Request Manager works is elaborated on TechNet. The design here is to:

  • Put the WFEs in the APP Zone so they can connect to the databases directly.
  • Put a dedicated request management farm in the Web Zone in between the Load Balancer and the WFEs, with databases in the same zone.

Good 3-tier Architecture

This way, the Request Management layer serves as the web layer interacting with user requests. The WFEs in the APP Zone become the content-serving components that retrieve data from the databases at the back end, while remaining in the secure internal APP network zone.
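To make this more concrete, below is a rough sketch of how routing could be pointed at the WFEs sitting in the APP Zone, using the standard Request Manager cmdlets. The web application URL and machine names are placeholders, and the rules and pools would need to match your own topology; treat it as an illustration rather than a tested configuration.

Add-PSSnapin Microsoft.SharePoint.PowerShell

#Placeholder web application URL and server names - adjust to your own farm.
$webApp = Get-SPWebApplication "https://portal.contoso.com"
$rmSettings = $webApp | Get-SPRequestManagementSettings

#Enable routing on the web application.
Set-SPRequestManagementSettings -Identity $rmSettings -RoutingEnabled $true

#Register the WFEs that live in the APP Zone as routing targets.
Add-SPRoutingMachineInfo -RequestManagementSettings $rmSettings -Name "APP-WFE01" -Availability Available
Add-SPRoutingMachineInfo -RequestManagementSettings $rmSettings -Name "APP-WFE02" -Availability Available

#Group them into a machine pool and route all requests to that pool.
$targets = Get-SPRoutingMachineInfo -RequestManagementSettings $rmSettings
$pool = Add-SPRoutingMachinePool -RequestManagementSettings $rmSettings -Name "AppZoneWFEs" -MachineTargets $targets
$criteria = New-SPRequestManagementRuleCriteria -Property Url -MatchType Regex -Value ".*"
Add-SPRoutingRule -RequestManagementSettings $rmSettings -Name "RouteAllToAppZone" -Criteria $criteria -MachinePool $pool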

Q: Is it OK to place a database server of the Request Management farm in the Web Zone?

A: Yes. There is no user data stored in that database, only farm service configuration data. The firewall rules should also restrict network traffic to the DB server so that it comes only from the Request Management farm servers.

Q: When the actual WFEs in the APP Zone serve content back to the users, do they transfer it to the users directly?

A: No, they respond to the users through the Request Management farm. This is actually the key point that makes the architecture above possible, as users are not able to reach the APP Zone directly.

Q: Should Office Web Apps Server be placed in the Web Zone or the same zone as the actual WFEs?

A: It must be reachable by the end users directly, so it is more suitable to place it in the Web Zone.

PS: The diagram below shows a bad design example that uses a proxy:

Bad 3-tier Architecture

Which VM to Blame?

You have a Hyper-V host whose disk space is reaching its limit, even though it has 4 TB of storage.

disk-full

Get-VM does not really tell you how much disk space each VM occupies; the space is mainly taken by the VHDs. I have put together a script that outputs an inventory report of the VMs covering CPU cores, memory settings and total VHD size. This gives you an idea of which VMs take the most space on your host.

The PowerShell script can be downloaded from TechNet Gallery: Get Disk Space Used by VMs
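If you just want the core idea without downloading anything, a minimal sketch is below, assuming the Hyper-V PowerShell module is available on the host. The actual gallery script produces a more detailed report.

#Sum the on-disk size of each VM's virtual disks and report it together with CPU and memory settings.
Get-VM | ForEach-Object {
    $vm = $_
    $vhdBytes = ($vm | Get-VMHardDiskDrive | ForEach-Object { (Get-VHD -Path $_.Path).FileSize } | Measure-Object -Sum).Sum
    [PSCustomObject]@{
        Name            = $vm.Name
        ProcessorCount  = $vm.ProcessorCount
        MemoryStartupGB = [math]::Round($vm.MemoryStartup / 1GB, 2)
        VhdSizeGB       = [math]::Round($vhdBytes / 1GB, 2)
    }
} | Sort-Object VhdSizeGB -Descending | Format-Table -AutoSize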

Below is an example of the report in the PowerShell window:

table-pswindow

Below is the same report output as HTML:

html-report

Automate it with PowerShell – Search and Replace Strings across multiple text-based files

When setting up a new demo environment that is similar to your existing ones, you may need to change just one or a few parameters across multiple script files. This tool helps you accomplish the task with just one line of PowerShell.

Function Replace-String {
<#
.SYNOPSIS
Replace-String finds a string of text that matches the criteria across multiple files, and replaces it with the specified new string.
.DESCRIPTION
This command searches through the specified directory or files, obtains the content of each file with the Get-Content cmdlet, finds and replaces matching strings within that content, and sets the new text as the content of the original file.
.PARAMETER folderPath
Accepts one directory path. If specified, all the files within the folder (not including the subfolders) will be in scope for the text search.
.PARAMETER file
Accepts the path of one or more files, e.g. "F:\temp\test\profiles.csv"
.PARAMETER oldString
Accepts a string, Regex supported. This specifies the target string to find and to replace.
.PARAMETER newString
Accepts a string. This specifies the new string with which the older strings get replaced.
.EXAMPLE
This example finds all the files within the F:\temp\test folder, and replaces strings that match the pattern "project" + two digits with "project25".
Replace-String -folderPath 'F:\temp\test' -oldString "project\d{2}" -newString 'project25'
.EXAMPLE
This example processes the file "F:\temp\test\profiles.csv", and replaces strings that match the pattern "project" + two digits with "project25".
Replace-String -file 'F:\temp\test\profiles.csv' -oldString "project\d{2}" -newString 'project25'
#>

[CmdletBinding()]
param (
    [Parameter(Mandatory=$False)]
    [string]
    $folderPath,
    [Parameter(Mandatory=$False)]
    [string[]]
    $file,
    [Parameter(Mandatory=$True)]
    [string]
    $oldString,
    [Parameter(Mandatory=$True)]
    [string]
    $newString
)
#If the user specifies a folder path, find all the files in that folder (not including subfolders), and replace the matching string of text with the new string.
#The reason the file types are restricted is that Set-Content can corrupt files that are not text based, such as Office documents and pictures.
#Only use it with file types that can be edited through Notepad.
if ($folderPath -ne '') {
    Get-ChildItem -Path ($folderPath+"\*") -File -Include *.xml,*.txt,*.ps1,*.csv | ForEach-Object {
        (Get-Content $PSItem.FullName) -Replace $oldString,$newString | Set-Content -Path $PSItem.FullName;
    }
}
#If one or more files are specified instead of a folder, only find and replace strings within the specified files.
elseif ($file) {
    $file | ForEach-Object {
        (Get-Content $PSItem) -Replace $oldString,$newString | Set-Content -Path $PSItem;
    }
}
else {
    Write-Host "Warning: You need to specify what file(s) to process! Specify a file or file path and try again" -ForegroundColor Red;
}
}
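Once the function is loaded into the session, the task really is the one-liner promised above. The script path below is just an example of where you might have saved the function:

#Dot-source the file that contains the function (example path), then run it against a folder of scripts.
. 'F:\scripts\Replace-String.ps1'
Replace-String -folderPath 'F:\temp\test' -oldString "project\d{2}" -newString 'project25'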

Real Zero-Downtime Patching in SharePoint 2016

It is exciting to have Zero-Downtime Patching (ZDP) capability in SharePoint 2016. However, it requires more effort than most of us might have initially thought. While it is straightforward for the other components, it requires a bit more care for Distributed Cache.

Distributed Cache doesn’t support High Availability the way that other services do. While multiple Distributed Cache servers in your SharePoint farm help distribute the load, the data cached on each Distributed Cache server is NOT replicated to the other Distributed Cache servers. If a Distributed Cache server unexpectedly goes down, the data cached on that server is lost. That means if you patch and upgrade a Distributed Cache server without gracefully shutting it down, you will cause data loss!

One may argue that with three Distributed Cache hosts in the farm, the Distributed Cache is highly available because AppFabric has a cluster quorum model. That is not true, as SharePoint does not use that model!

Therefore, if you have workloads that depend heavily on Distributed Cache and have high availability requirements, add a graceful shutdown of the Distributed Cache service to the patching process to achieve true zero downtime.

Gracefully shut down the Distributed Cache service: https://technet.microsoft.com/en-us/library/jj219613.aspx#graceful
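For reference, a minimal sketch of that graceful shutdown step is below, based on the documented Stop-SPDistributedCacheServiceInstance cmdlet. Run it on the cache host you are about to patch; the exact sequence in your change window may differ.

Add-PSSnapin Microsoft.SharePoint.PowerShell

#Gracefully stop the local Distributed Cache instance. The -Graceful switch transfers
#the cached data to the remaining cache hosts before stopping, so nothing is lost.
Stop-SPDistributedCacheServiceInstance -Graceful

#...install the patches and reboot the server if required...

#Add the server back to the cache cluster once patching is done.
Add-SPDistributedCacheServiceInstance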

Monitor SharePoint 2013 Search Components with PowerShell

This is a prototype of a PowerShell script that monitors the status of each component of the Search Service Application in SharePoint 2013. The script can be saved to a .ps1 file and run by the Windows Task Scheduler periodically. If it detects that any component is not in the “Active” state, it automatically sends an email to the administrator.


Add-PSSnapin Microsoft.SharePoint.PowerShell
#Declare variables for later use.
$ssa = Get-SPEnterpriseSearchServiceApplication
$status = Get-SPEnterpriseSearchStatus -SearchApplication $ssa
#Create an empty array to store any component that is not active.
$unhealthy = @()
#Loop through each component status, and store any one that is not active to the array.
$number = 0
$status | foreach {
    if ($_.state -ne "active") {
        $number++
        $unhealthy += $number.ToString() + ". " + $_.name + "`n"
    }
}

#If there is any component that is not active, send an email to the admin with the component name in the email body.
if ($unhealthy.count -gt 0) {
    $result = "The components below are not active:`n " + $unhealthy
    $params = @{
        'To'         = 'whomitmayconcern@company.com'
        'From'       = 'admin@company.com'
        'Subject'    = 'Attention! Search Service Components Unhealthy'
        'Body'       = $result
        'SMTPServer' = 'smtp.contoso.com'
    }
    Send-MailMessage @params
}

The script has been tested on a single-server farm with a single Search Service Application. If your scenario is different, adjust the script accordingly.
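If you want to register the script with the Task Scheduler from PowerShell rather than through the GUI, a rough sketch is below. The script path, task name and schedule are placeholders, and the ScheduledTasks module requires Windows Server 2012 or later; on older hosts use the Task Scheduler GUI or schtasks.exe instead.

#Register the monitoring script as a daily scheduled task (adjust path, time and credentials to your environment).
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\Monitor-SearchComponents.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Monitor SharePoint Search Components" -Action $action -Trigger $trigger -RunLevel Highest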

Batch Enabling Auditing across Many SharePoint Sites

If you are only looking for the script and are not interested in what else I have to say, just grab it here:

#Define the function Enable-Auditing. The URL parameter accepts pipeline input. It also enables log trimming, and log retention time is set to 30 days. This part is kind of "hardcoded", but it should not be too difficult to change it. 
function Enable-Auditing {
    param(
        [Parameter(Mandatory=$True,ValueFromPipeline=$True,ValueFromPipelineByPropertyName=$True)]$Url,
        [Parameter(Mandatory=$True)]$AuditedActions
    )
    #Use a process block so that each URL received from the pipeline is handled.
    process {
        $site = Get-SPSite $Url;
        $site.TrimAuditLog = $true;
        $site.AuditLogTrimmingRetention = 30;
        $site.Audit.AuditFlags = $AuditedActions;
        $site.Audit.Update();
        $site.Dispose();
    }
}
<# 
Run the commands to apply the settings to specific Site Collections. For the -AuditedActions parameter, input any of the following:
"All" to audit all auditable actions.
"None" to disable auditing.
An array of action names to audit a specific set of actions, e.g. "Update", "Delete", "Search".
Check MSDN documentation for a complete list of auditable actions: https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spauditmasktype.aspx
#>
Enable-Auditing -URL https://teamsite.contoso.com -AuditedActions "Update", "Delete"

As the -Url parameter accepts pipeline input, a batch action can be applied to multiple site collections with a single command such as the one below:

#The command below enables auditing Update and Delete actions on all Site Collections whose URL contains "hr".
Get-SPSite -WebApplication http://teamsite.contoso.com -Limit All | ? {$_.url -like "*hr*"} | ForEach-Object {Enable-Auditing -Url $_.url -AuditedActions "Update", "Delete"}

OK, if you are interested in my monologue discussing this function, please read on. Otherwise, the content above is all you need.

How do you make sure auditing is enabled all the time? How many Site Collections do you need to manage? Does each Site Collection have its own Site Collection Administrator?

Site Collection Administrators have the permission to change site audit settings. That is a potential risk, since they can intentionally or unintentionally change the audit settings, while auditing is usually an organization-wide policy that needs to be enforced. Turning on unnecessary auditing is bad as well, because it makes the content DB grow faster: a single audit log entry is about 1 KB, so 1,000 people visiting 100 locations in a day already adds roughly 100,000 entries, or around 100 MB of audit data, per day!

One solution is to create a PowerShell script, run by the Task Scheduler, that enforces the auditing policies, including:

  • Actions to audit
  • Whether to enable audit log trimming
  • If log trimming is enabled, how many days of logs to retain

In SharePoint Management Shell, there is no direct cmdlet for this purpose yet. We can define a function to make batch operations easier.

If you run Get-Member on a SPSite object, you will find that there are a few properties related to auditing:

  • Audit
  • AuditLogTrimmingCallout
  • AuditLogTrimmingRetention

To enable or disable auditing, the trick is to set the value of the SPSite.Audit.AuditFlags property. Based on tests, it accepts a string or an array of strings. Hence the code at the beginning of this post.
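If you want to double-check what a site collection is currently auditing, a quick inspection along the same lines looks like this (run in the SharePoint Management Shell; the URL is just an example):

$site = Get-SPSite https://teamsite.contoso.com
$site | Get-Member -Name Audit*      #lists the audit-related properties mentioned above
$site.Audit.AuditFlags               #e.g. Update, Delete - or None if auditing is disabled
$site.AuditLogTrimmingRetention      #retention in days when log trimming is enabled
$site.Dispose()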

How to hyperlink to a specific location on a long web page?

For those looking for a quick answer, here it is:

Use this pattern in the address bar: <URL>#<HTML Element ID>

For example:

https://technet.microsoft.com/en-SG/library/cc262787.aspx#ContentDB

https://msdn.microsoft.com/en-sg/library/ff877884.aspx#AvailabilityModes

If you would like to read a discussion of this topic, feel free to read on. Otherwise, the answer above is all you need to know. 😉

When you are sharing a webpage with others, it may be frustrating for the reader to find the exact content you are referring to when the page is long. What if you could direct the reader to the exact location through the hyperlink you share? For example, on a lengthy TechNet article about SharePoint limitations, the reader is taken directly to the Content Database limitations when opening the page with this hyperlink: https://technet.microsoft.com/en-SG/library/cc262787.aspx#ContentDB

The trick lies in the suffix “#ContentDB” in the URL. So the question this post tries to answer is: how do you determine what to add to the end of the URL so that readers land directly on a specific location on the page?

We know that in HTML we can assign an ID to a tag, such as <p id="something"></p>, which is the unique identifier of that specific element. You can then use this ID to locate the content you want to share. Not every element on a webpage has an ID attribute, though, so having an ID is the prerequisite for locating the content directly.

How do you find the ID of the location you are sharing, if there is one? There are two ways: the easy way and the hard way.

The easy way applies when there are internal hyperlinks on the webpage, i.e. hyperlinks pointing to locations on the same page. In this case you can copy the hyperlink directly and share it with others. For example, on TechNet articles you constantly see hyperlinks within the same page.

 Hyperlinks

The hard way comes when there is no internal hyperlink on the webpage. You will need to check the source code of the webpage for any ID that can be used.

One-pic-4-a-thousand-words