Thursday, April 27, 2017

SharePoint and PowerShell: Get all Site Collections by Template

With this script you can check your farm for all site collections that use a given template; in my example, it is the Community Site.
Template IDs:
Community Site (COMMUNITY#0, 62)
Community Portal (COMMUNITYPORTAL#0, 63)
Community Area Template (SPSCOMMU#0, 36)


How did I get the web template ID?
$web = Get-SPWeb http://domain/site/yourcommunitysite
Write-Host "Template:" $web.WebTemplate " |ID:" $web.WebTemplateId
$web.Dispose()
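The full script was not preserved above; here is a minimal sketch of what the post describes, looping over every site collection in the farm and filtering by root web template (the template name is taken from the list above; everything else is an assumption, not the author's original code):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Template to search for; "COMMUNITY" (ID 62) is the Community Site from the list above
$templateName = "COMMUNITY"

# Walk every site collection in the farm and check the root web's template
Get-SPSite -Limit All | ForEach-Object {
    $web = $_.RootWeb
    if ($web.WebTemplate -eq $templateName) {
        Write-Host "Found:" $_.Url "| Template:" $web.WebTemplate "| ID:" $web.WebTemplateId
    }
    $web.Dispose()
    $_.Dispose()
}
```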

Wednesday, April 26, 2017

SharePoint and PowerShell: Get all Site Collections that are using Nintex

Really simple: run through every site collection of a web application and compare the activated feature definition IDs with the GUIDs provided in the array. Of course, you can change the GUIDs in the array and use this script for other features as well. And yes, you will get a tiny .txt at the end; remove the header and footer and use the file in Excel or in a SharePoint list!
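The script itself was not preserved above; the following is a hedged sketch of the approach described. The GUIDs, output path, and web application URL are placeholders (they are not the real Nintex feature IDs), so substitute your own:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder GUIDs - replace with the feature definition IDs you are looking for
$featureIds = @(
    [Guid]"00000000-0000-0000-0000-000000000001",
    [Guid]"00000000-0000-0000-0000-000000000002"
)
$output = "C:\Temp\FeatureSites.txt"                  # placeholder output path

$webApp = Get-SPWebApplication "http://yourwebapp"    # placeholder web application URL
foreach ($site in $webApp.Sites) {
    foreach ($id in $featureIds) {
        # A non-null entry means the feature is activated on this site collection
        if ($site.Features[$id] -ne $null) {
            "$($site.Url);$id" | Out-File $output -Append
        }
    }
    $site.Dispose()
}
```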

Tuesday, April 25, 2017

SharePoint and PowerShell: Get Site Collections that are using InfoPath

This script will crawl through a SharePoint web application and output every site collection and library where InfoPath is found, specifically the site owner, the site URL, and the library URL.
Within the foreach, the script checks whether the base template "XMLForm" is used or whether InfoPath is enabled.
Oh! And of course you will get a .txt at the end; remove the header and the footer and use it in Excel or in a SharePoint list!
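The script was not preserved above; a sketch of the XMLForm check it describes follows (the output path and web application URL are placeholders, and the additional "InfoPath enabled" check mentioned in the post is omitted here):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$output = "C:\Temp\InfoPathSites.txt"                 # placeholder output path
$webApp = Get-SPWebApplication "http://yourwebapp"    # placeholder web application URL

foreach ($site in $webApp.Sites) {
    foreach ($web in $site.AllWebs) {
        foreach ($list in $web.Lists) {
            # "XMLForm" is the base template of InfoPath form libraries
            if ($list.BaseTemplate -eq "XMLForm") {
                "$($site.Owner.Name);$($web.Url);$($list.DefaultViewUrl)" | Out-File $output -Append
            }
        }
        $web.Dispose()
    }
    $site.Dispose()
}
```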



Matthew McDermott wrote a similar script before I did; check it out here: .

Tuesday, December 20, 2016

SharePoint 2013: An unrecognized HTTP response was received when attempting to crawl this item

I got this error message:
The start address http://somesite cannot be crawled.
Context: Application 'Search_Service_Application', Catalog 'Portal_Content'
Details:
An unrecognized HTTP response was received when attempting to crawl this item. Verify whether the item can be accessed using your browser.   (0x80041204)
  
The start address https://somesite cannot be crawled.
Context: Application 'Search_Service_Application', Catalog 'Portal_Content'
Details:
Item not crawled due to one of the following reasons: Preventive crawl rule; Specified content source hops/depth exceeded; URL has query string parameter; Required protocol handler not found; Preventive robots directive. (0x80040d07)

After recreating the search, making various changes to the content sources, and fixing the typical missing permissions, the customer was basically stuck with a broken search. After taking a look at the settings and the web.config, I found this:
<httpProtocol>
  <customHeaders>
    <add name="X-Content-Type-Options" value="nosniff" />
    <add name="X-MS-InvokeApp" value="1; RequireReadOnly" />
  </customHeaders>
</httpProtocol>

What does this do?
1. <add name="X-Content-Type-Options" value="nosniff" />
Every response carries a declared MIME type, which can differ from the file's actual content. Without this header, Internet Explorer inspects ("sniffs") the content and may handle the file differently than declared, choosing another application or treatment for it. That is a security issue: for example, if someone uploads a modified JPEG with a script embedded, IE may detect the script, override the declared image MIME type, and execute the code. The "nosniff" value tells IE to honor the declared MIME type and skip content sniffing.

2. <add name="X-MS-InvokeApp" value="1; RequireReadOnly" />
With InvokeApp, Internet Explorer can start an application (such as Office) and hand the URL over to it. With "RequireReadOnly" set in this line, files only open in a read-only state.

It is fine to remove those lines if you are not running an external website in your SharePoint environment; as soon as you allow anonymous access, you should put them back in. Those headers, however, also cause issues with the search crawler. Removing them fixed my issues.

Saturday, October 22, 2016

Windows Container: Create local user on microsoft/nanoserver

Last time I showed you how to download Windows Container images; this time we will work with one of the containers!
First of all, download the microsoft/nanoserver image: docker pull microsoft/nanoserver
Now run the image: docker run -it microsoft/nanoserver cmd
The command "cmd" opens a command prompt in your container. To add users, open PowerShell by running powershell.exe in this container's cmd window.
And now we can add new users by using this command:
Finally add the user to the Administrators group:
For verification, run this script:
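The commands themselves were not preserved above; the following is a sketch assuming the classic net commands are available inside the container's PowerShell session (the username and password are placeholders):

```powershell
# Placeholder account - run inside the container's PowerShell session
net user demouser 'P@ssw0rd123' /add

# Add the new user to the local Administrators group
net localgroup Administrators demouser /add

# Verify: list the members of the Administrators group
net localgroup Administrators
```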