[Azure] Building a bot using DLL / WebProject

In previous articles I explained how you can build bots using the Microsoft Bot Framework and the Azure Bot Service. The latter is built on top of Azure Functions, one of my favorite components in Azure. Both the Functions and Bot teams are releasing stuff at a fast pace, but sometimes this leads to the two not being 100% in sync with each other. This post addresses one of these issues, namely the Bot Service providing old-style templates for new instances.

When you create a new Bot Service instance and download the code, you get a solution with .CSX files. These are used in Azure Functions and they still work great. The issue though is that when you load these in Visual Studio and want to debug your code locally, there’s no IntelliSense to go with them. Although this is on the team’s backlog, it’s not there yet, and if you’re a VS dev like I am, you probably can’t live without it 🙂


.NET Class library as Function App

Two months ago, Donna Malayeri (who’s doing absolutely awesome work on the Functions team) wrote this post detailing how you can build a web project which uses the local functions runtime to host and debug the code. This brings two great worlds together: it allows you to build code in Visual Studio with all the benefits (IntelliSense!) whilst utilizing the func.exe CLI runtime as well.

The post does a very good job at explaining how to set this up, but what if you want this for your Bot Service instances?


Converting a Bot Service

To convert your bot service instance to a Web Project, here’s what you need:

  • A web project (well duh…).
  • The CSX files that you want to convert to ‘regular’ C# classes.
  • The function.json file that defines the endpoint for your bot.
  • An appsettings file, should you have one. This is typically where your Microsoft App ID and password are stored.
  • The project.json file. You don’t really need this, but it’s handy to look up which packages your instance is using.

And here are the steps:

  • Follow Donna’s post to set up a new web project.
  • When creating the classes, create one for your dialog and one for your entry point. You can combine them as well of course, but I personally like to separate them. In the example these are named Dialog.cs and DialogEntryPoint.cs.
  • DialogEntryPoint.cs will contain the contents of the ‘run.csx’ file. This is the entry point that’s being called when the user communicates with your bot. It’s referenced in function.json as follows:

    Note the scriptFile and entryPoint settings which point to the output DLL and the class/method which handles the incoming message.
  • Dialog.cs contains your dialog code.
  • Ensure that you set up the project to load the correct NuGet packages. Normally you will need Microsoft.Bot.Builder.Azure, which will pull in some dependencies. If you include this, ensure the project is set up to run .NET Framework version 4.6 instead of 4.5.
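For reference, the function.json referenced above might look roughly like this. This is a hedged sketch: the assembly name, namespace, class/method names and route are illustrative placeholders, not taken from the original sample.

```json
{
  "scriptFile": "..\\bin\\BotAsWebProject.dll",
  "entryPoint": "BotAsWebProject.DialogEntryPoint.Run",
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [ "post" ],
      "route": "messages"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false
}
```

The scriptFile points at the compiled DLL in the project’s output folder, and entryPoint is the fully qualified class plus the method that handles the incoming message.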

Lastly, you need to configure the start options of the project. This is detailed in Donna’s post as well, but for your reference, below are the settings I used. Note that the location of func.exe might differ based on the installation type you used.
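For reference, a sketch of those start options (the paths are examples only; where func.exe ends up depends on how you installed the Azure Functions CLI):

```text
Start external program:  C:\Users\<you>\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe
Command line arguments:  host start
Working directory:       the folder containing host.json and your function folder, e.g. $(ProjectDir)
```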


With everything configured, running the project should now start your bot in the func.exe runtime and hook up VS at the same time for local debugging! Awesome!



Sample @ GitHub

If you’re struggling, I took the liberty of adjusting the sample project from the blog post and making a bot-specific one from it. You can use it to see how it has been set up. Be aware that my sample is based on the LUIS bot service, so it does require a settings file with your specific LUIS keys to actually work. Should you have any questions or remarks, feel free to leave them below!

Check out the code: https://github.com/jsiegmund/BotAsWebProject

[Azure] Cortana skills + LUIS pre-built domains

A while ago I was planning on doing a few posts on bots, but I never really got to that it seems. I did get into the bot building business though, so here’s one about a slightly more advanced use of bots: language understanding. And although I might write “more advanced”, I don’t really mean it. That’s because LUIS makes things so much easier. LUIS? Yeah, that’s short for Language Understanding Intelligent Service. And it’s awesome.

[SP20xx] Are you keeping up (part 1)?

It’s never a bad thing to look at what’s coming. This future peeking seems to be hot in the SharePoint world, with guys like Dan Holme, Benjamin Niaulin and Daniel McPherson giving their take on what’s in store for us. Interesting views which of course always include things like cloud, mobile devices and new ways of working for your end users, whatever generation you want to call them.

In my day-to-day work though, I am still mostly involved with enterprise-grade customers who might be thinking about that stuff, considering it, but who are most definitely not there yet. On the contrary: they have a long way to go. So I wanted to write this post to give an overview of things those companies can do today with their current landscape, in order to prepare for that future. And do not worry: the conclusion will not include you writing apps from now on.

In this n-part blog series, I’ll discuss some of the ‘hot topics’ and my views on what choices enterprise-grade companies need to make.


SP2013: back-up failing with “Have tried to perform backup/restore operation twice on all in-sync members”

I ran into an error with back-up procedures on SP2013. The error in the log file was:

FaultException: Management called failed with System.InvalidOperationException: 'Job failed: Have tried to perform backup/restore operation twice on all in-sync members in cluster SPe6b9ae739be3.0, but none succeeded. Last failure message: Microsoft.Ceres.SearchCore.Seeding.SnapshotTransferException: Could not send chunk ms\%default\gen.0000000000000309.state: Localpath: [0-700> to target BackupDirectoryTarget[directory=\\backupshare\spbr0001\I.0.0,validateTransfers=False]
   at Microsoft.Ceres.SearchCore.Seeding.SnapshotSender.SendChunks(ISnapshot snapshot, ISeedSource source, ISeedTarget target, SeedStatus status, Func`1 checkAborted, Int32 targetFragIndex)
   at Microsoft.Ceres.SearchCore.Seeding.SnapshotSender.FirstPhaseTransfer(ISeedSource source, ISeedTarget target, Action`1 updateProgress, Func`1 shouldAbort)
   at Microsoft.Ceres.SearchCore.Seeding.BackupWorker.BackupWork.DoFirstPhaseWork()'
   at Microsoft.Ceres.SearchCore.IndexController.BackupService.ThrowOnFailure(JobStatus status)
   at Microsoft.Ceres.SearchCore.IndexController.BackupService.ProgressFirstPhase(String handle)
   at Microsoft.Ceres.SearchCore.IndexController.IndexControllerManagementAgent.WrapCall[T](Func`2 original)

Searching for this, I found several explanations. They all have one thing in common: it is search related (pretty obvious) and means that search was unable to write to the back-up location.

It seems that search handles part of the back-up procedure from within its own processes. That is, the search service running on the SharePoint servers. This means:

– Your search servers need to be able to access the share which is being used for the back-up.
– The service account running search needs to have permissions to write to that share.
– And there should be enough space left, but that’s a no-brainer I hope.

In my case, the second bullet was where my problem was. I granted the search service account permissions on the share (Full Control), which solved the error. The same is mentioned in this blog post by Amol Meshe, but he wasn’t sure of the answer.

[SP] RIP SharePoint Developers

Ok. I admit this is a kind of dramatic opening to this blog post and that its content is probably not as dramatic as you might think. But last week at SharePoint Conference Europe, it finally hit me. The SharePoint developer is no more.

Let me explain. We have all heard Microsoft’s message about SharePoint apps since SharePoint 2013. It’s all about apps; every developer needs to learn apps. And whether you agree or not, it does sound the bell for a new age in development land. Microsoft is pushing its cloud model. And when Microsoft starts pushing, you’d better move in their direction or be prepared to move elsewhere. And to be honest, this push is probably for the best.

Let me explain, again. If you are a SharePoint developer, chances are you have been laughed at by your fellow developers. You’ve always been using old technology. Your JavaScript and jQuery skills were seriously lacking. You have been creating weird XML files with Visual Studio, which usually broke things without explaining why. We’ve all been there. There are not that many really good SharePoint developers, and that’s because you need a certain level of persistence to get there. Not everyone makes it. People are easily intimidated by everything that can (and will) go wrong in the beginning. Life as a SharePoint developer isn’t always great, that’s just a fact. So why is it wrong that we will move to a new model? Well, same reason people don’t like Windows 8. It’s weird, it’s new and we do not know it that well, yet.

It’s a fact that SharePoint developers will have to learn new skills if they want to keep up. You all know what I’m talking about: HTML5, JavaScript, funky frameworks like Angular, MVC, and the list goes on. I’m guilty myself as well, not keeping up my web development skills. Mainly because SharePoint didn’t have those things, so there simply was no need to learn. Well, times have already changed.

So why do I think the SharePoint developer is dead? Well actually, I don’t. Not yet, that is. The “yuk, SharePoint” point of view still lives on in development land. So it’s going to take a while until our hip web development colleagues finally find out SharePoint apps are nothing more than web applications which use some APIs. Sure, you’re able to deploy XML files with some lists and stuff, but why would you do that when you can also do the same in provisioning code? Code is easier to create, better to maintain and every developer gets it. So screw those XML files. Really, apart from a different project type in Visual Studio, a SharePoint app isn’t that much different from any other web application when you take a close look.

When companies start using their normal web development guys to do SharePoint projects, those same companies will find the learning curve to be much less steep than it has been before. It’s far easier to teach an existing web dev some APIs and call them a “SharePoint developer”. And since there is a big market out there for those same SharePoint developers, why wouldn’t they?

When that happens, we’re done. Our app buddies will create cooler things in less time. They will probably use nothing out of SharePoint, maybe store a document or two via the APIs. But companies will still be happy as their apps run “inside SharePoint”, are “fully integrated” and now even look cool, are responsive and much more user friendly than before. Uh oh…

This was my biggest takeaway from last week’s conference. Whether we believe in the app model or not, it will drive change. And whether that change is the best for SharePoint, I honestly do not know. What I do know is that you as a developer need to start thinking about this and begin to train yourself if you want to keep up. And believe me, I have been skeptical as well, very much so. But I have now seen the light and will start doing some Angular in my evening hours. Times are changing!

“Content type is in use” when editing Document Set

I ran into a weird problem today, when editing a document set. I created my own content type inheriting from the default Document Set. I added a new content type to the set and wanted to delete the default “Document” reference. I got an error telling me: “Content Type is in use”. Strange, since I hadn’t even deployed this Document Set type to any library.

After some fiddling around, I finally found out what the problem was. Under “Default Content”, the drop-down box will have “Document” selected. And even when you didn’t specify a file as default content, that still counts as “in use”. So simply change the drop-down selection to your own custom content type and voilà: you can now remove the default document from the allowed content types list.

Why the ESM is a good idea

Today the Dutch House of Representatives voted on the ESM: the European Stability Mechanism. Some of you are not too happy about that, especially people who weren’t very pro-Europe to begin with. I find the comments below the article on nu.nl, which you can read here, downright depressing. I won’t be petty and quote a few of the really dumb ones; read them yourself. A large part of them simply isn’t even correct (factually, or grammatically).

Now I must confess, my opinion is not entirely colorless. I have family living in Greece, so naturally I hear a thing or two from there. That family isn’t entirely colorless either, because my uncle was a minister in Greece for a while and a member of the European Parliament. Not that that matters much, since his ministry (something with agriculture and tobacco, if I’m not mistaken) had very little to do with economics. In any case, I can confirm first-hand that people there really have had to give up a lot (including him, as a former minister), now have to pay more, but fortunately still have enough money to get by. My cousins have been fearing the worst for quite a while and would love to take a job abroad to escape the misery. But that’s not a great solution, I get that and so do they.

Okay, so why should the Netherlands participate in the emergency fund? First of all, there’s quite a bit of scaremongering left and right. Take a look at this video on YouTube, for instance. Watching that, you start to wonder why a country would agree to it at all. But the video only tells half the truth. Does a country have no vote? Nonsense, a country has a veto on every decision. And in crisis situations? No indeed, then a country no longer has a veto when there’s a majority of 85% or more. But the fear that ‘the big countries’ will then call the shots is unfounded, because even together they don’t hold an 85% share. There are three countries with a share larger than 15%, namely Germany, France and Italy. So those could each block a majority from forming. The only thing they would achieve with that is that no money gets spent at all. Those three countries together hold about 64% of the contributions (and thus of the votes), so not enough to reach that 85% either. The Netherlands, by the way, is fifth on the list with about 6% of the contributions.

Then there’s the claim that it would cost 40 billion. Again, nonsense. We can be asked for at most 40 billion. Of that, 4.6 is being paid in now. So it’s not a bottomless pit either, because the bottom is fixed. And beyond that, everything has to be renegotiated. Then we have a say again, great! And although that’s still a lot of money, we spend far more annually, and the yearly budget deficit already runs into the tens of billions. “But should we be paying in money when we’re already borrowing so much?” No, paying in money when you’re already deep in the red is never pleasant. But it does show that we’re willing to stand up for each other. That Europe is stronger than any single country alone. And maybe that doesn’t gain us much right now, but in the long term it certainly does. Something about strength in numbers.

Another advantage is that the financial markets will be reassured by this measure (although they actually think at least 1,000 billion should be collected). On top of that, there’s also the International Monetary Fund (IMF), which has money for these kinds of situations as well. Its director has also stated that it probably helps more when Europe itself comes up with actions like these to show goodwill. A win-win, in other words.

And then of course there’s the PVV crowd, who love being anti-Europe and anti-Greece. Not another cent heading south, let them sort it out themselves! All well and good, but you’re years too late with that, guys. I don’t understand how Wilders dares to tell his voters that he’d prefer to quit Europe immediately. That hasn’t been an option for a long time, newsflash!? In that respect, there was a rather nice comment below the article asking the PVV voter to instead emigrate to a country that really has nothing to do with Europe. Not much left then, mind you. Central Africa maybe, but I’m not sure Henk and Ingrid would settle in there.

People, the ESM was not devised to make your money evaporate. Nor was it devised to make things even worse now, in times of crisis. Greece borrowed enormous amounts of money from all sorts of banks like Goldman Sachs (there’s a nice documentary about that here) because they had nowhere else to turn. The ESM is an attempt to prevent that from now on. The lesson should not be that we quit now that things are hard. The lesson should be that we learn from our mistakes and do everything we can to prevent them in the future. And that’s exactly what the ESM is a small piece of.

Still not convinced? Then I suggest Europe indeed drops Greece entirely and you go live there. Then you’re guaranteed to have nothing to do with Europe for years to come. Deal?

Want to read more?

SharePoint 2010: deploying webservice to single webapplication

When you want to extend the default API functionality on your SharePoint site, you can add your own webservices to it. There are plenty of articles out there telling you how to do so. But they all have one thing in common: your SVC file mostly ends up in either the _layouts or _vti_bin virtual directories of SharePoint. There’s not much wrong with that, until you want to deploy to a multi-tenant, shared server. Since those virtual directories all point to the same folders on disk, this means your SVC is shared between every SharePoint web application on your server.

Mostly this still isn’t a really big problem, since your users won’t know the SVC is there and probably don’t have a clue as to what it’s for and how to use it. But let’s face it: it’s not really pretty either. Unfortunately, SharePoint doesn’t really give you any other options. You can place your SVC file in the website’s root dir in IIS, sure, but that means manual deployment and no guarantee your file will stay there, since SharePoint manages those folders by itself. There’s the option of creating an entirely separate site in IIS which hosts your webservices, but you’d have to manually deploy that to every front-end server in your farm; not great either.

Well, with the help of the community, I found a solution which uses the SharePoint packaging and deployment model and gives you an option to deploy to a specific web application! The key is using the local resources folder of a web application. Here are the steps to take:

  • Add New item… ‘Empty Element’ to your project.
  • Delete the elements.xml file which it creates.
  • Add your .svc file into the element.
  • Right click the .svc file and pick properties.
  • Under deployment type, set it to ApplicationResource.
  • You can customize the Deployment Location; however, it will always be under the Resources folder of your inetpub folder (couldn’t find a way around this yet).
  • Make sure the element is added to your package, and not to any other features.
  • Deploy and you will find your service at http://webapp/Resources/service.svc

That’s it! Works like a charm and optimizes security. For your webservice code, it doesn’t really matter that much where the svc file is located.

Oh yeah: make sure you’ve got the TokenReplacement set up correctly if you want Visual Studio to replace the assembly reference in your SVC file properly. Check out http://www.chaholl.com/archive/2010/03/10/how-deploy-a-wcf-service-to-sharepoint-2010.aspx for that.
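For illustration, an .svc file using that token replacement might look something like this (the namespace and service class name are placeholders; the factory shown is the standard SharePoint 2010 WCF host factory):

```xml
<%@ ServiceHost Language="C#" Debug="false"
    Service="MyCompany.Services.CustomService, $SharePoint.Project.AssemblyFullName$"
    Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory,
             Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral,
             PublicKeyToken=71e9bce111e9429c" %>
```

During packaging, Visual Studio replaces the $SharePoint.Project.AssemblyFullName$ token with the full assembly name of your project, so the SVC always references the deployed DLL.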


Thanks to Dennis George for providing this solution on StackExchange!

New Visual Studio uservoice request

I already blogged about a Visual Studio uservoice request some time ago. Now… I’M BACK! With a new and shiny one. This time it’s about the Visual Studio Team Foundation Service preview, which has been online since BUILD. If you haven’t tried it yet, do so. It’s a very powerful system which combines your entire ALM (Application Lifecycle Management) into a single web-based system. It integrates with Visual Studio (of course), but you can use other systems too. And there’s a link with Excel for reporting purposes; there’s really too much to tell.

But as with all software, there’s always room for improvement. So I added a uservoice item requesting a way to group or categorize items on the product backlog. There isn’t a good way to do this at the moment, and after a short thread on the forums, I decided to add a uservoice item. Feel my pain? Then please vote it up so the team will look at it for a future version.

My uservoice request is located here:


SharePoint 2010: Changing the cookie expiration for Forms Authentication

We have several claims-based sites on which we use forms-based authentication alongside Windows authentication. The forms-based users were regularly complaining that the “Remember me” checkbox “wasn’t working”. Well, as usual, it seemed to work for me, so at first I blamed it on cookie policies, cleaner tools, stuff like that. But the complaints were persistent, so I began digging a little deeper.

First, I thought altering the web.config would suffice. In normal ASP.NET web applications, you can edit the forms tag and add a timeout for the cookie. But SharePoint handles the cookies itself, so changing those parameters doesn’t really do anything.
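For comparison, this is the kind of web.config setting that would do the trick in a plain ASP.NET application (the values and loginUrl are just an example); SharePoint ignores it in favor of the Security Token Service configuration:

```xml
<authentication mode="Forms">
  <!-- timeout is in minutes; 129600 = 90 days -->
  <forms loginUrl="/_login/default.aspx" timeout="129600" slidingExpiration="true" />
</authentication>
```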

So what’s the way to change it then? The power lies in the service handling the security token requests: the SecurityTokenService. Configuring a longer timeout proves to be quite easy using Powershell. Use these commands:

$sts = Get-SPSecurityTokenServiceConfig
$sts.FormsTokenLifetime = (New-TimeSpan -Days 90)
$sts.WindowsTokenLifetime = (New-TimeSpan -Days 90)
$sts.ServiceTokenLifetime = (New-TimeSpan -Days 90)
# Don't forget this one; it persists the changes to the configuration database
$sts.Update()

It’s quite straightforward: the FormsTokenLifetime configures how long forms tokens are valid. The WindowsTokenLifetime does the same for issued Windows tokens. The ServiceTokenLifetime sets the timeout for the security token service cache.

You can check if the timeout changed by using Firefox and inspecting your cookies before changing the settings. Compare the cookie expiration date to the date you get after changing the values. Make sure to delete the cookies first, so new ones are issued. If the settings were correctly updated, the expiration date should have moved out accordingly.

If you want to implement a sliding expiration, check out this blog post: http://blogs.southworks.net/fboerr/2011/04/15/sliding-sessions-in-sharepoint-2010/