Branding your Hybrid Identity Solution, Part 1: Introduction

Many organizations embrace the new reality of Hybrid Identity.

For many of them, the increased level of security for both on-premises resources and cloud services is the main reason to do so: single sign-on (SSO) and multi-factor authentication (MFA) are the two main drivers to onboard on Microsoft's vision.

When looking at People, Process and Technology, the three critical success factors for organizational transformation and complementary parts of a successful Information Security strategy, the need for Security Awareness becomes apparent.

In my opinion, security awareness is technologically assisted by branding and disclaimers. That’s why, in this series, I’ll focus on adding these to your Hybrid Identity deployment.

 

About this series

In this series, we’ll look at the following components of a typical Hybrid Identity implementation:

  1. Azure Active Directory Logon Pages
  2. Active Directory Federation Services (AD FS) Logon Pages
    (based on AD FS on Windows Server 2012 R2)
  3. Azure Multi-Factor Authentication Server’s AD FS Adapter
    (based on Azure MFA Server version 7.2.0)
  4. Azure Multi-Factor Authentication Server’s User Portal
    (based on Azure MFA Server version 7.2.0)

I deliberately follow this outline throughout the series, because for any Hybrid deployment, this is the sequence you might want to adopt. As the series progresses, you may or may not have implemented the components referenced, so you can stop reading at the point where your deployment ends.

 

My customizations

Since this series delivers real-world information, I'll walk you through the actual customization steps. For this I need some branding resources. As I'm from the Netherlands, I'll replace many of the blue interface elements in a default implementation with orange elements (color code #ff8000). For picture resources I'll use depictions of typical Dutch tradition and/or heritage.

Resources

I’ve prepared the following resources for my branding:

  • Two square pictures with the logo
    Each of these pictures is 240 pixels wide and 240 pixels high. One of these pictures is stored with a transparent background based on an initially light background (white), and the other is stored in the same way, but based on an initially dark background (black). I saved them as *.PNG files. When you save these pictures, make sure they're not over 10KB in size.
  • One wide picture with the logo and (company) name
    The recommended picture is 280 pixels wide and 60 pixels in height for Azure AD.
    The recommended picture is 280 pixels wide and 35 pixels in height for AD FS.
    I saved this picture as a .PNG with a transparent background, based on a background that was initially white. Again, keep it under 10KB in size.
  • One big picture as a background
    The recommended picture is 1420 pixels in width and 1200 pixels in height for Azure AD. The recommended picture is 1420 pixels in width and 1080 pixels in height for AD FS.
    This is the main resource. I saved it as a *.JPG file and kept it under 200KB in size.
  • One disclaimer text in US-English
    This is the legal stuff I add to the logon pages. Since Azure AD limits this text to 256 characters, I created a US-English one that complies with this limitation.
  • One disclaimer text in Dutch
    Since the logon pages can show different disclaimer texts for different browser language settings, I also created a disclaimer text in Dutch. This text also comes in under 256 characters.
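To avoid round-tripping with the portals later, you can sanity-check your resources against the recommended dimensions and size limits above. A minimal PowerShell sketch (the file names are placeholders for your own resources; System.Drawing requires Windows):

```powershell
# Sketch: verify branding images meet the recommended dimensions and size limits.
Add-Type -AssemblyName System.Drawing

function Test-BrandingImage {
    param([string]$Path, [int]$Width, [int]$Height, [int]$MaxKB)
    $file = Get-Item $Path
    $img  = [System.Drawing.Image]::FromFile($file.FullName)
    try {
        [pscustomobject]@{
            Name   = $file.Name
            SizeOK = ($file.Length -le ($MaxKB * 1KB))
            DimsOK = ($img.Width -eq $Width -and $img.Height -eq $Height)
        }
    } finally { $img.Dispose() }
}

# File names below are assumptions; point them at your own resources.
Test-BrandingImage -Path '.\logo-square-light.png'  -Width 240  -Height 240  -MaxKB 10
Test-BrandingImage -Path '.\banner-azuread.png'     -Width 280  -Height 60   -MaxKB 10
Test-BrandingImage -Path '.\background-azuread.jpg' -Width 1420 -Height 1200 -MaxKB 200
```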

When you create these resources beforehand, it'll be easy to apply them throughout this series.

Note:
While you could use any reasonably sized graphical resources you'd like, the above image sizes prevent scroll bars in the Graphical User Interfaces (GUIs).

 

Further reading

“People, Process, and Technology”
Brand protection and abuse: Keeping your company image safe on social media
Information Security Service Branding – beyond information Security Service Branding
Security Culture Framework


Configuring the ClaimsApp Demo for Azure Active Directory Authentication

Most people who have attended one of my sessions know I love to show off the power of claims using the ClaimsApp. This web app is not very fancy, but it does a heck of a job, simply by displaying, in a table, all the claim types possible, or configured, for the Relying Party Trust (RPT) in Active Directory Federation Services (AD FS).

I’ve explained how to set this up over at 4Sysops.com.

 

About the ClaimsApp

The ClaimsApp is barely more than the PassiveRedirectBasedClaimsAwareWebApp example from the .NET Framework 3.5 Windows Identity Foundation (WIF) Software Development Kit (SDK), configured for authentication from your Security Token Service (STS), based on Active Directory Federation Services (AD FS).

Note:
I intentionally change nothing in this app, and make it available in no other way than by downloading the official Microsoft bits, so the entire process is as clear as can be. Of course, you can customize the app as much as you want to.

Requirements for the ClaimsApp

You’ll need the following items to create your own ClaimsApp:

  • A Windows Server 2012 R2 installation with an Internet connection, reachable from the Internet over TCP port 443.
  • The .NET Framework 3.5 Windows Identity Foundation (WIF) SDK
  • The SXS folder from the Windows Server 2012 R2 DVD
  • A valid TLS certificate for Server Authentication and Client Authentication for your ClaimsApp URI, for instance www.domain.tld. The built-in WebServer certificate template will suffice. The certificate is added to the Personal store of the Web Server hosting the ClaimsApp, with the private key, any required trusted root certification authorities and any required intermediate certification authorities.
  • An Azure Active Directory Premium (P1) subscription, or up.

 

Setting up the ClaimsApp

The following steps will help set up the ClaimsApp:

Setup the Web Server

  1. On a Windows Server 2012 R2-based server installation, while logged on with an account with Administrator privileges, go to Server Manager.
  2. In the top grey pane, click Manage and then select Add Roles and Features from the context menu.
  3. Click Next > on the Before You Begin page of the Add Roles and Features Wizard.
  4. Click Next > on the Select Installation Type page to accept the Role-based or feature-based installation option.
  5. Click Next > on the Select Destination Server page.
  6. On the Select Server Roles page, select Web Server.
    Click on Add Features.
    Click Next >.
  7. On the Select Features page, select Windows Identity Foundation 3.5.
    Click Next > twice.
  8. Scroll down. Expand Application Development and select ASP.NET 3.5.
    Click on Add Features.
    Click Next >.
  9. Before you click Install, follow the Specify Alternate Source Path link. Specify the path to the Windows Server 2012 R2 SXS folder.
  10. After installation, click Close on the Installation Progress page.
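If you prefer PowerShell over the wizard, the ten steps above roughly collapse into a single Install-WindowsFeature line. This is a sketch; D:\sources\sxs is an assumption for where the Windows Server 2012 R2 SXS folder lives on your system.

```powershell
# Install IIS, ASP.NET 3.5 and Windows Identity Foundation 3.5 in one go.
# D:\sources\sxs is assumed to be the SXS folder from the Windows Server 2012 R2 DVD.
Install-WindowsFeature Web-Server, Web-Asp-Net, Windows-Identity-Foundation `
    -IncludeManagementTools -Source 'D:\sources\sxs'
```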

Configure the Web Server

  1. From Server Manager, in the grey top bar, select Tools and then click Internet Information Services (IIS) Manager.
  2. In the left pane, click the server’s name to expand it.
  3. Click Cancel in the pop-up window that asks if you want to get started with Microsoft Web Platform to stay connected with latest Web Platform Components.
  4. In the left pane, also expand Sites, and then select the Default Web Site.
  5. In the right Actions pane, follow the Bindings… hyperlink.
  6. Add… a binding.
  7. In the Add Site Binding window, select https as the Type.
  8. Select the www.domain.tld TLS certificate.
    Replace domain.tld with your domain information.
  9. Click OK.
  10. Click Close to close the Site Bindings window.
  11. In the left pane, select Application Pools.
  12. On the main window, select the DefaultAppPool.
  13. In the right pane, click the Basic Settings… hyperlink.
  14. Select .NET CLR Version v2.0… as the .NET CLR version. Click OK.
  15. In the right pane, click the Advanced Settings… hyperlink.
  16. On the Advanced Settings window, scroll down a tad.
  17. Change the value for Load User Profile from False to True.
  18. Click OK to close the Advanced Settings window.
  19. In the left pane, right-click Default Web Site and select Add Application… from the context menu.
  20. Specify ClaimsApp as the Alias: and C:\Inetpub\wwwroot\ClaimsApp as the Physical path:.
  21. Click OK when done.
  22. Close the IIS Manager.
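For the script-minded, the same IIS configuration can be sketched with the WebAdministration module. The certificate lookup and paths below are assumptions based on the steps above; replace domain.tld with your domain information, as before.

```powershell
Import-Module WebAdministration

# Add the HTTPS binding and attach the TLS certificate (steps 5-10).
# Assumes the www.domain.tld certificate is already in LocalMachine\My.
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*www.domain.tld*' }
New-Item 'IIS:\SslBindings\0.0.0.0!443' -Value $cert

# Switch the DefaultAppPool to .NET CLR v2.0 and load the user profile (steps 11-18).
Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' managedRuntimeVersion 'v2.0'
Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' processModel.loadUserProfile $true

# Create the ClaimsApp application (steps 19-21).
New-WebApplication -Site 'Default Web Site' -Name 'ClaimsApp' `
    -PhysicalPath 'C:\Inetpub\wwwroot\ClaimsApp'
```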

Install the Windows Identity Foundation SDK

  1. Run WindowsIdentityFoundation-SDK-3.5.msi.
  2. Select I accept the terms in the License Agreement and click Next two times.
  3. Click Install.
  4. Upon installation completion, deselect Open Readme and click Finish.

Create the ClaimsApp

  1. Copy the contents of the PassiveRedirectBasedClaimsAwareWebApp folder from C:\Program Files (x86)\Windows Identity Foundation SDK\v3.5\Samples\Quick Start\Web Application to C:\Inetpub\wwwroot\ClaimsApp.
  2. In this newly created folder, open default.aspx.cs with the built-in Windows text editor (notepad.exe).
  3. Use Ctrl + F to search for instances of ExpectedClaims. Comment out the second instance, including the brackets on the line under it and the three lines under the instance.
  4. Save the changes by pressing Ctrl and S simultaneously.
  5. Use Ctrl + O to open another file in the same folder: Web.Config.
  6. Use Ctrl + F, but this time search for Microsoft.IdentityModel. You'll find it about five-sixths of the way through the file.
  7. Delete the entire Microsoft.IdentityModel section.
  8. Use Ctrl + S to save these changes.
  9. Use Alt + F4 to close Notepad.

Now, normally, the next step would be to use FedUtil.exe to create a new Web.Config for the ClaimsApp, based on the information of an on-premises Active Directory Federation Services (AD FS) implementation. But this time we’re integrating the app with Azure Active Directory instead of Active Directory Federation Services (AD FS).

 

Integrating the ClaimsApp with Azure AD

Discussing with Raymond last night, we found a really easy way to add a little magic to our ClaimsApp, using the same trusty FedUtil.exe.

Perform these steps in the Azure Portal:

  1. Navigate to the Azure Portal.
  2. Log in.
  3. Navigate to Azure Active Directory in the left navigation pane.
  4. In Azure Active Directory, click Enterprise Applications.
  5. Click All Applications.
  6. In Enterprise applications – All applications, click Add.
  7. In Add an application, click the Non-gallery application tile.
  8. In Add your own applicat…, type a name for the application, like ClaimsApp and click Add.
  9. In the list for the new application, click Single Sign-on.
  10. Select SAML-based Sign-on as the mode, by using the drop-down menu.
    New options will appear underneath the mode field.
  11. Define IDENTIFIER and REPLY URL. Use https://www.domain.tld/claimsapp/ for both values. Replace domain.tld with your domain information. Don’t forget to add the trailing slash.
  12. When done, scroll up and click Save in the top ribbon.

Next, perform these steps:

  1. Navigate to C:\Program Files (x86)\Windows Identity Foundation SDK\v3.5
  2. Double-click fedutil.exe to run the Federation Utility Wizard.
  3. For the Application configuration location, browse to the web.config file in C:\Inetpub\wwwroot\ClaimsApp. For the Application URI, specify https://www.domain.tld/claimsapp/. Replace domain.tld with your domain information. Click Next > when done.
  4. On the Security Token Service window, select Use an existing STS. Use
    https://login.microsoftonline.com/domain.tld/federationmetadata/2007-06/federationmetadata.xml as the STS WS-Federation metadata document location. Replace domain.tld with your domain information.
  5. Select Test location… When Internet Explorer shows the raw federation metadata XML (a load of gibberish), you'll know it works.
  6. Close Internet Explorer.
  7. Click Next > four times.
  8. On the Summary screen, select the option to Schedule a task to perform daily WS-Federation metadata updates. Click Finish.
  9. Click OK when the Federation Utility Wizard is done configuring.
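If you want to verify the federation metadata endpoint outside the wizard, here is a quick, hedged check; replace domain.tld with your tenant's domain information.

```powershell
# Fetch the tenant's WS-Federation metadata and show the STS entity identifier.
$url = 'https://login.microsoftonline.com/domain.tld/federationmetadata/2007-06/federationmetadata.xml'
$metadata = [xml](Invoke-WebRequest -Uri $url -UseBasicParsing).Content
$metadata.EntityDescriptor.entityID
```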

You can now access the ClaimsApp using Azure Active Directory credentials. The ClaimsApp displays the claim types passed through by Azure Active Directory.

Enjoy!

 

Further reading

Building a BYOD lab in Microsoft Azure
.NET Framework 3.5 Windows Identity Framework (WIF) Software Deployment Kit (SDK)
Building My First Claims-Aware ASP.NET Web Application
ADFS Error : Server Error in ‘/claimapp’ Application
“claimapp” demo app failing with “The computer must be trusted for delegation” error


Azure AD Connect v1.1.443.0 is here

Microsoft released a new version of Azure AD Connect yesterday. It is dubbed the March 2017 release, but internally goes by version number 1.1.443.0. It comes with a pretty long list of fixes and new features, coinciding with the General Availability (GA) of Azure AD Connect Health for Windows Server Active Directory last week:

 

What’s New

Azure AD Connect sync

  • Get-ADSyncScheduler cmdlet now returns a new Boolean property named SyncCycleInProgress. If the returned value is true, it means that there is a scheduled synchronization cycle in progress.
  • Destination folder for storing Azure AD Connect installation and setup logs has been moved from %localappdata%\AADConnect to %programdata%\AADConnect to improve accessibility to the log files.
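The new SyncCycleInProgress property comes in handy when scripting maintenance around the sync scheduler. For example, a small sketch that waits for a running cycle to complete before continuing:

```powershell
# Wait until no synchronization cycle is in progress before continuing.
while ((Get-ADSyncScheduler).SyncCycleInProgress) {
    Write-Host 'Synchronization cycle in progress. Waiting...'
    Start-Sleep -Seconds 30
}
Write-Host 'No synchronization cycle in progress.'
```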

AD FS management

  • Added support for updating AD FS Farm SSL Certificate.
  • Added support for managing AD FS 2016.
  • You can now specify existing gMSA (Group Managed Service Account) during AD FS installation.
  • You can now configure SHA-256 as the signature hash algorithm for Azure AD relying party trust.

 

Fixes

Azure AD Connect sync

  • Fixed an issue which causes Azure AD Connect wizard to fail if the display name of the Azure AD Connector does not contain the initial onmicrosoft.com domain assigned to the Azure AD tenant.
  • Fixed an issue which causes Azure AD Connect wizard to fail while making connection to SQL database when the password of the Sync Service Account contains special characters such as apostrophe, colon and space.
  • Fixed an issue which causes the error “The image has an anchor that is different than the image” to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
  • Fixed an issue which causes the error “The object located by DN is a phantom” to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.

AD FS management

  • Fixed an issue where Azure AD Connect wizard does not update AD FS configuration and set the right claims on the relying party trust after Alternate Login ID is configured.
  • Fixed an issue where Azure AD Connect wizard is unable to correctly handle AD FS servers whose service accounts are configured using userPrincipalName format instead of sAMAccountName format.

Pass-through Authentication

  • Fixed an issue which causes Azure AD Connect wizard to fail if Pass Through Authentication is selected but registration of its connector fails.
  • Fixed an issue which causes Azure AD Connect wizard to bypass validation checks on the sign-in method selected when the Desktop SSO feature is enabled.

Version information

This is version 1.1.443.0 of Azure AD Connect.
It was signed off on March 6, 2017.

 

Download information

You can download Azure AD Connect here.
The download weighs 78.2 MB.

 

Concluding

This version is the first version in three months for Azure AD Connect and it appears to be a version that will be delivered through Azure AD Connect’s Automatic Upgrade feature (when using Express Settings).

Finally, here's the version to test your Azure AD Connect Lifecycle Management.

Further reading

Version 1.1.380.0 of Azure AD Connect fixes a bug in multi-domain scenarios 
Azure AD Connect 1.1.371.0 offers PTA and S3O preview capabilities
Azure AD Connect version 1.1.343.0 with support for Windows and SQL Server 2016
Azure AD Connect version 1.1.281.0 has been released


Join me for an Active Directory and Virtualization webinar, in cooperation with Veeam

Active Directory: Security and Virtualization

This year, as a Veeam Vanguard, I’m hosting a series of three Active Directory Domain Services webinars, together with Timothy Dewin and hosted by Veeam.

Now that we've got the basics covered in our Active Directory 101 session two weeks ago, it's time to talk Active Directory virtualization on March 7, 2017.

I'm very excited for this session, because I can talk about how the evolution of Hyper-V (and, to some degree, other hypervisors as well) helps keep virtual Domain Controllers safe and running optimally. As I know most of you have already deployed virtual Domain Controllers, I'll share my insights on how to do it properly. If this interests you, please join me.

You can join the EMEA session at 2 PM CET, or you can join the Americas session at 1 PM EDT. Both sessions are (nearly) identical.

 

Sign up

Sign up for these webinars for free here.

 

About the Veeam Active Directory Webinars

The Active Directory Deep Dive series of webcasts consists of three Active Directory Domain Services-oriented webinars, that I’m hosting together with Timothy Dewin and Veeam.

February 21: Active Directory 101

Get into Active Directory basics and best practices, including:

  1. Deep dive into specifics of Active Directory service roles
  2. Deploying and grouping Domain Controllers, and their interaction with DNS and DHCP services
  3. Proper configuration of AD

March 7: Active Directory and Virtualization

Deep dive into the latest changes of Active Directory, including:

  1. Challenges and recommendations with virtualizing Domain Controllers
  2. How Domain Controller Cloning saves your bacon
  3. Five key enhancements in Active Directory security in Windows Server 2016

March 21: Active Directory Backup and Restore

Do you know how many people couldn’t restore their ADs due to bad configuration?
Learn how to:

  1. Properly configure your backup jobs
  2. Avoid failures at restore time
  3. Verify the recoverability of every Active Directory backup

Each webinar is repeated on the same day, to accommodate attendees around the globe. The first session is scheduled for 2 PM CET. The second session is scheduled for 1 PM EDT.


KnowledgeBase: Logging in to the Intune Company Portal App results in an error “Could not sign in” on Android phones with Chrome 56, and up

This morning I read a blog post by John Arnold on the Intune Support TechNet Blog about a strange Intune-related error on Android phones when accessing the Company Portal app.

As it turned out, this is an Active Directory Federation Services (AD FS)-related certificate issue, so I thought I’d share it here as well.

 

The situation

When you use Microsoft Intune, end users in your organization can use the Intune Android Company Portal app to install apps, check compliance, and retire devices, among other things. It's a helpful resource for organizations looking to adopt a Shift Left strategy to lower support costs by enabling end users to solve common IT problems themselves.

In Hybrid Identity implementations, all authentication requests to Microsoft Online Services, including the Company Portal and apps, can be redirected to an organization's Active Directory Federation Services (AD FS) implementation.

Devices running current versions of Android are configured to automatically update apps. Under the hood, the Company Portal app on Android leverages the built-in Chrome browser.

 

The issue

When an Android device has (automatically) upgraded its browser to Chrome version 56 (or later), and the end user of an organization that leverages Hybrid Identity using Active Directory Federation Services (AD FS) opens the Company Portal app, the app shows an error:

Error
Could not sign in. You will need to sign in again. If you see this message again, please contact your IT admin.

 

After pressing OK, the error persists.
Of course, IT admins get swamped with calls this way.

 

The cause

The error is caused by the Active Directory Federation Services (AD FS) implementation using a service communications certificate that utilizes the SHA-1 hashing algorithm.

Starting with Chrome 56, Google enforces its policy to stop supporting certificates that utilize the SHA-1 hashing algorithm.

 

The solution

The service communications certificate for the Active Directory Federation Services (AD FS) implementation needs to be replaced by a certificate that utilizes the SHA-2 hashing algorithm.

Note:
Certificates for intermediate Certification Authorities (CAs) and Root Certification Authorities (CAs) may, for now, remain SHA-1 certificates.

The links below walk you through creating a certificate request for a future-proof AD FS service communications certificate.
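To quickly check whether your AD FS implementation is affected, you can inspect the service communications certificate on the (primary) AD FS server. A minimal sketch:

```powershell
# Show the signature algorithm of the AD FS service communications certificate.
# 'sha1RSA' indicates the certificate needs to be replaced.
$cert = (Get-AdfsCertificate -CertificateType 'Service-Communications').Certificate
$cert.SignatureAlgorithm.FriendlyName
```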

 

Concluding

It’s time to say goodbye to your SHA-1 certificates.

Further reading

AD FS Certificates Best Practices, Part 1: Hashing Algorithms
AD FS Certificates Best Practices, Part 2: Key size
AD FS Certificates Best Practices, Part 3: CNG-generated Private Keys
AD FS Certificates Best Practices, Part 4: Token Signing and -Decrypting Cert lifetime


How not to offer Guest Wi-Fi

Nearly all men can stand adversity, but if you want to test a man’s character, give him a horrible internet connection.

– Loosely based on a quote by Abraham Lincoln

 

Some jobs are worse than others. Some environments are more toxic than others. Some things can just annoy the heck out of you. At the top of my list of annoyances is definitely horrible Internet access at customer sites.

I won't go into the details, but this customer had decided not to provide me with a workstation, user account or regular network connection, yet wanted me to completely overhaul their Active Directory. Communication was only possible through mail or SharePoint, since their workstations only allowed specific types of USB devices.
You could call it a highly secure environment.

In this scenario, an Internet connection is key for exchanging information. Luckily, guest Wi-Fi was offered. Every day, at the reception desk, I could ask for a standardized piece of paper with a passcode valid for 12 hours.

The passcode system was a horrible experience. Here’s why:

  • The Wi-Fi network itself had a WPA2 key. This adds a second layer of security, but since the signal is also receivable from the parking lot, the network is heavily used, and the WPA2 key was never changed, it adds almost zero security. Yet, it resulted in numerous It's taking longer to connect messages when reconnecting.
  • The Wi-Fi network issued an authentication portal. Good. Microsoft Internet Explorer's default page wasn't treated as a page where you'd be greeted with the authentication portal, so after I successfully connected to the Wi-Fi network, I needed to enter a well-known, non-HTTPS URL in the address bar of my browser before getting access to the authentication portal.

Note:
It appears this is a common issue with Cisco-powered guest networks. Horrible.

  • The Wi-Fi network issued an authentication portal. Good. Except for the fact that the portal used a self-signed, or otherwise publicly untrusted, SSL/TLS certificate, so I first had to click through my browser's Are you sure you want to access this page? warning page. How hard is it to issue a publicly trusted TLS certificate, right!?
  • The authentication portal allowed me to enter the passcode, consisting of a username and a password. Both values needed to be entered in the authentication portal, and the portal did not allow copying or pasting from or to these two fields. Yet, the values needed to gain access included characters beyond simple word characters, which depend heavily on keyboard layout. Without being able to see the entered password, or copy it from the visible username field, this is a hassle with different keyboard layouts and Caps Lock.
  • After the authentication page, a page with terms and conditions was shown. An I accept radio button was offered, and a Continue button would then light up green. The first time each day I accepted the terms and conditions and hit the Continue button, I was redirected back to the authentication page, where subsequent authentications and acceptances of the terms worked flawlessly. Having been taught to read the terms and conditions, this took quite some time. Having employees tell you that the second time you accept, you give away your soul, wasn't helpful either.
  • After clearing the authentication portal, the browser would always be redirected to the customer's website. After all the hassle it took to enter an address the portal accepted as a valid target, and after seeing it carried through the authentication portal in the URL string, it just gets chopped off at the end. I guess it's one way to improve your Alexa scores…
  • Every two hours, the session would expire. Sometimes the authenticated session would expire; at other times the IPv4 address lease would expire. No notification would pop up anywhere; the network connection would just show up as Limited. In the middle of an Outlook sync, in the middle of a download or upload.
  • When the IPv4 address lease would break, I could not simply disconnect from and reconnect to the Wi-Fi network. I needed to either restart my (Windows-based) device (which did not always work) or temporarily connect to a different network.
  • Wi-Fi channel 1 was actively blocked. This is the default channel for most phones, including my Windows Phone, for Internet connection sharing. Tethering was not an option, and connecting to it as an alternative network only worked every once in a while.
  • Yes, this Wi-Fi only offered IPv4 addressing. No IPv6, although the Internet provider supplying the actual bandwidth advertises the fact that it offers IPv6 now.
  • The piece of paper did not include any support information. There was no one to share my issues with or to find a better solution with. The customer had an incident report system, but since I didn't have an account, I couldn't log any incidents or support questions for my situation…

Going through this process several times for roughly 40 working days, eventually added up to me wanting to punch someone in the face.

Please, if you provide guest Wi-Fi, make it a less horrible experience than the one depicted above.

Thank you.


Join me for an Active Directory 101 webinar, in cooperation with Veeam

Active Directory 101

This year, as a Veeam Vanguard, I’m hosting a series of three Active Directory Domain Services webinars, together with Timothy Dewin and hosted by Veeam.

The first webinar in the series is the Active Directory 101 webcast on February 21, 2017.

I’m very excited for this session, because for me it is a way to return to basics with Active Directory. Starting at absolute zero, I’ll explain the logical and physical components of Active Directory, so you can hop on to this technology with ease.

For every beginning Active Directory admin, but also for experienced admins who are only just starting to manage Active Directory, this is the webinar to start with. If this is you, please join me.

You can join the EMEA session at 2 PM CET, or you can join the Americas session at 1 PM EDT. Both sessions are (nearly) identical.

 

Sign up

Sign up for these webinars for free here.

 

About the Veeam Active Directory Webinars

The Active Directory Deep Dive series of webcasts consists of three Active Directory Domain Services-oriented webinars, that I’m hosting together with Timothy Dewin and Veeam.

February 21: Active Directory 101

Get into Active Directory basics and best practices, including:

  1. Deep dive into specifics of Active Directory service roles
  2. Deploying and grouping Domain Controllers, and their interaction with DNS and DHCP services
  3. Proper configuration of AD

March 7: Active Directory and Virtualization

Deep dive into the latest changes of Active Directory, including:

  1. Challenges and recommendations with virtualizing Domain Controllers
  2. How Domain Controller Cloning saves your bacon
  3. Five key enhancements in Active Directory security in Windows Server 2016

March 21: Active Directory Backup and Restore

Do you know how many people couldn’t restore their ADs due to bad configuration?
Learn how to:

  1. Properly configure your backup jobs
  2. Avoid failures at restore time
  3. Verify the recoverability of every Active Directory backup

Each webinar is repeated on the same day, to accommodate attendees around the globe. The first session is scheduled for 2 PM CET. The second session is scheduled for 1 PM EDT.


Things to know about Billing for Azure MFA and Azure MFA Server

Our friends at Microsoft have embraced the cloud as a way to give us the benefits of pay-per-use for our licensing needs. This is good news for anyone responsible for billing in an organization that relies heavily on Microsoft products.

When thinking about Azure Multi-Factor Authentication (MFA) as a service for, for instance, Azure admins, it makes sense to be billed per billing period, or even to pay per 10 authentications.

This seems well-documented on the Microsoft documentation website for the Azure Multi-Factor Authentication Service (the cloud-only variant). However, for Azure Multi-Factor Authentication (MFA) Server (the on-premises variant, leveraging the same cloud-based Azure Multi-Factor Authentication engine as the cloud-only variant), it’s not that obvious.

So let’s dive into it.

 

Billing details for Azure MFA

According to Microsoft’s documentation on Azure Multi-Factor Authentication Pricing, the following Questions & Answers (Q&A) provide all the information you need:

How does Multi-Factor Authentication billing work?

The ‘per user’ or ‘per authentication’ billing/usage model is chosen when creating a Multi-Factor Auth Provider in the Microsoft Azure classic portal. It is a consumption-based resource that is billed against the organization’s Azure subscription, just like virtual machines, websites, etc.

Does the ‘per user’ billing model charge based on the number of users enabled for Multi-Factor Authentication or the number of users who perform the verifications?
Billing is based on the number of users enabled for Multi-Factor Authentication.

We've done a bit of research and found that, when using the per-user license model, user objects have the following characteristics:

  • A (synchronized) user object can have any of the following values for the StrongAuthenticationRequirement attribute:
    • clear
      When the attribute is clear, the user object is not configured or enrolled for Azure Multi-Factor Authentication.
    • enabled
      When enabled, a user object is configured for Azure Multi-Factor Authentication and enrolls the first time a policy enforcement point (PEP) that requires multi-factor authentication is passed.
    • enforced
      When enforced, a user object is enabled and has passed a policy enforcement point (PEP) that requires or required multi-factor authentication, and the user object is enrolled (configured) for Azure Multi-Factor Authentication.

A Per-User license for Azure Multi-Factor Authentication is billed, when the user object’s StrongAuthenticationRequirement attribute reads either enabled or enforced.

However, user objects may already be assigned an Azure MFA license. No separate Azure MFA license is billed for the user object when any one of the following licenses is assigned, since these licenses contain the Azure MFA sublicense:

  • Azure AD Premium (P1)
  • Azure AD Premium P2
  • Enterprise Mobility + Security (EM+S) E3
  • Enterprise Mobility + Security (EM+S) E5
  • Secure Productive Enterprise (SPE) E3 (previously Enterprise Cloud Suite, or ECS)
  • Secure Productive Enterprise (SPE) E5

If no applicable license is assigned, Microsoft attaches the Azure MFA per-user license.
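The billing logic above can be sketched as a small Python function. This is an illustrative sketch, not an official API: the license SKU strings are assumptions standing in for the licenses listed above, and the data is hypothetical.

```python
# Licenses assumed to include Azure MFA as a sublicense (per the list above);
# the SKU strings are illustrative placeholders, not authoritative identifiers.
MFA_INCLUSIVE_LICENSES = {
    "AAD_PREMIUM",     # Azure AD Premium P1
    "AAD_PREMIUM_P2",  # Azure AD Premium P2
    "EMS",             # Enterprise Mobility + Security E3
    "EMSPREMIUM",      # Enterprise Mobility + Security E5
    "SPE_E3",          # Secure Productive Enterprise E3
    "SPE_E5",          # Secure Productive Enterprise E5
}

def billed_separate_mfa_license(strong_auth_requirement, assigned_licenses):
    """Return True when a separate Azure MFA Per-User license is billed.

    A user object is billed when its StrongAuthenticationRequirement
    attribute reads 'enabled' or 'enforced' AND no assigned license
    already includes Azure MFA as a sublicense.
    """
    if strong_auth_requirement not in ("enabled", "enforced"):
        return False  # attribute is clear: not configured or enrolled
    return not (set(assigned_licenses) & MFA_INCLUSIVE_LICENSES)

# Hypothetical examples:
print(billed_separate_mfa_license("enforced", []))      # True
print(billed_separate_mfa_license("enabled", ["EMS"]))  # False
print(billed_separate_mfa_license("clear", []))         # False
```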

Tip!
You might want to check for user objects that are enabled vs. enforced to prevent license costs for user objects that never pass a multi-factor authentication-enabled policy enforcement point (PEP). The script I previously shared to get to know the colleagues using Azure Multi-Factor Authentication provides this information, albeit more focused on the method used.

Tip!
You might want to check for user objects that are enforced, but are also configured with the BlockCredential attribute. This is the attribute that is filled with $true when a synchronized Azure AD user object falls out of scope of Azure AD Connect’s synchronization rules, for instance because it is disabled in the on-premises Active Directory Domain Services environment.
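Both tips boil down to simple filters over an exported user list. Here is a minimal Python sketch over hypothetical data; in practice you would feed in an export of your Azure AD user objects, and the field names here are assumptions.

```python
# Hypothetical export of user objects with their MFA-related attributes
users = [
    {"upn": "alice@example.com", "mfa_state": "enforced", "blocked": False},
    {"upn": "bob@example.com",   "mfa_state": "enabled",  "blocked": False},
    {"upn": "carol@example.com", "mfa_state": "enforced", "blocked": True},
]

# Tip 1: enabled but never enforced -> billed, yet the user never passed
# a multi-factor authentication-enabled policy enforcement point (PEP)
never_enforced = [u["upn"] for u in users if u["mfa_state"] == "enabled"]

# Tip 2: enforced, but BlockCredential is $true -> billed, yet out of
# scope of Azure AD Connect's synchronization rules
enforced_but_blocked = [u["upn"] for u in users
                        if u["mfa_state"] == "enforced" and u["blocked"]]

print(never_enforced)        # ['bob@example.com']
print(enforced_but_blocked)  # ['carol@example.com']
```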

 

Billing details for MFA Server

Now, of course, when an organization wants the full functionality of Azure MFA (except for App Passwords), they’ll implement the on-premises Azure Multi-Factor Authentication Server.

Licensing for Azure MFA Server is interesting, and most closely resembles the Per-User licensing model of the Azure MFA cloud variant: an Azure MFA license is needed for every enabled user in the PhoneFactor.pfdata database, which the on-premises Azure MFA Server(s) use to store the multi-factor authentication information on all (synchronized) user objects.

Note:
The default setting when you manually add a user object to the PhoneFactor.pfdata database, or synchronize a user object from Active Directory or an Oracle LDAP directory, is Enabled, provided a phone number is present.

Again, for user objects that have an Azure AD Premium P1, Azure AD Premium P2, EM+S E3, EM+S E5, SPE E3 and/or SPE E5 license assigned, no separate Azure MFA license is billed for the user object; all these licenses have Azure MFA included as a sublicense. If no applicable license is attached, Microsoft attaches the Azure MFA per-user license.

When multiple Azure MFA Servers are part of the same Azure MFA Server Group, the PhoneFactor.pfdata file is replicated amongst all Azure MFA Servers in the group and the user object is only billed once.

Microsoft states it sends information to the cloud-based Azure Multi-Factor Authentication engine to perform multi-factor authentication (by asking it to place a phone call, send and receive text messages and/or push a notification). As part of this information, the license state is also sent per user, along with information to uniquely identify the user object between the Azure MFA variants.

Tip!
You might want to pay close attention to Enabled users in the Azure Multi-Factor Authentication Server Management User Interface that have no multi-factor authentication information, like a phone number. These user objects are billed, but cannot perform multi-factor authentications.

Azure MFA Server User List

The Filter User List link at the top of the Users list in the Azure Multi-Factor Authentication Server Management User Interface expands the filtering options. These provide the opportunity to select Enabled, so only enabled user objects are temporarily shown in the list. Sorting on the Phone column then provides a quick overview.
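The same check can be approximated offline against an export of the MFA Server user list. This is a hedged sketch over hypothetical data; the field names are assumptions, not the actual PhoneFactor.pfdata schema.

```python
# Hypothetical export of the Azure MFA Server user list
mfa_users = [
    {"username": "alice", "enabled": True,  "phone": "+31612345678"},
    {"username": "bob",   "enabled": True,  "phone": ""},
    {"username": "carol", "enabled": False, "phone": ""},
]

# Enabled users without a phone number are billed, but cannot
# complete phone-based multi-factor authentications
billed_but_unusable = [u["username"] for u in mfa_users
                       if u["enabled"] and not u["phone"]]

print(billed_but_unusable)  # ['bob']
```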

 

Concluding

Billing for Azure MFA and Azure MFA Server is straightforward, but there are a couple of gotchas to be aware of.


I’m a 2017 Veeam Vanguard

Today, Veeam updated their Vanguard page. This marks the end of the 2017 Veeam Vanguard Nomination and Renewal processes.

For me, it means I successfully renewed my 2016 Veeam Vanguard Award. I remain one of the three Dutch Veeam Vanguards, together with Joep Piscaer and Arne Fokkema.

I feel honored.

 

About Veeam Vanguards

The Vanguard program is led by the Veeam Technical Product Marketing & Evangelism team and supported by the entire company. It’s a program around the community of Veeam experts that truly get Veeam’s message, understand Veeam’s products and are Veeam’s closest peers in IT.

Veeam Vanguards represent Veeam’s brand to the highest level in many of the different technology communities. These individuals are chosen for their acumen, engagement and style in their activities on and offline.

There’s a full list of Veeam Vanguards here.


Supported Azure MFA Server Deployment Scenarios and their pros and cons

Just like Microsoft’s licensing differentiates between customers of different sizes and maturity levels, so do the deployment options for Microsoft’s on-premises Azure Multi-Factor Authentication (MFA) Server product.

Azure MFA Server allows for four Microsoft-supported deployment scenarios:

  1. Simple Deployment
    One All-in-one Multi-Factor Authentication Server implementation
  2. Redundant Deployment
    Two All-in-one Multi-Factor Authentication Servers with replication
  3. Stretched Deployment
    A back-end Multi-Factor Authentication Server hosting the Management UI, the PhoneFactor.pfdata database and the MFA Web Service SDK and a front-end webserver running the MFA User Portal and (optionally) the MFA Mobile Portal
  4. Complete Deployment
    A Stretched Deployment with multiple back-end and multiple front-end servers, using load balancing throughout

 

Simple Deployment

In the Simple Deployment scenario, you’d place one Azure Multi-Factor Authentication Server on the internal network, and be done.

This server would be configured with the core Azure Multi-Factor Authentication components; the MFA Management UI and the PhoneFactor.pfdata database.

Optionally, the MFA User Portal can be installed, if the organization wants to enable end-user self-service for MFA (changing their phone numbers, methods and/or fallback questions), or wants to delegate admin privileges to service desk personnel or other stakeholders within the organization. This feature needs IIS.

Optionally, the MFA Mobile Portal can be installed, if the organization wants to enhance the end-user MFA experience with the Microsoft Authenticator app. This feature needs IIS.

MFA Simple Deployment Scenario

Pros

The Simple Deployment scenario offers a quick implementation. It can be achieved in under an hour, depending on the Internet speed available, and only requires one Windows Server.

It is not a complex implementation to maintain, once implemented. In one relatively simple maintenance window, you can perform an in-place upgrade of all the functionality, or revert the entire server back to the previous version, if need be. You don’t have to coordinate with other admins who have exclusive rights on web servers, for instance.

Cons

The Simple Deployment scenario does not offer high availability. When the server becomes unavailable, authentication stops. When the database gets mangled, a backup needs to be restored, or a new implementation needs to be performed.

The Simple Deployment does not offer the additional security that the more advanced deployment models offer, since all of the functionality is combined in one server on an internal network; database and web functionality are not separated. In the unlikely event of a successful exploitation of vulnerabilities in the User Portal, other websites running on the same server, or other applications hosted on the same server, the database could be compromised or deleted. Reasoning the other way around, traffic between the User Portal and the database (server) cannot be restricted or monitored.

This deployment model does not scale.

 

Redundant Deployment

To address the lack of high availability, two MFA Servers can be implemented as part of an MFA Server Group in the Redundant Deployment scenario. This way, replication is established between the MFA Servers in the group for the authentication settings and preferences, stored in the PhoneFactor.pfdata database.

While the MFA Server core components themselves and RADIUS authentication do not require it, a load balancer should be used to make the MFA Web Service SDK, the User Portal and the Mobile Portal highly available (and with them any components communicating with these, like the MFA AD FS Adapter).

MFA Redundant Deployment Scenario

Pros

In contrast to the Simple Deployment scenario, this scenario offers redundancy. If one server fails, the other server still holds the authentication settings and preferences.

Note:
Special care should be given to the Master Server role placement and the server(s) running the Directory Integration synchronization task.

This model scales. You can add additional servers. By default, additional servers become Slave servers to the initial Master server, but you can switch the Master server role to any server, if need be. Only the Azure MFA Server Master server has read/write access to the PhoneFactor.pfdata database. After changes, replication between the Azure MFA Servers distributes these changes to all servers.

Cons

High Availability (HA) comes at a price. Unless you’re merely using RADIUS or IIS authentication with a two-server setup, a load balancing solution is necessary.

This model scales, but it does so in per-server steps. When the bottleneck is a particular Azure MFA Server component, the scale for that component cannot be increased without also increasing the scale of all the other components in the deployment model.

Like the Simple Deployment, the Redundant Deployment does not offer the additional security that the more advanced deployment models offer, since all of the functionality is combined into All-in-one servers on an internal network; database and web functionality are not separated.

 

Stretched Deployment

To address the security of the deployment, the MFA Server components can be divided between a back-end and a front-end server:

  • MFA Back-end Server
    The back-end server runs the Azure MFA Server core components, like the MFA Management GUI, logging, Directory Synchronization. To allow communication with the front-end server, it also features the MFA Web Service SDK within IIS. The back-end server is placed on an internal network, or, when security policies dictate, on a separate network, only allowing the intended traffic with the front-end server, directory servers, DNS, time sources and MFA-enabled applications.
  • MFA Front-end Server
    The front-end server is configured with IIS and offers the MFA User Portal and MFA Mobile Portal. This server is placed in a perimeter network and optionally configured as a Server Core installation. When you use MFA Server with AD FS, it makes sense to publish the MFA User Portal and MFA Mobile Portal through the Web Application Proxies.

A detailed description of the MFA Server components and their traffic flows is available on 4Sysops.com as part of my MFA Server series there.

MFA Stretched Deployment Scenario

Pros

The Stretched Deployment scenario offers, as its name suggests, a more secure implementation by separating the database and the web functionality. The traffic between the components can be monitored. Additionally, when there is a security incident with the User Portal and/or Mobile Portal, their publishing can be disabled without affecting the core Azure MFA Server functionality. In terms of security, we prefer to enable mutual authentication on the connection between the application and database tier, allowing only connections from the web application.

When you have an IIS-based webserver in a perimeter network, you can reuse that server as the server hosting the MFA User Portal and MFA Mobile Portal.

Cons

The Stretched Deployment scenario does not offer high availability. When either server becomes unavailable, authentication stops. When the database gets mangled, a backup needs to be restored, or a new implementation needs to be performed.

 

Complete Deployment

In the Complete Deployment scenario, multiple back-end servers and multiple front-end servers work together to offer a highly available, secure deployment of the Azure MFA Server functionality.

Just like in the Redundant Deployment scenario, two or more back-end MFA Servers can be implemented as part of an MFA Server Group. This way, replication is established between the MFA Servers in the group for the authentication settings and preferences, stored in the PhoneFactor.pfdata database.

Load balancing is utilized for both the webservers running the MFA User Portal and (optionally) MFA Mobile Portal, and the back-end servers running the MFA Web Service SDK. The MFA Server core components themselves and RADIUS authentication do not require load balancers.

MFA Complete Deployment Scenario

Pros

The Complete Deployment offers high availability. Each MFA Server component may endure a failure in its tier without affecting the MFA Server functionality, as long as special care is given to the Master Server placement in combination with Directory Synchronization.

The Complete Deployment offers a secure implementation by separating the database and the web functionality. It is recommended to install the User Portal and Mobile Portal on the same web server. We additionally prefer to implement mutual authentication and monitoring of the connections between the tiers, after allowing only the traffic needed between the components.

The Complete Deployment scenario offers a platform for capacity management, by enabling admins to scale each of the components relatively independently of the other components.

The Complete Deployment scenario offers the right implementation strategy to cope with the absence of a true MFA IIS Plug-in.

Cons

The Complete Deployment scenario can be hard to troubleshoot, due to its relative complexity, compared to the other deployment scenarios. It’s also more time-consuming to set up and (way) more costly.

 

Concluding

Implementing an authentication verification measure, like Microsoft’s on-premises Azure MFA Server, requires some thought to do right.

Choose the right scenario, to meet the requirements of your organization.

Related blogposts

Azure Multi-Factor Authentication features per license and implementation
Choosing the right Azure MFA authentication methods
Azure Multi-Factor Authentication Server version 7.2.0.1 adds Oracle LDAP Support (among other features)

Further reading

Azure Multi-Factor Authentication – Part 1: Introduction and licensing
Azure Multi-Factor Authentication – Part 2: Components and traffic flows
Azure Multi-Factor Authentication – Part 3: Configuring the service and server
Azure Multi-Factor Authentication – Part 4: Portals
Azure Multi-Factor Authentication – Part 5: Settings
Azure Multi-Factor Authentication – Part 6: Onboarding
Azure Multi-Factor Authentication – Part 7: Securing AD FS
Azure Multi-Factor Authentication – Part 8: Delegating Administration
