
Darryl Mitchell

Senior Product Manager @ Appian

Automating /etc/hosts file entries via API

I ran into a really frustrating problem at work today. My employer-provided laptop is locked down extremely tightly for security and compliance reasons, and part of that lockdown is Cisco's Umbrella software, which routes certain DNS queries only via VPN.

For whatever reason, DNS resolution via Umbrella is completely broken at the moment on my Windows 10 device, which means I can't reach any of the internal DNS zones I need. That includes tools critical to my day-to-day work, like our internal ticketing system and code repositories. And since I don't have admin privileges either, my options are limited.

Now there are easier ways to solve this than what I'm about to outline:

  • remove Cisco Umbrella completely
  • have an IT admin add static entries to my hosts file

But neither of these solved the problem: 1. IT hasn't been able to figure out how to get their MDM software to stop re-installing Cisco Umbrella, and 2. the DNS entries I need rely on dynamic, load-balanced AWS IPs. They're also zero fun. So, I had to get more creative.

In typical fashion, Linux will bail Windows out here. I have an Ubuntu Linux VM running in VirtualBox on my laptop that I use for various tasks (git, *nix tools, etc.). So, I just had to find a way to bypass Cisco Umbrella DNS resolution, which would be far easier from inside the Ubuntu guest where I have total control.

The best way I could think of was to use a DNS API (which bypasses traditional DNS) to query the public A records for the hosts I needed and then format those and add them to /etc/hosts. Of course all of this would need to be scheduled or at least automated, ideally. Here's how I did it:

  1. I chose a free (up to 500 queries a month) DNS Lookup API that I've had some exposure to in the past.
  2. I then started working on a bash one-liner that could: query the records via curl, grep only the relevant IP addresses, format them into the hosts file format, and get those into the hosts file. Here's what I came up with:
curl -s "https://www.whoisxmlapi.com/whoisserver/DNSService?apiKey=[mykey]&domainName=[externalDomain]&type=A&outputFormat=JSON" | grep -E -o "[0-8]{1,2}[.][0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}" | sort -u | sed -e 's/^/[externalDomain] /'

The hardest part of this process for me personally was by far the regular expression. Because the JSON output from the Whois API includes both rawText and address fields, it was very difficult to land on a regular expression that would reliably pick up the IP addresses I needed without dropping some IPs altogether or pulling in extra characters I didn't want. I ended up making the concession of excluding 9 from the first octet of IPs, because I couldn't figure out how to stop the regex from matching the \u0009 escape that precedes the IP address in the rawText field. I'm comfortable with that sacrifice for this application.
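
If you have jq available, a hedged alternative is to pull the address fields straight out of the JSON instead of regexing the whole payload. This is just a sketch, and it assumes the A-record IPs live in fields named "address" (as the API output described above suggests):

curl -s "https://www.whoisxmlapi.com/whoisserver/DNSService?apiKey=[mykey]&domainName=[externalDomain]&type=A&outputFormat=JSON" | jq -r '.. | .address? // empty' | sort -u | sed -e 's/^/[externalDomain] /'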

It was at this point that I realized I needed to either dump the output into /etc/hosts directly or find some other way. After some brief searching I found another project I have used previously, update-hosts-file. After getting it configured with the desired hosts file entries (I also had some static internal addresses I wanted to maintain), I created a simple shell script (full sketch after the list) that:

  1. Deletes the existing hosts modules in update-hosts-file's config

    rm -f /usr/share/update-hosts-file/modules/custom/available/sub_domain_com
    
  2. Re-creates them based on the one-liner I wrote earlier with fresh hosts entries

    curl -s "https://www.whoisxmlapi.com/whoisserver/DNSService?apiKey=[mykey]&domainName=[externalDomain]&type=A&outputFormat=JSON" | grep -E -o "[0-8]{1,2}[.][0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}" | sort -u | sed -e 's/^/[externalDomain] /'
    
  3. Executes an update of the hosts file via update-hosts-file

    update-hosts-file update skip
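
Put together, that's the whole script. A sketch of mine is below; it assumes the module file is just plain hosts-format lines, so if your version of update-hosts-file expects a different module format, adjust the redirect accordingly.

    #!/bin/bash
    # Sketch only: module path, API key, and domain are placeholders from my setup.
    MODULE=/usr/share/update-hosts-file/modules/custom/available/sub_domain_com

    # 1. Delete the existing module
    rm -f "$MODULE"

    # 2. Re-create it with fresh entries from the DNS Lookup API
    curl -s "https://www.whoisxmlapi.com/whoisserver/DNSService?apiKey=[mykey]&domainName=[externalDomain]&type=A&outputFormat=JSON" \
      | grep -E -o "[0-8]{1,2}[.][0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}" \
      | sort -u \
      | sed -e 's/^/[externalDomain] /' > "$MODULE"

    # 3. Apply the update to /etc/hosts
    update-hosts-file update skip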
    

Once I confirmed that was all working as expected, I wanted to ensure this script runs itself on some frequent basis. This VM is actually shut down/rebooted once or more per day, so I added the script to systemd so that it executes on each boot. This means every time I start the VM, the DNS entries I need will be queried and added to the hosts file automatically. The systemd unit file looks like this:

[Unit]
Description=Update Corp host IPs on startup
[Service]
ExecStart=/home/darryl/.config/[script_name].sh
[Install]
WantedBy=multi-user.target

Note that I put the script in my .config directory since it's a git repo.
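
To wire the script up to systemd, I dropped the unit file into /etc/systemd/system and enabled it. The unit file name below is a placeholder for whatever you call yours; it's also worth adding Wants=network-online.target and After=network-online.target to the [Unit] section so curl doesn't fire before networking is up (that's an addition on my part, not in the unit above).

sudo cp update-corp-hosts.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable update-corp-hosts.service
sudo systemctl start update-corp-hosts.service    # or just reboot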

I'm certain that more seasoned Linux admins could have done this in a more seamless way, but it works for me. :)


5.7.700

tl;dr if you're getting this error, it's an automated threshold that Microsoft Support has to reset (or that expires on its own over time)

check your message trace logs for signs of abuse and go ahead and call support

I'm currently working on an ~800-user hybrid Exchange deployment, and due to some issues with the existing 2010 environment I decided to deploy an Exchange 2016 server to handle hybrid duties. Because the customer is switching from Mimecast inbound/outbound to EOP/ATP, I decided to route all messages from on-prem out through Office 365 immediately rather than letting on-prem and Office 365 each route directly. Centralized routing reversed, basically.

Everything worked fine for 24 hours, but then I got frantic calls from the customer that they couldn't send or receive e-mail. Looking at the logs I saw rejected messages with the error code:

5.7.700-749 Access denied, tenant has exceeded threshold

I've done a ton of these deployments and never seen this error, but common sense told me that it was related to some sort of abuse prevention. I started running message traces and saw no indication that anyone had been phished. I double checked my connectors to make sure I hadn't accidentally created some sort of open relay on-prem that was abusing EOP as a smarthost. I couldn't find anything.
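
If you want to run the same sanity check from Exchange Online PowerShell rather than the portal, something along these lines surfaces the heaviest senders over the last couple of days (the date range is just an example, and note it only returns the first page of results by default):

Get-MessageTrace -StartDate (Get-Date).AddDays(-2) -EndDate (Get-Date) |
    Group-Object SenderAddress |
    Sort-Object Count -Descending |
    Select-Object -First 10 Name, Count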

I called Microsoft and thankfully got a helpful engineer on the first try (not the norm, unfortunately). He immediately determined that, because we had gone from zero messages to over 1,000 unique recipients in a day, we had triggered an automated abuse threshold. He confirmed that suspicion via health checks on his side and used an internal script to reset the threshold, which restored mail flow within an hour or two.

The bad thing about this is that you're entirely at Microsoft's mercy. I was able to restore outbound mail flow for on-premises users by creating a new send connector, but the mailboxes in Office 365 could neither send nor receive, and even some inbound traffic to on-prem users was being rejected by EOP. If Microsoft had said they were going to wait 24 hours to fix the issue, I was prepared to offboard the few mailboxes we had already migrated back to on-prem, and even cut MX back over until we could figure out what was happening. Thankfully all that was avoided.

So, pro-tip, it may be better to slowly scale up your outbound traffic vs going from zero to 1000 unique recipients in a day, or perhaps to alert Microsoft if you're planning to do it that way. And kudos to Microsoft for having Tier 1 support tools capable of fixing what ultimately was a simple false-positive.

WinRM error 0x80338012 in Windows 10

While getting MFA working with my PowerShell connect script for Office 365, I ran into an issue where a WinRM command wasn't working on my machine. Apparently I had never set it up before, so this command:

winrm get winrm/config/client/auth

was not working. I was getting an error:

WSManFault
Message = The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

Error number:  -2144108526 0x80338012
The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

I decided to try running "winrm quickconfig", and got an error:

WinRM is not set up to receive requests on this machine.
The following changes must be made:

Start the WinRM service.
Set the WinRM service type to delayed auto start.

Make these changes [y/n]? y

Error: One or more update steps could not be completed.

Could not change the WinRM service type: Access is denied.
Could not start the WinRM service: Access is denied.

I launched another PowerShell session, this time as administrator, and ran the same command with success. Now the result of winrm get winrm/config/client/auth looks much better:

Auth
    Basic = true
    Digest = true
    Kerberos = true
    Negotiate = true
    Certificate = true
    CredSSP = false

https://docs.microsoft.com/en-us/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell?view=exchange-ps
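
One related gotcha: the MFA-based Exchange Online module in that doc relies on WinRM client Basic authentication being enabled (it shows as Basic = true above). If yours reports false, this flips it from an elevated prompt (quoted this way so PowerShell passes the hashtable syntax through to winrm):

winrm set winrm/config/client/auth '@{Basic="true"}'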

External mail is denied to mail-enabled public folders

We came across a somewhat obscure error in support this week that piqued my interest. A customer had created new mail-enabled public folders and set permissions for external users to send to them, but external senders were getting a bounce message:

550 5.4.1 [<sampleMEPF>@<recipient_domain>]: Recipient address rejected: Access denied

After some searching online we discovered that changing the domain type from "authoritative" to "internal relay" was a possible solution, and testing confirmed it.

But, why?

Enter "Directory Based Edge Blocking" (DBEB), a service provided by Exchange Online that rejects messages that can't be resolved in the directory at the edge. Exchange Online does not synchronize mail-enabled public folder addresses with the directory, so when DBEB is enabled, mail sent to public folders from external users is rejected.

DBEB is turned on by default for authoritative domains in Exchange Online because when you set a domain to authoritative Exchange assumes it knows about any address you'd want to send to. Changing a domain to "internal relay" means you're telling Exchange that other addresses could exist, and to route those messages out via an Outbound Connector.
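
If you'd rather flip the setting from Exchange Online PowerShell than the admin center, the cmdlet is Set-AcceptedDomain (the domain name below is a placeholder), and Get-AcceptedDomain confirms the change:

Set-AcceptedDomain -Identity recipient_domain.com -DomainType InternalRelay
Get-AcceptedDomain | Format-Table Name, DomainType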

Migrating Server 2003 to Microsoft Azure

I was tasked recently with migrating an entire datacenter off of VMware and on to Azure, and their production servers were predominantly Server 2003. Yes, seriously.

There is no documented process for migrating Server 2003 to Azure because Microsoft doesn't support running Server 2003 in Azure. But, I was able to find some small tips here and there, and after many months (!) of testing I was able to come up with a relatively foolproof process for getting Server 2003 VMs out to Azure. These are rough notes, so you'll need to fill in some gaps with your own Azure knowledge.

The first thing to know is the Azure VM agent doesn't run on Server 2003, so you're not going to get any of the helpful reset networking, RDP config type stuff you get with 2008R2+ machines. What that also means is, since Azure lacks a console, you need to make absolutely certain your machine is reachable over the network when it boots in Azure.

Import necessary PowerShell modules:

Import-Module AzureRM
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

Login to Azure account:

Login-AzureRmAccount

Log in to the 2003 VM and add a local admin account - this is critical because the domain will be unreachable when the VM is booted in Hyper-V

Note: cached domain credentials can work, but only if you logged in recently
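
A couple of plain net commands are enough for the local admin piece; the account name and password here are placeholders:

net user azmigrate P@ssw0rd123! /add
net localgroup Administrators azmigrate /add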

Download VMDK from VMware datastore - ensure you have enough space locally, as it will "expand" if it's thin provisioned

Also note the -VhdType FixedHardDisk and -VhdFormat vhd as Azure only supports Fixed VHDs, not dynamic or VHDX.

ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath .\Desktop\testserver.vmdk -DestinationLiteralPath 'C:\Users\Administrator\Desktop' -VhdType FixedHardDisk -VhdFormat vhd

Create new Hyper-V VM using converted VHD

  1. Launch Hyper-V
  2. New > Virtual Machine
  3. Set Name
  4. Generation 1
  5. Set RAM, Networking
  6. Use existing virtual disk - choose VHD (NOT VMDK)
  7. Finish and launch VM

Login to Hyper-V VM

  1. Install Hyper-V integration tools - gives mouse access
  2. Uninstall VMware tools
  3. Reboot
  4. Run Windows Updates
  5. Shut down - make sure it's a clean shutdown with no blue screen, because the "Add-AzureRmVhd" command you're about to run will check the filesystem for corruption

Set a resource group and destination for the uploaded VHD:

$rgname = "Test-RG"
$destination = "https://your.blob.core.windows.net/vhds/testserver.vhd"

Upload VHD:

Add-AzureRmVhd -ResourceGroupName $rgname -Destination $destination -LocalFilePath .\Desktop\testserver.vhd

Create global VM variables:

$location = "Central US"
$vmName = "myVM"
$osDiskName = 'myOsDisk'
$vnet = Get-AzureRmVirtualNetwork -Name "test-vnet" -ResourceGroupName $rgname

Create managed disk:

$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk (New-AzureRmDiskConfig -AccountType StandardLRS -Location $location -CreateOption Import -SourceUri $destination) -ResourceGroupName $rgname

Create security group and rule to allow all local traffic:

$nsgName = "myNsg"
$allowAllRule = New-AzureRmNetworkSecurityRuleConfig -Name "LocalNetwork-AllowAll" -Description "Allows all traffic from local subnets" -Access Allow -Protocol * -Direction Inbound -Priority 100 -SourceAddressPrefix "192.168.0.0/17" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgname -Location $location -Name $nsgName -SecurityRules $allowAllRule

Create a network interface:
Make sure your subnet Id is actually $vnet.Subnets[0].Id

$nicName = "myNicName"
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgname -Location $location -SubnetId $vnet.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id

Create VM config:

$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_A2"
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType StandardLRS -CreateOption Attach -Windows

Deploy VM config:

New-AzureRmVM -ResourceGroupName $rgname -Location $location -VM $vm

If your VM isn't reachable over the network in 5-10 minutes, you can check Boot Diagnostics in Azure to see where the boot process is - Windows logo screen, updating, etc.
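
You can also poll the VM's provisioning and power state from the same AzureRM session while you wait, rather than watching the portal:

Get-AzureRmVM -ResourceGroupName $rgname -Name $vmName -Status | Select-Object -ExpandProperty Statuses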

Not an easy process by any stretch, but it brings together several hours of searching and notes, so it should point you in the right direction.

Understanding mail relay options with Office 365

One of the most commonly misunderstood aspects of an Office 365 rollout is what to do with scanners and other applications that need to relay mail. Microsoft publishes documentation on this but in my experience many who are unfamiliar with SMTP flow quickly get in over their heads.

I generally recommend relaying via the MX record for all applications unless your source can't support it. This has several advantages:

  • it can be configured to support both internal and external recipients
  • it doesn't require a mailbox license
  • it has the fewest authentication requirements

Note that you always have to send as an accepted domain in Exchange to make this work.
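
A quick way to sanity-check MX-based relay from a source machine is Send-MailMessage pointed at your EOP MX endpoint; the hostname and addresses below are placeholders (EOP MX hosts follow the domain-tld.mail.protection.outlook.com pattern):

Send-MailMessage -SmtpServer domain-com.mail.protection.outlook.com -From scanner@domain.com -To user@domain.com -Subject "Relay test" -Body "Test via MX endpoint"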

I've seen an uptick in support requests with a new SMTP error code in Office 365:

451 4.4.62 Mail sent to the wrong Office 365 region. ATTR35.

It seems that Microsoft (somewhat quietly) began enforcing authentication for external relay via the MX record. If you're relaying directly to your MX record as @domain.com to an EXTERNAL recipient, say @domain2.com, then you must create a connector to let Office 365 know the source is authenticated. You don't need a certificate or a user for this method, only a specific IP address.
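
In Exchange Online PowerShell terms, the fix is an IP-scoped inbound connector along these lines (the name and IP are placeholders for your own relay source):

New-InboundConnector -Name "On-prem relay by IP" -ConnectorType OnPremises -SenderDomains * -SenderIPAddresses 203.0.113.10 -RestrictDomainsToIPAddresses $true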

I've also seen customers do a DNS lookup on their MX record until they get an IP address, and then use that IP address for legacy equipment that doesn't support a hostname (e.g. an AS/400). Do not do this. Microsoft rotates IPs, and it's never a sure bet that an IP in use at any given moment will keep working. If you need to relay to an IP address, you should stand up an internal relay that DOES support a DNS-based hostname - Exchange Hybrid, IIS, or Postfix, for example - and relay to that.

Source:
https://support.microsoft.com/en-us/help/4057301/attr35-response-code-when-mail-is-sent-to-eop-exo

PowerShell Profile Syncing

I switch devices a lot - laptops, desktops, management VMs, etc. - and generally whatever device I'm on I'm using PowerShell extensively.

I predominantly used Mac/Linux from 2006-2017 and got used to certain tools - openssl, dig, whois, etc. - that I need in a shell, and Windows doesn't have those tools (even if it has some native "equivalents", like Resolve-DnsName). Ironically, I made the switch back to Windows specifically because at the time my most-used PowerShell module, MsOnline, wasn't showing any signs of making the jump to macOS. But then they switched to AzureAD. I digress...

It's easy enough to download these tools and create aliases in a PowerShell profile for them, but it makes switching between devices a pain. The solution I came up with is to store my PowerShell profile, downloaded utilities, and scripts in OneDrive (or Google Drive or Dropbox) and just point to that with a symlink on each machine.

By default, PowerShell stores its profile in the user's Documents folder, in a folder called "WindowsPowerShell". To begin, back up that file (assuming you're already in ~/Documents/WindowsPowerShell):

cp .\Microsoft.PowerShell_profile.ps1 .\Microsoft.PowerShell_profile.ps1.old

Next, create a symlink in that location (~/Documents/WindowsPowerShell):

New-Item -Type SymbolicLink -Target 'C:\Users\Darryl\OneDrive\Utilities\Microsoft.PowerShell_profile.ps1' -Name Microsoft.PowerShell_profile.ps1

Close and open PowerShell, and you should be able to use the profile that you symlinked to.

One issue I've found is different machines can have different usernames - for example "darryl" vs. "darryl.mitchell". This means that statically setting paths inside the PowerShell profile won't work. Thankfully, there's a PowerShell environment variable for that: $env:USERPROFILE

So, when you're specifying aliases with paths (I have dig, whois, openssl, and some other functions/scripts) make sure to use that variable so you avoid issues with differing profile names. Example:

Set-Alias dig "$env:USERPROFILE\OneDrive\Utilities\bind9\dig.exe"

Tips for Decommissioning Microsoft Exchange

I've seen a lot of customers over the years assume that once they've moved to G Suite or Office 365 it's safe to just turn off Exchange and forget about it. While it is possible to manually remove an Exchange server from Active Directory, it's a.) not supported and b.) still leaves Exchange attributes stamped on user objects that are difficult to manage without an Exchange server. It's also really difficult to install an Exchange server in the future if you ever need one (as many Office 365 customers do for directory sync management).

I've made it common practice to require customers I deploy on G Suite or Office 365 to retain an Exchange server for management of the Exchange schema in Active Directory. In Office 365's case, Microsoft acknowledges this is necessary and provides a Hybrid Exchange key free of charge for Enterprise subscriptions. I generally configure this Hybrid edition for a customer on whatever the newest version of Exchange is - recently 2016 (2019 is still in RTM). This leaves the need to decommission the old Exchange server (generally 2010). Nearly every Exchange server is stubborn to decommission to some degree, so I wanted to compile a few tips.

1.) Migrate all mailboxes - including audit and arbitration - from the server you're decommissioning

Get-Mailbox -Server [Source Server]
Get-Mailbox -Arbitration -Server [Source Server]
Get-Mailbox -AuditLog -Server [Source Server]
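
Those Get-Mailbox commands only enumerate what's still on the source; the actual moves are a pipe into New-MoveRequest (the target database name is a placeholder), and Get-MoveRequestStatistics tracks progress:

Get-Mailbox -Server [Source Server] | New-MoveRequest -TargetDatabase "EX2016-DB01"
Get-Mailbox -Arbitration -Server [Source Server] | New-MoveRequest -TargetDatabase "EX2016-DB01"
Get-MoveRequest | Get-MoveRequestStatistics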

2.) Manually stop AND disable Exchange services and IIS on the server you're decommissioning

This is one of the most common issues I see when going through the uninstall process. For whatever reason, Exchange services fail to stop and leave the Exchange install in a corrupted state.

3.) If the uninstaller does fail, it will leave the registry in a "watermarked" state that doesn't allow proceeding with uninstall. You can fix this in two ways:

  • on each role, there may be an "Uninstall" key set - delete this
  • each role should have an "UnpackedVersion" and a "ConfiguredVersion" key set - if "ConfiguredVersion" is missing, the installer assumes it can't proceed. If the installer fails late enough (e.g. at "Step 9: Stopping services"), the "ConfiguredVersion" key will have been cleared from all roles and you'll have to go back in and re-add it manually. Just add the key as a Binary String and set the value to match "UnpackedVersion".
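
A quick way to eyeball those values is Get-ItemProperty against the role keys; the path below assumes Exchange 2016 (v15) and the Mailbox role, so adjust the version and role names for your server:

Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\MailboxRole' | Select-Object UnpackedVersion, ConfiguredVersion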

Easy DirSync hard-matching

If you've ever accidentally ended up in a scenario where you have two accounts for the same user in Office 365 - one synced with AD, and one in the cloud - it can be challenging to recover from this. Azure AD Connect generates a value to store as the "ImmutableID" in Azure AD, which uniquely identifies/ties the user to the correct on-premise account.

99 percent of the time this value is derived from the on-premises ObjectGuid. There used to be a manual way to generate this value and set it as the ImmutableID yourself, but the way Azure AD Connect generates it appears to have changed over time, and the old method no longer produces the correct value.
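
For reference, the classic derivation was just the base64 of the on-premises ObjectGuid, something like the snippet below (the username is a placeholder); as noted, it doesn't always match what current Azure AD Connect builds stamp, which is why I now copy the value instead of regenerating it:

$guid = (Get-ADUser john.smith).ObjectGUID
[System.Convert]::ToBase64String($guid.ToByteArray())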

The easiest way I've found to fix this is to take the value that Azure AD Connect generates for the synced object, then delete both objects, paste that value in to the correct account, and let it all sync back up. Example process:

john.smith@domain.com (In Cloud)
john.smith689@domain.onmicrosoft.com (UPN Conflict / Synced)

1.) Get-MsolUser -UserPrincipalName john.smith689@domain.onmicrosoft.com | select ImmutableID

2.) Copy value and save somewhere

3.) Move john.smith689@domain.onmicrosoft.com out of the Azure AD Connect sync scope - i.e. use a temporary OU and exclude from sync, or if you have something like "Disabled Users" already excluded you can use that.

4.) Run sync and confirm account has been deleted.

5.) Set-MsolUser -UserPrincipalName john.smith@domain.com -ImmutableID [value saved from earlier]

Using Microsoft Intune to push non-Microsoft apps

Mobile Device Management is quickly becoming a viable alternative to Group Policy in today's cloud-first world. What used to require a domain-joined machine with group policy can now be achieved with an MDM-enrolled machine and configuration or compliance policies.

Several things have made this possible: Microsoft overhauled Intune last year to make it part of the native Azure interface, recent Windows 10 builds shipped with an MDM agent built in, and Azure Active Directory join is taking the place of (or supplementing) legacy Active Directory.

I'll preface this by saying: Intune is powerful, and only getting more powerful by the day. It can configure endpoint encryption, Windows 10 updates, Office apps, LOB apps, and even run remote PowerShell scripts at this point. The latter piece REALLY unlocks a lot of potential, but I'd rather focus on what's natively possible today in the interface.

For this post, we'll focus on Google Chrome, which is fairly ubiquitous on corporate PCs today.

  1. To get started, you need to download the Google Chrome Enterprise bundle and unzip it.

  2. In the unzipped folder, go to the "Installers" folder. You'll see a "GoogleChromeStandaloneEnterprise" MSI file. You'll use this later when you upload the app to Intune.

  3. Open your Intune portal

    • go to "Mobile apps"
    • then click "Apps"
    • Click "Add" at the top
  4. An "Add app" blade appears. Click the drop-down and select "Line-of-business" app at the very bottom.

    • Click "App package file" and select the "GoogleChromeStandaloneEnterprise" MSI file you downloaded earlier.
  5. Save and exit. Now, click the "App information" button.

    • Publisher: Google
    • Ignore app version: YES
    • and update anything else you want
  6. Click OK. The MSI file will begin uploading in the background.

Once the upload is complete, the only thing to do is decide how you want to assign the app. I assign Chrome to all devices, but you may have other apps that need to be restricted. For that, you can create device-specific groups in Azure AD to limit the scope.
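
If you go the device-group route, an Azure AD dynamic membership rule keeps the targeting hands-off. A minimal example that scopes to corporate Windows devices (swap in whatever property actually identifies your targets) looks like this:

(device.deviceOSType -eq "Windows") and (device.deviceOwnership -eq "Company")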