When deploying a VM running certain operating systems such as Windows Server, a license is added to the boot disk of that instance. This license is used to bill pay-as-you-go (PAYG) licensing.
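To see which license is attached to a boot disk, you can inspect the disk resource directly; a quick check could look like this (the disk name and zone are placeholders):

```bash
# Show the licenses attached to a VM's boot disk (name and zone are examples).
gcloud compute disks describe my-vm-boot-disk \
  --zone=us-central1-a \
  --format="value(licenses)"
```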
This article lays out the steps to prepare an image that can be used on Google Compute Engine (GCE) and GCE bare-metal instances to run Proxmox Virtual Environment (PVE). PVE is a solution similar to VMware and Nutanix that allows for the management of fleets of VMs. Especially in scenarios like data center exits or migrations from on-premises, customers look for solutions that allow them to easily transition to a virtualization solution if they are not ready for standard GCE VMs or if technical constraints require them to explore alternatives.
In a previous article I explained how to change the provisioning model of a VM from preemptible to standard. There are also situations where a VM is deployed with the standard provisioning model but you want to reduce its cost because the workload is stateless or interruptible.
Customers moving Windows Server workloads to the cloud often leverage bring-your-own-license (BYOL) to optimize licensing cost. At some point customers may decide to change the licensing model. Reasons could be restrictive licensing terms constraining which versions can be deployed, or optimizations such as reducing the amount of time a VM, and by extension the license, is running per month, for which a permanently assigned license is not the ideal choice.
Spot VMs are a great way to reduce cost for interruptible, stateless, and fault-tolerant workloads like batch processing or containers. Starting these types of VMs follows the same principles as regular VMs. The following snippet launches a C4A Spot VM (the instance name, zone, and image are examples; adjust them to your environment):
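```bash
# Create an Arm-based C4A Spot VM; name, zone, and image are examples.
# C4A uses Hyperdisk for its boot disk and needs an arm64 image.
gcloud compute instances create my-spot-vm \
  --zone=us-central1-a \
  --machine-type=c4a-standard-4 \
  --image-family=debian-12-arm64 \
  --image-project=debian-cloud \
  --boot-disk-type=hyperdisk-balanced \
  --provisioning-model=SPOT \
  --instance-termination-action=STOP
```

With --instance-termination-action you control whether a preempted instance is stopped or deleted; STOP preserves the boot disk for a later restart.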
Google Cloud Migration Center is a great way to understand the total cost of ownership (TCO) for a migration to Google Cloud by running automatic assessments or uploading information about an estate using the output of tools such as RVTools.
Platform services in Google Cloud act in the context of a service account. While these default service identities are mostly generated automatically, it is not always deterministic when they are created: some are created when the API is enabled, others only on first use of the API. This makes managing IAM permissions for these identities hard - especially when employing infrastructure as code like Terraform.
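One way to take the guesswork out of this is to create the identity explicitly before assigning permissions. gcloud can force-create the service identity for an API (the service and project below are examples); the google-beta Terraform provider exposes the same capability through the google_project_service_identity resource:

```bash
# Explicitly create the default service identity (service agent) for an API
# so IAM bindings can reference it deterministically; values are examples.
gcloud beta services identity create \
  --service=pubsub.googleapis.com \
  --project=my-project
```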
It could happen. Total mayhem. An administrative principal for a project was accidentally leaked. An attacker has taken your projects hostage. You need to recover, and fast. Restoring project access is the least of your worries; your concern is to restore services. Luckily you have all workloads protected with snapshots! All deleted by the attacker! This is an exaggerated and hypothetical scenario, but I have seen similar things happen. In this article I’m exploring an approach to protect against such a scenario.
Cloud Workflows provides an easy way to handle platform automation and integration without the need to write any code. It also integrates seamlessly with Eventarc and other platform components.
When running Storage Spaces Direct in cloud environments, where disk resources can be provisioned at a moment's notice with any capacity, it can be the norm that disks are (hot-)added to a cluster to account for growing capacity or performance needs.
Identity-Aware Proxy (IAP) is a powerful tool in the tool chain of Google Cloud administrators and users. It can be used to control access to cloud-based and on-premises applications and VMs running on Google Cloud.
One of the most requested features from customers that deploy Cloud SQL for SQL Server (Cloud SQL) has been Active Directory integration, which was released last year. Since then Google Cloud has added a cross-project capability, which allows you to connect your Cloud SQL instance to a Managed Microsoft AD (Managed AD) domain hosted in a different project.
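As a sketch of what that setup looks like, the domain is passed at instance creation time via the --active-directory-domain flag. All resource names below are placeholders, and referencing a cross-project domain by its full resource name is my assumption to verify against the documentation:

```bash
# Create a Cloud SQL for SQL Server instance joined to a Managed AD domain
# hosted in another project; all resource names are placeholders, and the
# full-resource-name format for the cross-project domain is an assumption.
gcloud sql instances create my-sqlserver-instance \
  --database-version=SQLSERVER_2019_STANDARD \
  --region=us-central1 \
  --tier=db-custom-2-7680 \
  --root-password=CHANGE_ME \
  --active-directory-domain=projects/ad-host-project/locations/global/domains/ad.example.com
```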
Many enterprises that migrate their IT estate to the cloud will face the question of how to continue to support operations across workloads that remain on-premises and workloads that are migrated to the cloud.
Sole-tenant nodes are an important service on Google Cloud Platform for running workloads that require workload isolation or need to comply with specific licensing requirements that demand dedicated infrastructure. A detailed description of what a sole-tenant node is and how it differs from general-fleet VMs can be found in the Compute Engine documentation.
Sole-tenant nodes are used by customers for workload isolation and also for licensing compliance (e.g. bringing Windows Server licenses). Throughout the life cycle of a sole-tenant node it might become necessary to move virtual machines to another node group or even to another machine family (e.g. moving to N2 from N1). Refer to the documentation to learn more about node affinity and anti-affinity options.
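As a rough sketch, such a move can be scripted with gcloud; the instance must be stopped first, and the instance name, zone, node group, and machine type below are examples:

```bash
# Move a sole-tenant VM to a different node group and machine family.
# All names are examples; the VM must be stopped for these updates.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-scheduling my-vm \
  --zone=us-central1-a \
  --node-group=n2-node-group
gcloud compute instances set-machine-type my-vm \
  --zone=us-central1-a \
  --machine-type=n2-standard-8
gcloud compute instances start my-vm --zone=us-central1-a
```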
Who doesn’t want to version their Jupyter Notebooks? Integrating Cloud Source Repositories and AI Platform Notebooks is not hard but I wasn’t able to find documentation to set it up.
If you happen to use a Mac-based keyboard with Google Cloud Shell you might face issues when typing special characters such as \ or |. There is a simple fix that'll get you going.
Google just released beta support for a Cloud Spanner dialect for Hibernate ORM. This enables Java (and other JVM-based language) developers to integrate their applications directly with Cloud Spanner and helps them increase productivity. Support for Hibernate ORM is built on top of the open-source Cloud Spanner JDBC driver.
Rethink. In May I started my journey with HorseAnalytics, serving as their CTO. Just a couple of weeks into my tenure the unthinkable happened: money ran out and we were not able to secure bridge funding to see the plans we had already set in motion through to the end.
Yes. Visio is still a thing. To draw expressive diagrams with the correct visual representation of the underlying services you need to have access to quality stencils.
Many use services like DynDNS to make systems behind a dial-up or dynamic line accessible from the outside. But if your primary DNS is hosted somewhere else (e.g. Azure DNS or some other provider) and this provider offers APIs to interact with the domain records, it is pretty easy to write a script that will take care of updating the IP when it changes.
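A minimal sketch of such a script, assuming a provider API that accepts an authenticated PUT (the update endpoint and token are hypothetical placeholders; the IP lookup services are real):

```bash
#!/usr/bin/env bash
# Minimal dynamic-DNS updater sketch. The provider endpoint and token are
# hypothetical placeholders; substitute your provider's actual API.
RECORD="home.example.com"

CURRENT_IP=$(curl -s https://api.ipify.org)   # the IP the world currently sees
DNS_IP=$(dig +short "$RECORD" @1.1.1.1)       # the IP the record currently holds

if [ "$CURRENT_IP" != "$DNS_IP" ]; then
  curl -s -X PUT "https://api.dns-provider.example/records/$RECORD" \
    -H "Authorization: Bearer $API_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"type\":\"A\",\"value\":\"$CURRENT_IP\"}"
fi
```

Run it from cron every few minutes and the record follows your line's IP.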
If you have not been working with Management Groups to manage Azure at scale, I recommend reviewing the documentation. You should also take a look at the Microsoft Cloud Adoption Framework (which was just recently updated).
When you begin to operationalize your deployments and want to add monitoring, one of the things you might do is create shared dashboards as part of your deployment. A shared dashboard is basically the same as a non-shared dashboard, except that it is a full Azure resource: it lives in a resource group and can be created through ARM.
Migrating the build pipeline for one of our apps to Azure DevOps turned out to be a rather lengthy process. Building on my local machine with the same versions of Xcode and CocoaPods ran through seamlessly, but the CI build was failing continuously.
If you use Management Groups to manage Azure at scale you may get hit by a bug in the Azure portal that I discovered today. If you rename the Tenant Root Group, the portal stops showing any previously created management groups. Their assignments are still active and you can still manage them using PowerShell or the CLI, but the portal will show the out-of-the-box experience.
Both development and production environments are deployed in a fully automated fashion using ARM templates. As part of these deployments we also create Notification Hubs and related authorization rules.
For automatic deployment of test environments we are spinning up App Service instances and want to automatically set connection strings for the database and other services in the same template.
We are using RestSharp for functional testing of our backend services. As part of this process we need to upload images and compare the uploaded bytes against the expected result.
One of the major tasks since starting at HorseAnalytics has been to streamline our development efforts: centralize the codebase on Azure Repos and refactor the code so that it can be built not only on Windows but also on other platforms like macOS.
In my first days at HorseAnalytics, one of the first tasks was to review the codebase and streamline the build and release process. That meant moving all repositories over to Azure DevOps so that we could use its pipelines to build and release new versions of our products.
Today I needed to create a test database for one of the products I’m working on. In the backend it uses LINQ to SQL against a SQL Azure database. Exporting the production database, with millions of records in it, and cleaning it up turned out not to be the most efficient way of creating an empty test database.
After spending close to eight years in different roles at Microsoft, my time with Microsoft has come to an end. I accepted a new role, starting on May 1st, as Chief Technology Officer with HorseAnalytics, an animal tech startup focused on providing a telemetry data platform for horses.
Governance is one of the major adoption challenges when it comes to cloud computing. Organizations find themselves “not ready” to consume cloud services whether that perception is more a gut feeling or comes from experience.
The data available through the Office 365 Security & Compliance Center contains data for all entities stored in that particular Office 365 / Azure Active Directory tenant.
A while ago I wrote an article on how to estimate data consumption for Log Analytics. Since then there have been changes to the way Log Analytics tracks the volume and the cost associated with data flowing into the workspace.
Today I was honored to present Server Management for hybrid environments as part of the Virtual Windows Server Summit 2019 webcast. Markus Klein and I have been talking about hybrid backup with Azure Backup and hybrid patching with Azure Update Management.
In most enterprise environments governance dictates what alerting for a certain resource needs to look like. Until just recently, Azure Monitor only supported configuring alerts on individual resources, and this configuration had to be replicated throughout the organization.
Deploying resources and workloads at scale requires a healthy amount of automation. Automation helps to deliver consistent and repeatable results. I’ve tried to categorize some of the technology and provide some pointers to areas of application and pros and cons.
A request that I get many times is to provide guidance and best practices on how to implement monitoring and operations management with and/or for Azure. Most organizations have subtle differences which make it hard to put out a generic concept of how to enable this. There are high-level ideas, but in my discussions with customers and partners I’m hearing that these are not detailed enough.
I’ve learned about a “hidden feature” recently that enables some cool scenarios. Log Analytics or Azure Data Explorer aficionados will probably know all about functions already but for Application Insights this has not been documented yet and is not visible through the Azure portal.
When deploying Azure File Sync, one question many customers routinely ask is how to calculate the required network bandwidth. This of course depends on a variety of factors.
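To make that concrete with a purely hypothetical back-of-the-envelope calculation: if roughly 100 GB of data changes per day and those changes should be synchronized within an eight-hour window, that works out to about 100 GB / 28,800 s ≈ 3.5 MB/s, or roughly 28 Mbit/s of sustained upload bandwidth - before accounting for protocol overhead and the other factors mentioned.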
A colleague of mine, Tyler Ayers, has written a pretty neat Azure Function that tracks changes made to Azure Resource Manager (ARM) resource providers and shows these changes in a timeline.
This is why my job is so much fun. I have been working with the CISO of a large retailer that operates multiple independent web portals (some with shopping functions enabled).
In February I had the chance to attend a session by Yuri Diogenes, Program Manager at Microsoft, on how Azure Security Center works and how to demo it in a real life scenario.
Azure File Sync lets you tier data from on-premises systems to a cloud share based on Azure Files. Local nodes can act as hot caches, caching data based on access and modification patterns. Azure File Sync supports multi-master sync, so you can deploy these caches to multiple offices and replicate data across your organization.
Yesterday Microsoft announced the evolution of DevOps. Or rather the evolution of DevOps tooling from Microsoft known as Visual Studio Team Services: Azure DevOps.
This problem is probably as old as monitoring tools themselves. “How can I exclude legitimately deallocated VMs from alerting?” is a question that I’ve heard many times.
Today I got a question from a colleague who has no prior experience with System Center Operations Manager (SCOM). He wanted to know how data flows from connected agents (whether Windows or Linux) to Log Analytics and subsequently to Azure Security Center.
Every once in a while you might need to create an alert which runs a Log Analytics or Application Insights query. When designing the alert you need to define some attributes: the query, the time period, the frequency and the threshold.
A customer of mine had the following rule configured:
It was his expectation that this rule would trigger when, at some point within the last five minutes, more than 200 requests/s were made to the App Service. Unfortunately this is not the case. The rule sums the number of requests over the last five minutes and triggers if that number is > 200. A steady rate of just 0.7 requests/s, for example, adds up to 210 requests in five minutes and fires the alert even though 200 requests/s was never reached.
The Microsoft Monitoring Agent is able to send data to more than one workspace at the same time. Unfortunately, the Azure portal only lets you configure a single workspace.
Security is top of mind for most Azure customers. To have peace of mind when it comes to security for assets running on Azure, Microsoft continuously works to improve on the security recommendations Azure Security Center provides:
It has just become easier to manage Azure Security Center at scale. While not all aspects of Azure Security Center can be automated yet, Microsoft just released updated Swagger definitions for working with it. This includes updated documentation where you can directly try API requests against the tenants you have access to.
In a time before cross-resource queries were possible, the Application Insights Connector would copy data from Application Insights to a Log Analytics workspace. With the emergence of cross-resource queries this duplication of data is no longer required, as queries can target both (or even more) resources at the same time, in real time.
If you are archiving diagnostic logs or activity logs to a storage account through Azure Monitor be aware that on Nov 1, 2018 there will be a breaking change in the format.
When using Azure Backup to manage (geo-)distributed backups across a company you may find that the Azure Backup Reports with Power BI is limited to a single storage account. Unfortunately the reporting telemetry coming from Azure Backup needs to be written to a storage account that is in the same region as the Recovery Services Vault.
Change Tracking is a versatile feature that allows you to monitor changes on a system (both Windows and Linux). It covers software installation as well as changes to services, daemons, the Registry, and the file system, and it is available for cloud-based (Azure, AWS, GCP), on-premises, and service-provider-hosted systems (given network connectivity to Azure).
In many scenarios there is a requirement to enrich or look up data with meta information from the infrastructure. In this scenario, a file with machine, location, and other meta information was placed on the VM during deployment, for both Azure and AWS.
When setting up Azure File Sync, one of the requirements is to have the Azure PowerShell cmdlets (AzureRM) installed. Many customers have proxies deployed which control internet egress, and many of these also use authentication to secure internet access.
To automate the pipeline that builds and deploys the theme I use for my blog, it is required to interact with the Ghost API. Nothing fancy.
I like to share on LinkedIn and asked myself why I’m not sharing the articles I write for my blog. The you-just-have-to-do-it factor, plain laziness, and lack of time are the primary reasons why I have not posted links to LinkedIn so far.
Currently, billing for Azure Security Center is reported on a per-node, per-month basis. Starting July 1st, 2018 this reporting will be changed to per-node, per-hour to achieve more granularity in billing. Billing is still pro-rated, so you’ll only pay for the time a node was actually using the service.
Since Friday, May 25th, 2018 the General Data Protection Regulation (GDPR) has been in effect across Europe. It governs how data should be processed and provides extensive rights to the persons whose data is used.
Centralized integration with an identity provider is a common ask. It provides increased security and removes the reliance on out-of-band managed user accounts.
Monitoring the container infrastructure which is running your applications is important. With the emergence of managed Kubernetes offerings such as Azure Container Service (AKS) this becomes trickier, as part of the infrastructure is managed by somebody else.
An important step in bringing SAP on Azure to customers: the M-series, which went GA in December 2017 and is memory-optimized (up to 128 vCPUs and 4 TB RAM), has been certified by SAP to run its services.
Almost all Azure management services run in/for any cloud. Among them is Update Management which automates OS patching for both Linux and Windows machines whether they are running on-premises, in Azure or in other clouds.
An interesting question came up in a conversation today: How are the costs for Azure Security Center Standard pricing tier calculated for nodes that are stopped?
Azure Policy is a great tool to define governance controls in Azure. With the addition of the compliance pieces, this feature, which has been part of Azure for quite some time, finally had its appearance on the main stage (see the deep dive on implementing governance at scale in this video from Ignite 2017 by Joseph Chan and Liz Kim).
Nearly every customer I talk to about Azure management asks me this: “How can I do process monitoring?”. As there is currently no way to directly instrument either the Windows or the Linux agent to do explicit process monitoring, another way needs to be found.
Around Ignite 2017, Azure Security Center was migrated to use Log Analytics as its foundation, both for collecting data through the same agent and for storing most of its data.
After really starting to manage the Inner Circle for Azure Security & Management community and sharing some useful information I collect every day, I asked myself: “This is exactly what could be published on a blog so that a broader audience can benefit from it”. Thoughts thought, here we are with a brand new blog running Ghost on Azure in a Docker container (kids will be kids, nerds will be nerds…).