Julian Wraith


Using AWS to scale SDL Web Publishers

In a recent blog post, Brandon Mahoney asked How Big Can CMS Infrastructure Get?, with the short answer being: as big as you want to spend. It is as true today as it has always been that you can build an infrastructure to fulfill any set of requirements, and requirements such as disaster recovery and performance have traditionally driven you to provision infrastructure that, for the most part, remains idle in a data centre. Take a cloud-based approach and you can change your overall infrastructure footprint, saving costs and improving agility.

So how can you avoid overspending on infrastructure while still achieving your requirements?

In a standard SDL Web infrastructure, like the one Brandon shares, there are various strategies you can employ to reduce the overall spend when you use public cloud. If you compare an always-on on-premises environment to an always-on cloud environment, the difference in cost may not be dramatic – although it should be noted that businesses rarely have a good grip on what things actually cost (see AWS’ TCO calculator for some help with that). If you are not using higher-level services, like Kinesis, that are fully managed by AWS, you will still need to do some or all of the hard work of managing the environment yourself. Automation can often be applied equally in the cloud and on-premises (but rarely is), so ultimately you are still managing infrastructure; the way to reduce its cost is to manage less of it, more efficiently, without compromising performance and reliability. Five principles support this strategy:

  • Automate everything – if you do something more than three times, write a script or process for it.
  • Build stateless applications – this way you can more easily send servers to their deaths when they no longer add value
  • Scale servers elastically – easier if they are stateless; add and remove capacity as needed based upon demand, whether that demand comes from users or from work, if servers are batch or processing based
  • Use the right instance sizes – do not just throw the nicest looking instance type at a workload. X1 instances all round! 🙂
  • Understand and use Reserved Instances, On-Demand Instances and Spot instances – they all have a potential place in your infrastructure

AWS provides a significant tool set to implement this strategy and I will outline, using SDL Web as the example, how you can use these tools to build low-cost and agile infrastructures.

Putting it into practice

I have blogged many times on publishing performance and why, with the right setup, you can achieve a high throughput of items published from SDL Web (formerly SDL Tridion). In a previous life, with a previous approach to infrastructure, I helped a customer reach a peak of 850,000 items published in a day. I suspected we could have gone higher, but this was the natural load and we never got to give the production infrastructure a full stress test. The implemented infrastructure relied very heavily on physical servers, and lots of them, to achieve such a high throughput. But this is not the pattern we need to follow, so how do we do this?

What we want to achieve is the following flow of steps:

  1. We have a minimal infrastructure that serves our basic need
  2. We detect demand and scale up to meet that demand
  3. We scale down when the demand has subsided

With this flow, we only use the resources we actually need. For SDL Web, publishing is a fluctuating task and most organizations follow a pattern like:

  1. They do not typically publish content at night
  2. They publish several hundred (or thousand) items over a day
  3. They publish at peak times of just before lunch and just before the end of the day
  4. They have occasional significantly high peaks in load for site roll outs or large content changes

Publishing in SDL Web is a more or less stateless process, meaning that the queue and the data are outside the process itself; however, during rendering of a publishing job, state is held on disk and in memory. Whilst it is normal to hold some state in memory or on disk, a rendering job could be a significant batch job executed by one server, and this poses a complication to scaling down which we need to address.
Our high-level architecture is shown in figure 1 and consists of a database (AWS RDS), a Content Manager and a Publisher in an auto-scaling group which has the ability to scale up and down with demand. Because this is a test architecture focusing on publishing, the Content Manager is neither scaled nor redundant. In a production scenario you would probably choose to place the Content Manager behind a load balancer and in its own auto-scaling group. I have also chosen to ignore the additional complications of elements such as Workflow Agents and Search Indexing.

Figure 1 – High-level Architecture
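To make the example concrete, here is a minimal sketch, using boto3, of how the launch configuration and auto-scaling group for the publishers might be created. The AMI, instance type, security group and subnet IDs are placeholders, and the resource names are my own assumptions (the group name matches the one used later in this post):

import boto3

autoscaling = boto3.client('autoscaling')

# Launch configuration: a pre-baked publisher AMI, a security group that can
# reach the RDS database and an instance profile that allows access to SSM.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='SDLWebPublisherLC',   # assumed name
    ImageId='ami-xxxxxxxx',                        # placeholder publisher AMI
    InstanceType='m4.large',                       # assumed instance size
    SecurityGroups=['sg-xxxxxxxx'],                # placeholder security group
    IamInstanceProfile='SDLWebPublisherRole'       # assumed instance profile
)

# Auto-scaling group: one publisher by default, up to five under load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='SDLWebPublisherASG',
    LaunchConfigurationName='SDLWebPublisherLC',
    MinSize=1,
    MaxSize=5,
    DesiredCapacity=1,
    VPCZoneIdentifier='subnet-xxxxxxxx'            # placeholder subnet
)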

To get the Publishers to scale horizontally, we need to understand what the demand is at a given point in time and scale accordingly. More often than not, CPU utilization is a good metric for demand: CPU utilization is high, therefore you add more capacity to reduce overall utilization. However, Publishers do not work like this; they typically run at high utilization regardless of demand, so this is not a good measure. Demand comes in the form of the queue of items waiting to be published, so we need to establish whether we have 100 or 100,000 items in the queue. To do this I use a Lambda function to query SDL Web and publish a custom CloudWatch metric showing the queue length. This metric simply gives us a number, and I chose to get it directly from the database. You could query the RESTful API of SDL Web, but this is both a little more complex and, in my opinion, probably a slower approach; a quick database query will provide what we need for the metric.
The Lambda function is written in .NET Core to be able to leverage the native SQL Server database drivers. We first define the method (note: the code shown in this post is not production-worthy and requires more work to make it so):

public async Task<string> FunctionHandler(ILambdaContext context)
{

Then get the count from the database (ideally you would not hardcode the connection string, but I am lazy):

Int32 count = 0;
using (var connection = new SqlConnection("user id=TCMDBUSER;password=12345;server=dbinstance.something.us-east-1.rds.amazonaws.com,1433;database=tridion_cm;connection timeout=45"))
{
    connection.Open();
    // Count the items currently waiting in the publish queue
    SqlCommand comm = new SqlCommand("SELECT COUNT(*) FROM QUEUE_MESSAGES", connection);
    count = (Int32)comm.ExecuteScalar();
}

Now we have the count which we will pass to the metric:

var dimension = new Dimension
{
    Name = "Publishing",
    Value = "Queue"
};

var metric1 = new MetricDatum
{
    Dimensions = new List<Dimension> { dimension },
    MetricName = "Waiting for Publish",
    Timestamp = DateTime.Now,
    Unit = StandardUnit.Count,
    Value = (double)count
};

var request = new PutMetricDataRequest
{
    MetricData = new List<MetricDatum> { metric1 },
    Namespace = "SDL Web"
};

IAmazonCloudWatch client = new AmazonCloudWatchClient();
var response = await client.PutMetricDataAsync(request);

And then close off:

return response.HttpStatusCode.ToString();
 }

This gives us a metric which we can then find in CloudWatch metrics under “SDL Web” and then “Publishing/Queue/Waiting for Publish”. We then set a CloudWatch Alarm to trigger when there are more than 100 items in the queue for a sustained period of 5 minutes.
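As a rough sketch of how that alarm might be created with boto3 (the alarm name is an assumption, and the alarm action would be the ARN of the scaling policy described below):

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the custom queue-length metric stays above 100 items
# for a 5 minute (300 second) evaluation period.
cloudwatch.put_metric_alarm(
    AlarmName='SDLWebPublishQueueHigh',                     # assumed name
    Namespace='SDL Web',
    MetricName='Waiting for Publish',
    Dimensions=[{'Name': 'Publishing', 'Value': 'Queue'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['<scaling-policy-arn>']                   # ARN of the scaling policy
)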

For my auto-scaling group, a Launch Configuration specifies the instance that will be launched when the alarm goes off. The Launch Configuration specifies things like the AMI of my publishing server, the security groups that allow it to talk to the database, and its role, which allows it to talk to other AWS services such as SSM (which we will get back to later in this blog post). When the alarm fires, an auto-scaling group policy decides what to do and is defined as follows:

When the alarm is in breach for a sustained period of 5 minutes the scaling policy will:

  • set the auto-scaling group to 2 instances if the queue is between 100 and 1000 content items
  • set the auto-scaling group to 3 instances if the queue is between 1000 and 2500 content items
  • set the auto-scaling group to 5 instances if the queue is greater than 2500 content items

As demand drops, the auto-scaling group will be set to lower amounts and will eventually return to 1 instance running (the minimum in our auto-scaling group).
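A minimal sketch of such a policy with boto3, assuming the group name used later in this post and a policy name of my own choosing; note that the step bounds are offsets from the alarm threshold of 100, so 0–900 covers a queue of 100–1,000 items, and so on:

import boto3

autoscaling = boto3.client('autoscaling')

# Step scaling with ExactCapacity sets the group to an absolute size.
response = autoscaling.put_scaling_policy(
    AutoScalingGroupName='SDLWebPublisherASG',
    PolicyName='ScalePublishersWithQueue',        # assumed policy name
    PolicyType='StepScaling',
    AdjustmentType='ExactCapacity',
    MetricAggregationType='Average',
    StepAdjustments=[
        {'MetricIntervalLowerBound': 0, 'MetricIntervalUpperBound': 900, 'ScalingAdjustment': 2},
        {'MetricIntervalLowerBound': 900, 'MetricIntervalUpperBound': 2400, 'ScalingAdjustment': 3},
        {'MetricIntervalLowerBound': 2400, 'ScalingAdjustment': 5}
    ]
)

# The returned PolicyARN is what the CloudWatch alarm uses as its alarm action.
print(response['PolicyARN'])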

So now, we have our Lambda generating our metric and an alarm triggering the auto-scaling group which will then add more instances based upon how high demand is (figure 2).

Figure 2 – Alarm

As content editors publish items there will be a small delay, 5 minutes, and then new publishers will be added to publish the content items in the queue. This is just an example of how you could do this; the way the alarm reacts and how quickly you scale up or down is all configurable. Earlier in this post I also mentioned that you typically see higher loads at certain times of the day. As such, we could simply add a new publisher at 4 PM to handle the increased load proactively rather than reactively. The choice is yours on how you address this.
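If you prefer the proactive route, a scheduled scaling action is enough. The sketch below adds a second publisher in the afternoon and removes it again later; the action names and exact times are illustrative assumptions (times are in UTC):

import boto3

autoscaling = boto3.client('autoscaling')

# Add a second publisher at 16:00 every weekday...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='SDLWebPublisherASG',
    ScheduledActionName='AfternoonPublishPeak',      # assumed name
    Recurrence='0 16 * * 1-5',
    DesiredCapacity=2
)

# ...and drop back to a single publisher at 18:00.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='SDLWebPublisherASG',
    ScheduledActionName='AfternoonPublishPeakEnd',   # assumed name
    Recurrence='0 18 * * 1-5',
    DesiredCapacity=1
)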

When the auto-scaling group removes a publisher, we need to be sure that it is done in a nice way. Earlier I mentioned that the publisher is stateless, but if it is rendering a content item, that item will be in memory and on disk. In a recent release of SDL Web, SDL added a graceful shutdown of the publisher, meaning it will finish what it is doing before it shuts down. When an instance in an auto-scaling group is terminated, processes do not get the chance to shut down cleanly, so we need to use a lifecycle hook to pause termination. Once set, the flow of termination for a publisher is as follows (a sketch of setting this up follows the list):

  • Lifecycle hook on termination pauses termination and waits
  • A CloudWatch Event is written to say the hook for our auto-scaling group is waiting
  • A CloudWatch rule traps the event and fires a Lambda function in response
  • The Lambda function uses AWS Systems Manager (SSM) to stop the publisher and issue a resume on the termination
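
A minimal sketch of the setup side of this flow, assuming the hook and group names used in the Lambda function below; the rule name and Lambda ARN are placeholders, and the Lambda also needs a resource policy allowing CloudWatch Events to invoke it:

import boto3

autoscaling = boto3.client('autoscaling')
events = boto3.client('events')

# 1. Pause instance termination so the publisher can shut down gracefully.
autoscaling.put_lifecycle_hook(
    LifecycleHookName='SDLWebPubShutdown',
    AutoScalingGroupName='SDLWebPublisherASG',
    LifecycleTransition='autoscaling:EC2_INSTANCE_TERMINATING',
    HeartbeatTimeout=600,                          # assumed: up to 10 minutes to finish
    DefaultResult='ABANDON'
)

# 2. Trap the terminate lifecycle event with a CloudWatch Events rule...
events.put_rule(
    Name='SDLWebPublisherTerminating',             # assumed rule name
    EventPattern='{"source": ["aws.autoscaling"],'
                 ' "detail-type": ["EC2 Instance-terminate Lifecycle Action"],'
                 ' "detail": {"AutoScalingGroupName": ["SDLWebPublisherASG"]}}'
)

# 3. ...and point it at the Lambda function shown below (placeholder ARN).
events.put_targets(
    Rule='SDLWebPublisherTerminating',
    Targets=[{'Id': 'StopPublisherLambda',
              'Arn': 'arn:aws:lambda:us-east-1:111111111111:function:StopPublisher'}]
)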

Each of our publishers is a Managed Instance, which means that we can manage its configuration while it is running without needing to log on to the instance. Managing an instance can be a manual process or it can be done from a Lambda function. In this case, we are going to run a local PowerShell command to stop a service, the publisher. The code, written in Python, is as follows:

Define the handler and the libraries we will use:

import time
import boto3

EC2_KEY = 'EC2InstanceId'  # key of the instance ID in the lifecycle event detail

def lambda_handler(event, context):
    ssmClient = boto3.client('ssm')
    asgClient = boto3.client('autoscaling')

Set the details of the instance that is being terminated, plus the names of the lifecycle hook and auto-scaling group:

    message = event['detail']
    instanceId = str(message[EC2_KEY])
    lifeCycleHook = "SDLWebPubShutdown"
    autoScalingGroup = "SDLWebPublisherASG"

Send the shutdown command to the instance in the form of a PowerShell command and wait until it has completed:

    # Ask SSM to run the PowerShell command that stops the publisher service
    ssmCommand = ssmClient.send_command(
        InstanceIds=[instanceId],
        DocumentName='AWS-RunPowerShellScript',
        TimeoutSeconds=240,
        Comment='Stop Publisher',
        Parameters={'commands': ["Stop-Service TcmPublisher"]}
    )

    # Poll SSM until the EC2 Run Command invocation completes
    status = 'Pending'
    while status in ('Pending', 'InProgress'):
        time.sleep(3)
        status = ssmClient.list_commands(
            CommandId=ssmCommand['Command']['CommandId'])['Commands'][0]['Status']

    if status != 'Success':
        print("Command failed with status " + status)

Finally, complete the waiting lifecycle hook and end the function:

    # Resume the paused termination now that the publisher has stopped cleanly
    asgClient.complete_lifecycle_action(
        LifecycleHookName=lifeCycleHook,
        AutoScalingGroupName=autoScalingGroup,
        LifecycleActionResult='ABANDON',
        InstanceId=instanceId
    )
    return None

Once we resume the termination, the instance is terminated and the auto-scaling group has been downsized.

Summary

Figure 3 – Complete Solution

As we can see in figure 3, we have a complete auto-scaling process which will:

  1. Measure the length of the publishing queue thus recording the current demand
  2. Use these measurements to scale the publishers horizontally to meet demand
  3. Gracefully scale back the publishers when demand subsides

In addition, we receive the following benefits:

  1. Because we create a metric, we can actively monitor the length of the queue on operational dashboards together with other metrics like CPU Utilization, Network and Memory (operational insights)
  2. We can easily provision new publisher instances and recycle bad instances quickly (improved agility)
  3. Reduction in the need for manual intervention (simplification)
  4. We improve the experience of the content editors in the use of the product

New Infrastructure Features of SDL Web 8

SDL recently released the latest version of their web content management product, and this release has some interesting changes from an infrastructure perspective that I would like to highlight.

Product Name Change

Firstly, whilst not an infrastructure change, the product has changed its name from SDL Tridion to SDL Web. The new name, Web, says a little bit more about what it does (at a very basic level) but does somewhat reject the kudos and history that comes with the name Tridion. The name “Tridion” is distinct and easily recognizable; Web is a little more generic and bland, in my opinion.

The version number for the new release is 8, which is a throwback to the “R” releases of Tridion before SDL took over the Tridion company. The last product named in the “R” series was R5.3, and since then there have been the releases 2009, 2011, 2013 and now 8. It is not immediately obvious why this release is 8 rather than 9, but 2009 and 2011 were really R5.4 and R6 respectively.

Improved Cloud Support

The new release has a heavy focus on changes that make it easier to deploy and manage Web. First up is the improved cloud support. SDL always did support “the cloud” by proxy of supporting specific operating systems, and provided they ran as normal it did not really matter where they ran. This meant that an IaaS-based deployment of Tridion was always possible.

What SDL now means is that it “supports specific features of some cloud providers”. Those are only AWS and Azure; nothing is mentioned about Google or Oracle as a platform. SDL Web 8 adds support for Azure SQL and Amazon RDS. The documentation states “Azure and Amazon RDS”, but this is an oversight as it means “Azure SQL”; Azure is the Microsoft cloud platform rather than a specific piece of technology.

All this means is that you can now take advantage of the database-as-a-service offerings from these two providers, provided you are not using the SDL Web legacy pack (e.g. for VBScript templates), transactional core service code (you can write this out) or implementations with certain extensions. This is because Tridion traditionally made use of distributed transactions, which are not supported on AWS or Azure, and legacy-style code still needs MSDTC.

In all cases, you can only use the SQL Server engine, which has a cost impact over and above the MySQL engine options on AWS RDS but is cheaper than the Oracle engines.

If you use a version of SDL Web prior to Web 8, you should note that you can use Azure SQL or AWS RDS for the Content Delivery database but it is simply not supported by SDL.

Topology Management

Topology Management is a new product feature that replaces the existing (now deprecated) publishing management (e.g. Publish Targets) with a more advanced approach which clearly de-couples the configuration of publishing from the management of content, and makes the configuration of a delivery environment something tied to the environment (e.g. production) rather than to the content management database. Managed through PowerShell, the Topology Manager manages the relationships between publications and delivery environments. There is also a .NET API, which means automation is possible from applications that do not use PowerShell; an example of using that API can be found here.

Some key terminology to grasp with topology management itself:

  • Content Delivery Environment: in essence just the same as before and is communicated with through a Discovery Endpoint
  • Topology Type: defines the purposes like “Staging” and “Live” and can define a series of purposes to help support a publishing workflow (e.g. Staging Editors -> Staging Executive Approval -> Live)
  • Topologies: combines one or more content delivery environments which have a particular Topology Type (including Purposes).

And on the Content Management side:

  • Target Types are the same as before in that they are what the user selects when publishing, and each has a Purpose, e.g. “Live”
  • Business Process Type defines how content flows through the organization (published or not) and, in this context, which topology type and target types apply; it also defines the Minimal Approval Status of a target type and the priority, both of which used to be in the publication target. The Business Process Type is set in a publication and can be inherited by the child publications.

These features increase the complexity of publishing management, and it is a little frustrating that there is no user interface, as this was one of the nice features of the current publishing approach. Whilst I support the move, it remains to be seen how much of an advantage this will be in an automation / NoOps approach over what you could already do through existing APIs.

For now, there is no need to change to the new approach, so the advice to customers would be to test the new approach thoroughly before rolling it into a production situation.

More Graceful Publishing Management

SDL has improved how you can manage publishing services. Prior to Web 8, when a publishing service was stopped, it simply forgot everything it was doing (much like a “kill -9”). With the advent of technologies like auto-scaling, this makes simply stopping a service a royal pain, because the service may be busy with something that you do not want it to forget about. These new features are only available in the new publishing approach described above:

  • Pausing the Publisher service: pausing means that the Publisher does not pick up any new transactions, but does keep processing deployment feedback on items already sent for deployment. Assuming you can test whether it has completed all the feedback items it had open (this is unclear), a graceful destruction of a server could then take place.
  • Graceful shutdown of the Publisher service: shutting down the Publisher service allows the following to take place before shutdown completes: all transactions that have not yet been transported are put back in the publish queue (set to Waiting For Publish), and all transactions in the state “Scheduled for Deployment” have their commit packages sent to transport.

With delivery environments now being started and stopped a little more dynamically, there are some additional options to help manage them:

  • Graceful deactivation and reactivation of a Content Delivery environment: halting a Content Delivery environment for maintenance is now possible from the Content Manager server. The documentation is unclear about what happens to publishing transactions being pushed to other sites, as well as what happens to records (e.g. Audience Manager) that are written to databases in other, redundant deployment stacks.
  • Decommissioning of a Content Delivery environment: You can decommission an entire Content Delivery environment without having to unpublish content first.

Content Delivery as a Service (CDaaS)

The major change on the architecture side for delivery is the introduction of Content Delivery as a Service, or CDaaS for short. This new feature means that web applications can consume content from SDL Web using a microservice approach. This allows teams not skilled in Tridion (Web) to talk to Web and minimizes any impact the Web libraries would have on other applications.

Development and maintenance of CDaaS and the connected web applications can happen separately on their own development tracks, and upgrades to CDaaS will not affect the applications using the content (assuming the interfaces remain backwards compatible). You can scale the CDaaS microservice separately from your other application services (a concept drawn from the microservices architectural approach), which means that applications that are not content-rich need not have large content delivery farms.

Discovery Service

The Discovery Service is now the know-it-all of the delivery farm: a centralized web service that is the go-to point for understanding which content delivery endpoints are deployed. The Topology Manager (see above) only needs to talk to the Discovery Service to get everything it needs to know.

[ Update 9th Feb 2016 – Thanks to Nuno Linhares for some corrections on the version numbers and the information regarding the .NET client for the Topology Manager]
