Julian Wraith


Continuous Delivery vs Content Management

Recently I had a discussion where the initial question somewhat baffled me. Having thought about it more, I want to write something about it to see if I can come to a nice conclusion. The question was: is Continuous Delivery a threat to Content Management? The form of the question presupposes that the asker thinks Continuous Delivery actually is a threat to Content Management, but why?

As someone who tries to take an independent stance while leaning heavily on Content Management for the staple of my work, my initial reaction is that no, Continuous Delivery is no threat to Content Management, but nor is Content Management a threat to Continuous Delivery. Both have a place in any internet delivery environment, and such a question is a little like comparing apples and pears. But for kicks, let's look at it in a little more detail.

What is Content Management (CMS)?
Specifically, we are talking about Web Content Management (rather than the general definition). Wikipedia describes this as:

A web content management system is a software system that provides website authoring, collaboration, and administration tools designed to allow users with little knowledge of web programming languages or markup languages to create and manage website content with relative ease. A robust Web Content Management System provides the foundation for collaboration, offering users the ability to manage documents and output for multiple author editing and participation. (source: https://en.wikipedia.org/wiki/Web_content_management_system)

Systems like SDL Web make good on this definition: they allow non-technical users to edit site content (and even manipulate layout), collaborate on content, and version and reuse content across multiple channels and sites. Some systems allow for additional integrations to support content creation, such as translation systems, DAM and so on, and changes can be made to a production website in a matter of minutes. Not all CMS platforms support direct updates; some rely instead on periodic refreshes of the content.

What is Continuous Delivery (CD)?
Continuous Delivery differs in what it, as an approach, is trying to resolve. Wikipedia describes it as:

Continuous Delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery. (source: https://en.wikipedia.org/wiki/Continuous_delivery)

The focus here is on agility of change in a development lifecycle, with a heavy emphasis on automating repetitive tasks to improve productivity and quality. These automated stages include build, test and deployment. They feature integrated products covering things like collaboration, unit testing, versioning and source control, and typically the focus is on product development (code, which could be a website), which can and often does include editing of assets such as labels, (website) content and binary objects.

Comparing the two
Both have overlap in two areas: version control and pipeline management. Both paradigms focus on rapid delivery of assets, and both are only comparable if we are talking about the delivery of a website; a CMS is no good for supporting delivery of a desktop application. Whilst many CMSs support the delivery of code and have a web application, SDL Web does not mandate such a thing: you can develop any application you would like, with varying degrees of code in the CMS itself. Currently, SDL's recommended practice is not to include code in the CMS, but to develop a separate application and have SDL Web deliver content, which it can do in any form you need (e.g. JSON or XHTML). Continuous Delivery, by contrast, specializes in the delivery of code and assets.

In Continuous Delivery, you can enter content assets into a version control system (e.g. Git) and include them in your build, which is eventually deployed. Content can be edited with a suitable IDE in a semi-non-technical form; I do not want to say completely non-technical, because an IDE is typically still a technical tool. CMS systems tend to focus on empowering non-technical users, and organizations that use SDL Web have non-technical marketing users editing content either via forms or using tools like Experience Manager. Content that is entered into the version control system can then be pushed through the delivery pipeline into the deployment together with all the application code. This has an advantage in deployment agility, because the content will always be delivered by the deployment and the content is as available as the application. Where SDL Web sometimes has challenges is that you need a single, scaled and redundant source of content for the web application, and that source always needs to be there for the web application to work. However, separating the two pipelines of code and content using CD and WCM means that you can make minute-by-minute changes to the website without requiring the application to be redeployed. If you want to separate your web application from your CMS, then content can be delivered through content-as-a-service.
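As a minimal sketch of the content-as-a-service idea, the application keeps the layout while content arrives as data. The field names (title, body) and the injected fetch function are hypothetical illustrations, not SDL Web's actual delivery API:

```javascript
// Sketch: layout lives in the application, content arrives as JSON data.
// The content shape (title, body) is invented for illustration.
function renderArticle(content) {
  return "<article><h1>" + content.title + "</h1><p>" + content.body + "</p></article>";
}

function loadArticle(fetchJson, url) {
  // fetchJson is injected (e.g. a wrapper around an HTTP client) so the
  // application can be exercised without a live content service.
  return fetchJson(url).then(renderArticle);
}
```

Because the content is fetched at request time rather than bundled with the build, editors can change it minute by minute without a redeployment.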

Conclusion
Every web application needs content, so if you do not have a CMS you will need to deliver your content through CD. CD will provide enough features to edit and manage content, provided you have the right people and you allow the right speed of updates in the form of multiple deployments per day. What you lose by not having a CMS are the features that a CMS would bring, such as content inheritance, translation, inline editing and per-minute updates of content. For a simple site (i.e. a micro-site), having a full enterprise CMS is perhaps overkill, especially if you do not already have a CMS. If you do, reusing content and content editing processes from the existing CMS is a considerable plus.

If your website is larger in content terms than a simple site, and is really multiple sites in multiple languages with a high amount of content reuse, then using CMS and CD together seems to be the ideal solution. You can manage all your content for all your channels (including campaigning) through one tool and develop awesome apps in record time with CD. One is not a threat to the other.

Going forward, I would recommend that your deployments are done in a microservice architecture and that, within it, your CMS content is delivered as a service (along with all the other things like targeting). This means all deployed sites take advantage of centrally managed content, application deployments are not weighed down by large volumes of content assets, and CMS features like content targeting are uniformly deployed on all channels.

Photo credit: Ian Brown (Flickr)

HelloWorld Extension for Tridion 2011

The HelloWorld extension is an example extension, from MVP Fondue, for the Tridion Content Manager Explorer (CME) which is designed to work on Tridion 2011. It does nothing useful in itself, but gives you a simple introduction to how to hook an extension into the various points of the CME. This should give you a basis for exploring the use of extensions further.

Installing the HelloWorld example

Unpack the HelloWorld sample

Unzip the HelloWorld.zip and copy the files to a directory where you will store your extensions. This does not have to be under the Tridion home directory.

Within this zip file you will find both the extension and a Microsoft Visual Studio solution for developing the extension further.

Create Virtual directory
In Internet Information Services (IIS) Manager, create a virtual directory under your Tridion 2011 website, under CME\Editors.

Give your virtual directory the name “HelloWorld” and point the path to the directory where you unpacked the HelloWorld example:

Grant read permissions only



Configure Tridion to load the extension

1.    Go to the CME configuration folder C:\Program Files\Tridion\CME2010\WebRoot\Configuration
2.    Select the System.Config file and make a backup copy of this file
3.    Open the System.Config in an XML (or text) editor and add the following section as the last sub-element of the XML element <editors default="CME">:

<editor name="HelloWorld">
  <installpath>C:\Extensions\HelloWorld</installpath>
  <configuration>config\HelloWorld.config</configuration>
  <vdir>HelloWorld</vdir>
</editor>

4.    Hard refresh (CTRL+F5 in IE) your browser
5.    View the extension

You can now see the extension appearing in three places:
On the Home ribbon bar:


On the "Greetings" ribbon bar:


As a context menu:


About the extension

Implementation
The extension does not do anything productive; instead it shows you a popup box when it is clicked, so the JavaScript code for this is very simple and not worth going into in depth. However, it uses the same concepts that any other extension would need. HelloWorld.js (under the Commands directory) implements the functions we need: showing the popup box (_execute), determining whether or not the button is enabled (isEnabled), and determining whether the button option is available in the current context (isAvailable). isEnabled's task is to enable the button for the given context; for example, we do not have the option to create a Component when in a Structure Group, so the New Component button is disabled there. Whether or not the button is available depends upon the actions of the user; for example, maybe I only want to use my button when I have a single item selected and not when I select two or more items.
The JavaScript needed to do all these things in the HelloWorld example is quite simple and is as follows:

Type.registerNamespace("Common.Tridion.MVP.HelloWorld");
Common.Tridion.MVP.HelloWorld = function Commands$HelloWorld(name) {
    Type.enableInterface(this, "Common.Tridion.MVP.HelloWorld");
    this.addInterface("Tridion.Cme.Command", [name]);
};

Common.Tridion.MVP.HelloWorld.prototype.isAvailable = function HelloWorld$isAvailable(selection) {
    return true;
};

Common.Tridion.MVP.HelloWorld.prototype.isEnabled = function HelloWorld$isEnabled(selection) {
    return true;
};

Common.Tridion.MVP.HelloWorld.prototype._execute = function HelloWorld$_execute(selection) {
    alert("Hello World!");
};
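The selection-count idea described above (offering the button only when a single item is selected) could be sketched like this. The getCount() accessor on the selection object is an assumption for this sketch, not necessarily the real CME selection API:

```javascript
// Hypothetical sketch: make the command available only when exactly one
// item is selected. getCount() is an assumed accessor, not guaranteed to
// match the actual CME selection object.
function isAvailableForSingleItem(selection) {
  return selection.getCount() === 1;
}
```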

Configuration
In our configuration (HelloWorld.config) we define two things: 1) which menus our extension shows on, and 2) what needs to be run when the extension is used (enabled, available or executed).
To run our extension we need to configure a command, which in turn defines the JavaScript that will be run:

<commands>
  <cfg:commandset id="UniqueName3">
    <cfg:fileset>
      <cfg:file id="HelloWorld">/Commands/HelloWorld.js</cfg:file>
    </cfg:fileset>
    <cfg:command name="HelloWorld" implementation="Common.Tridion.MVP.HelloWorld" fileid="HelloWorld"/>
  </cfg:commandset>
</commands>

The command name "HelloWorld" is our unique reference to the command, and the file is the JavaScript that runs all the commands.
Next, for each menu we want to run our extension from, we need to define an entry in the same configuration file.
Insert before the Preview button on the context menu, under a Greetings submenu:

<ext:contextmenus>
  <ext:add>
    <ext:extension name="Hello World" assignid="" insertbefore="cm_preview">
      <ext:menudeclaration externaldefinition="">
        <cmenu:ContextMenuItem id="Greetings" name="Greetings">
          <cmenu:ContextMenuItem id="HelloWorld" name="Hello World" command="HelloWorld"/>
        </cmenu:ContextMenuItem>
      </ext:menudeclaration>
    </ext:extension>
  </ext:add>
</ext:contextmenus>
Add to the HomePage ribbon bar:
<ext:ribbontoolbars>
  <ext:add>
    <ext:extension pageid="HomePage" groupid="EditGroup" name="HelloWorld" assignid="HelloWorld" insertbefore="PreviewBtn">
      <ext:command>HelloWorld</ext:command>
      <ext:title>Hello World</ext:title>
    </ext:extension>
  </ext:add>
</ext:ribbontoolbars>

Create a new ribbon bar called “Greetings” and add my button to it:

<ext:extension pageid="pageid" name="Greetings" assignid="Greetings">
  <ext:control />
  <ext:pagetype />
  <ext:clientextensions />
  <ext:apply>
    <ext:view name="DashboardView">
      <ext:control id="DashboardToolbar" />
    </ext:view>
  </ext:apply>
</ext:extension>
.....

<ext:extension pageid="Greetings" groupid="EditGroup" name="HelloWorld" assignid="HelloWorld">
  <ext:command>HelloWorld</ext:command>
  <ext:title>Hello World</ext:title>
</ext:extension>

Enabling in the CME
As we saw when we installed the example above, the very last part of the configuration is the addition to the System.Config in the “editors” section:

<editor name="HelloWorld">
  <installpath>C:\Extensions\HelloWorld</installpath>
  <configuration>config\HelloWorld.config</configuration>
  <vdir>HelloWorld</vdir>
</editor>

This element defines how to load the configuration of our extension, the virtual directory that is used and the path to our extension.

Download
You can download the HelloWorld example extension here. This extension is part of the MVP Fondue.

Contribute
You too can contribute to the community of Tridion professionals: feel free to comment on this post with your suggestions or changes, or even use the HelloWorld example to make your own extension. Don't forget to share it with the community!

SDL Tridion 2011 Visual highlights

Recently I attended the bootcamp of the 2011 Community Technology Preview, a preview for existing partners and customers of the latest version of SDL’s WCMS, SDL Tridion 2011.

What has changed the most, or rather the most obvious change, is the Content Manager Explorer, also known as the Tridion GUI. In 2011, apart from running on all the major browsers and even an iPad, it features a redesign that will be familiar to existing users while also taking on board lots of new usability features.



One of the nicest features of the new interface is the ribbon toolbar. In the current version of Tridion, the buttons on the toolbar are somewhat hard to see, and it can be difficult to tell what a particular button is supposed to do. The ribbon features a big icon and some text as well, which should make finding the function you want easy.



If you don’t like the ribbon you can always collapse the ribbon down to the more traditional row of icons.



There are two more features I would like to highlight. Gone are the tabs on the publishing queue; it now shows you all options in the same area. It is common for me to forget that I have other options on the other tab, so having them all in one place is better for old folk like myself.



And lastly, I want to show you another nice feature: error messages inline in the interface, giving the option to feed more back to the user about what is going on. If you missed a message, you can also get back a list of the message history.



Fall in love with SDL Tridion publishing

I try never to write about SDL Tridion related topics; whilst it is useful to the SDL Tridion community, I want to write about other things. However, it has been too long since I have written and I had to do something soon… so here I am again, and I decided to look into publishing with SDL Tridion.

What is publishing?

In short, publishing is the mechanism SDL Tridion uses to put content on a presentation environment. Content and templates are rendered together, and HTML, XML, JSP etc. comes out the other side. When you choose to publish something, you start a chain reaction that sees your content successfully published on your website. During that process SDL Tridion makes sure all your dependencies are taken care of: that single item you chose to publish might lead to a few more items being published, so that the website has no errors or inconsistencies in the content.
There are a number of factors that influence how publishing behaves and how you, as a user, can get along with it. The basic factors are:

  • The implementation
  • The content
  • The hardware
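The render step described above (content and templates rendered together, with HTML or XML coming out the other side) can be sketched minimally. The @@field@@ placeholder syntax is invented for illustration; Tridion's real templating engine is far richer than this:

```javascript
// Minimal sketch of "content + template -> output": placeholders in the
// template are replaced by fields from the content item. Unknown
// placeholders are left untouched.
function render(template, content) {
  return template.replace(/@@(\w+)@@/g, function (match, field) {
    return Object.prototype.hasOwnProperty.call(content, field) ? content[field] : match;
  });
}
```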

So what do we want from publishing?

Mostly you want content there as soon as possible so you can move on to the next task. However, there are a lot of other people doing the same thing, and on large-scale environments or environments with performance challenges you might have to queue up.

So what can you do?

You need to publish content in a way that ensures the least stress and maximizes the available time for publishing. So I have gathered here some tips that might help you.

Publish Structure Groups

When you publish anything in SDL Tridion, the number of items you select in the Content Manager Explorer equals the number of jobs in the publishing queue. Each job in the queue must be completed separately and therefore has all the overhead of being treated as a separate job. If you want to publish a lot of pages, for example part of your site, it makes sense to publish the Structure Group rather than all the individual pages. It will take just as long to get the task done, just with less overhead. If you are worried about failures, use the failure tolerance setting on the Advanced tab of the publishing dialog.

Use priority publishing

Most users have found the priority option in the publishing dialog. It allows you to change the standard priority and so change how the publisher will pick up your publishing job: High means it goes first, Low means it goes last, and Normal is everything in between. Using Low priority is handy for using the available publishing time of your servers without getting in the way of normal work. So, for example, if you need to roll out a future site to Staging, use low-priority publishing. It will get there as soon as the publishers have time to deal with it.

Publish on off peak hours

I looked at the number of items published per day for one of my customers recently. It went like this.

  • Wednesday: 16952
  • Thursday: 21829
  • Friday: 13279
  • Saturday: 1
  • Sunday: 14
  • Monday: 1527
  • Tuesday: 2681
  • Wednesday: 357
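The imbalance in those counts is easy to quantify:

```javascript
// Quantifying the imbalance using the daily counts listed above.
var jobs = {
  wednesday1: 16952, thursday: 21829, friday: 13279,
  saturday: 1, sunday: 14,
  monday: 1527, tuesday: 2681, wednesday2: 357
};
var total = Object.keys(jobs).reduce(function (sum, day) { return sum + jobs[day]; }, 0);
var weekend = jobs.saturday + jobs.sunday;
var weekendShare = weekend / total; // the weekend carried well under 0.1% of the volume
```

Even a modest scheduled republish at the weekend would tap into that unused capacity.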

Notice anything? Many items that could have been published were not, because the weekend was not used. The servers were not turned off; they sat there wasting that publishing time for nothing. So scheduling a task for the weekend (or even the evening) could make better use of the time available.

Publish to staging or live but not to both

Too often I see publishing jobs in a queue that are the same item going to two different places, and those two places are often Staging and Live. The staging site does need to be up to date, but Live is much more important, and the process should be that once you are satisfied with your content you publish it to Live; so why republish it to Staging? If you must publish to Staging, make it low priority, or maybe schedule a complete republish to Staging at the weekend (see above). To enable low-priority publishing all the time, you can set the default priority to Low on a Publication Target. That way all jobs going to Staging will be low priority and you never need to remember to set it.

Check the details of what you are about to publish

Before you publish, take a look at what will publish and make sure it is what you expect. You can do this using the “See items to Publish” button in the bottom left of the publishing dialog.

Plan your roll outs

When rolling out websites, plan how you are going to do it and leave enough time to get everything done without impacting regular business.

A watched pot never boils

Refreshing the publishing queue frequently might give us the satisfaction of knowing the moment a job's status changes to "success", but in reality it does not make the job go quicker. You might also want to change the filtering options so you only see your own tasks. And do not forget: if it is in the queue, it will get published; it just has to wait its turn.

Getting the Infrastructure ROI

There has been much written on CMS vendor websites by marketing gurus on how to get the best Return On Investment, or ROI, on content management software. There are also a number of good articles that give good coverage of the various aspects of your investment that you should consider. From my mostly infrastructure viewpoint, I see a lot of areas where the ROI impact is diminished because certain aspects are not considered fully when selecting the CMS or supporting software. Most papers on ROI mention things like maintenance and hardware costs, but there are a number of other areas where you can look to make sure that you are not overlooking a drain on your ROI.

Hardware and Software

Hardware is often mentioned: it makes sense that the more servers you need, the higher the total hardware costs and their future maintenance will be. Also to be considered is the complexity an implementation gains from having more servers in it, and the impact on how you manage that. Consider what it will take to make your application perform within your requirements and, more importantly, what hardware you really require with regard to your organization's Service Level Agreements.

When it comes to software, most Content Management Systems require some 3rd-party software. Each item of 3rd-party software that is required has a cost associated with it: Oracle or SQL Server? Windows or Linux? Each one has a cost not only in the purchase of a license, but also in configuration and maintenance. For instance, in my experience, Oracle is much harder to get performing correctly than, say, SQL Server. Having worked for Oracle in the past, I know an Oracle database can perform well within what you need for a CMS, but you will need a DBA to set it up for you, and DBAs are expensive.

Beyond such obvious 3rd-party software as the operating system, there are other things like file replication software, small custom solutions, scripts, utilities etc. Each one will affect the money you need to spend on the solution. Your IT team is often used to finding technical solutions to a problem without thinking about the ROI of what they are doing. Does the solution they build prove to be complex, costly to maintain and unreliable, or can the CMS do it for you?

Installation & Upgrades

Specialist software often requires specialists to install it for you, but some specialist software does not require specialists to do it all, or even anything at all. The costs of installation and upgrades are in people and time: who needs to complete it, what do they cost and how much time will they take? Consider the number of people available in the market to undertake the upgrade or install tasks and the quality they can deliver. I would rather have a consultant take two days to do a job that another can do in one day, but do a really high-quality install. Make a trade-off in the resources you will need to get the job done to the level you require.

Configuration

Two points on this: complexity and re-usability. Too often the configuration of complex applications is itself complex, but that does not have to be so. Be sure that configuration is as straightforward as it can be; it will lower the cost of maintaining servers and increase good things like uptime (people will make fewer mistakes). Re-usability is key to deploying large applications; a lack of re-usability in configuration (e.g. hard-coded paths) will mean an increase in configuration mistakes between servers and an overall higher cost of maintaining large sets of differing configuration.

Maintenance

I mentioned the extra cost of complex configuration, but the complexity of the implementation will add more cost. All the 3rd-party software you needed for your implementation has to be maintained, along with all the connections between servers, all the interfaces etc. We can reduce the cost of all these through effective IT procedures, and you should look at whether the CMS software provides solutions for helping with those procedures. Monitoring through SNMP is one way a CMS application can help you maintain it; using standard technologies that integrate well with tools an IT team might already have is another.

One aspect to consider is who will maintain your application? Do you need a specialist or more than one? Do they have the time to maintain the application? Keeping it alive is one thing, making it work for you is another. To get a specialist, do they need training and who do they get that training from? And the training should not just include the CMS but also all the 3rd party software you needed too as well as training on the implementation you built; it will be different to all other implementations out there because it is the one that suits you.

Support

Nearly all vendors offer support contracts to help you with problems with the software. In any typical implementation there will be parts that you have built yourself. These might be as simple as a template, but could be as complex as a custom CRM integration. Typically neither of these falls under what a vendor support team would support: they support the fact that you can write templates and the API that you interact with, but they probably won't support you in getting your template to work how you wanted (unless they are really nice). Product versus implementation is important to consider. If you made a custom part of the implementation, you need to be able to support it yourself or via the implementation partner who created it for you. If you never had to make that custom part because it already existed in your CMS product, it will be supported by the vendor support team.

And so what if you had to create a CRM integration? You should be able to feed that back to the vendor so that they can include it in the product. How close are you to the enhancement request process?

Agile Development with SDL Tridion

Last week I attended a seminar organized by Hinttech on Agile Tridion development. The seminar and its participants discussed the use of Agile development methods when creating sites with SDL Tridion. Agile development is something more and more customers are asking for, but how does that fit into a Tridion project? Laurens Bonnema was on hand to give his view on Agile development and how it should and should not be used. Robert Quaedvlieg from SDL Tridion was also on hand to give a view on where Agile might fit into the SDL Tridion Implementation Methodology. The Implementation Methodology is essentially an SDL Tridion variant on the traditional Waterfall model. Waterfall is the traditional project methodology and lends itself very well to projects where we need to (or do) know what we are going to build up front; Agile tends towards situations where we do not know the requirements at the start.

My aim here is not to explain Agile development (you need to read one of the many good books, or even Wikipedia, for a good short explanation), but I will lay down some very basic concepts so that the rest of this post is clear. Typically, you do not know the complete requirements up front, and part of the Agile process is to define the requirements, or backlog. These backlog items are organized into sprints, and at the end of each sprint the development team has a working product (with the features worked on in that sprint). In theory that means you have something deliverable at the end of each sprint and, in my view more importantly, you are fully aware of the progress you are making. There is more to it than that, but the important factor is that the priority of development can be changed at any time without having to go back and change a monolithic requirements document. At the end, you should have a product that is what you want at the time you want it, rather than a product which you wanted when you wrote the requirements.

So how does a Tridion project fit into this?

Looking at any regular Tridion project, there are a number of things that fit well into an Agile process and others that do not. Some of the things that do not, I do not think ever really could fit well into Agile development, probably because there is nothing to develop, more something to be worked upon. However, even those things can be injected with Agile juice so they flow easily alongside the sprints.

Ignoring the Tridion Implementation Methodology, I will outline some of the various parts of a Tridion project and whether or not I think you should approach them in an Agile (A), Semi-Agile (S) or Waterfall (W) way.

Organisational
Organisational aspects of a Tridion implementation are key to ensuring a successful project in the long term. Like any organizational structure it should focus on the long term and will be the foundation on which this and future projects are built.

  • BluePrint Design (S): The BluePrint is the cornerstone of any Tridion implementation and is key when you move forward past the end of your project. As such it needs to be fully understood before it is laid down. That said, you can change it to some degree as you go forward, so once the initial design is set you can add to it, providing you are prepared to accept the impact of doing so.
  • Security Design (S): Security is who has access to what and what they can do with it. It can be decided in basic form up front, but after that it should be flexible enough to be changed and grown upon.
  • Business Processes and Organization (W): This is a question of understanding the business and how it operates (or wants to operate).
  • Support and Maintenance (W): Defining the support and maintenance processes ties in quite tightly with the business processes.

Content Management
There are two parts to any CMS implementation, the creation of a Content Management environment and then the application to consume the content. In creating our Content Management environment we decide how we are going to manage content both functionally and structurally.

  • Schema Development (A): Will change frequently during the development cycle.
  • Template Development (A): Will change frequently during the development cycle.
  • Folder/Structure Group Setup (A): This supports the template and schema development.
  • Application Development (A): Building Blocks are what make the application. These will change frequently during the development cycle.
  • Event System Development (A): Will change frequently during the development cycle.
  • Workflow Development (A): We will already have decided something about our business processes in a waterfall model. Workflow will change frequently as we add more and more content types.
  • Migration of other systems (S): Often a risk area, migration can be treated semi-Agile with ease. We know some requirements from the start; however, knowing all requirements can be very complex.

Content Consumption
Consuming the content is a very general topic; it can take any form, from a simple .NET application to an MVC framework or web service. The consuming application's job is to take deployed content and present it to the user or another application. It is very much a technical coding exercise.

  • Deployment Extensions (A): Deployment extensions, for example a Google search integration, can easily be part of a sprint.
  • Consuming Applications (A): Often the bulk of the development activity is here, and this can easily be done in an Agile way.

Infrastructural & Integrations
These sorts of activities tend to involve a large amount of people and a very rigid process model. It makes agile work in this area very difficult and you would generally meet stiff opposition.

  • Infrastructure Design (W): Needs to take into account strict processes and design parameters. Often hardware cannot be purchased until a full design is in place.
  • Installation (W): This is an activity that can sometimes be done in sprints (e.g. hardware, OS, CMS, modules etc.), but that is more from practicalities than being designed to look like sprints.
  • Configuration (S): Configuration of servers should be timed to be worked on post-sprint. Configuration and setup adjustments from the sprint can be implemented directly so that the resulting product from a sprint can be put into production. To do this for every sprint, the hardware and software installation should have been completed before the first sprint.
  • Integration Development (A): Most integrations are development activities and can therefore easily be organized into sprints.

Additional Thoughts
Many standard engineering practices will help you in being agile. These practices stem from traditional development but are often overlooked. What is important in each sprint is that the work you have done breaks nothing from any previous sprint, and structured testing can help you achieve this. Unit testing and UAT can both aid the development process and ensure a quality product. UAT can also ensure that the content management environment works well for the content editors; getting the editors in and letting them have a play early on might just ensure that they accept the application when the last sprint is complete.
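A sketch of that per-sprint regression idea, using Python's built-in unittest (the teaser template function is a hypothetical stand-in for a component built in an earlier sprint):

```python
import unittest

def render_teaser(title, summary):
    # Hypothetical component template delivered in an earlier sprint.
    return "<div class='teaser'><h2>%s</h2><p>%s</p></div>" % (title, summary)

class TeaserRegressionTest(unittest.TestCase):
    """Re-run in every sprint so new work cannot silently break
    output that a previous sprint already delivered."""

    def test_structure_unchanged(self):
        html = render_teaser("News", "Latest updates")
        self.assertIn("<h2>News</h2>", html)
        self.assertIn("class='teaser'", html)
```

Running the whole suite at the end of each sprint is what turns "we broke nothing" from a hope into a checked fact.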

Overall you need to use common sense (in this I very much agree with Laurens). Agile is not the cure for all evils: not only are some things just not possible to do in an agile way, but some people cannot (yet) do agile.

53 #ECM #ERM #E20 and #WCM Blogs to Watch — From Twitter Followers

53 CMS Blogs to watch, I made the list!

Did someone steal the meme?

“has someone stolen your meme?” was the DM I got last night. I thought about that for a few moments, can someone steal a meme? Surely the idea is a meme has a life of its own?

The CMS twitterati have had various memes over the last year or so. Evil man Kas Thomas (note, I can’t call him an Evil Genius BTW) started it all with his CMS Vendor Meme, and more recently we have had Laurence Hart’s CMS Origins meme.
Around the end of July I started the Future of Content Management meme, which a number of us answered (not all, I noticed!). We reached no conclusion, and why would we? The future is not so easy to read as all that… so when I saw this, my heart skipped a beat. Not only is it not tagged correctly, it does not even mention the original post! So what happened: was it stolen or not?

To be honest I can’t see what came first, but judging by Irina’s post on CMS Wire being the first to mention the summit, I have a month’s lead on it. I even checked the page source of the summit website to see if there was a first-published date, but that revealed nothing more than a lack of meta tags.

I can imagine that Day wants to discuss the future of content management with its customers; it is a fantastic idea. I don’t agree that Day is it, but their customers bought the software, so for them it probably is the future. And before anyone mentions it, I don’t think _any_ vendor is the future.

Of course, I do not own the future of content management but it does make you think. I feel a little like the boy in the school yard who just had his lunch money stolen.

Hashtag: #CMSFuture
MD5 tag for your posts: 6f82f1d2683dc522545efe863e5d2b73, find more related posts
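For the curious, a tag like that is just the MD5 hex digest of some agreed phrase; I am not asserting what the original input was, but generating one looks like this (the phrase below is only an example):

```python
import hashlib

def md5_tag(phrase):
    # Hex digest of the UTF-8 encoded phrase; posts carrying the same
    # tag can then be found by searching for the digest.
    return hashlib.md5(phrase.encode("utf-8")).hexdigest()

print(md5_tag("The Future of Content Management"))
```

Because the digest is stable for a given phrase, everyone who pastes the same tag becomes searchable as one group.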

How I fell into the trap…

Laurence Hart started another meme, about how we all started in the CMS business, and it seems to have become popular. I tweeted that I would do something, but my Friday went a bit wrong: meetings in two different places in the country and a lot of driving put paid to writing any posts.

Since starting my Computer Science degree at university I have been building websites. Apart from the mandatory project websites I had “Jules’ F1 Garage”, which carried the news and results of the F1 races. It was kind of popular, the most visited daily site at the university. That led me to F1Rogues.com, which essentially ran an alternative fantasy league for F1 fans, where the dirty tricks and misfortunes of drivers were rewarded with copious amounts of points. The site still runs as a blog, though I have not posted there in a while; it is hard to maintain so many online presences at the same time, but I would hate to stop it completely. This site got me started with Content Management. Originally the site, which I ran together with a friend, was built out of plain HTML files. Working from two different countries, the change-and-upload method did not really work out and there were too many mistakes and confusions. So I installed and set up Mambo Server. This ran for a while until I got annoyed at the inflexibility of the whole thing and wrote my own CMS. With my own CMS we were able to link in our page management as well as our fantasy league management. It worked for a long time and is still working just fine.

When I started working, I worked for Interactive Software, who sold data warehousing solutions to BAAN customers. Taken over and sold as ASG Safari, the software was apparently Content Management, or so the marketers would have you believe. From there I went to an e-logistics company called LogiGo.com, in the good old days of the dot com. At LogiGo.com I managed the infrastructure, messing around with Oracle, WebLogic and a lot of other web-related things. After that I moved to Oracle and worked with 9iAS and 9iAS Portal Server, Oracle’s idea of a Content Management system before they purchased Stellent (still misguided, by the way).

Once I had managed to get bored enough implementing websites for bus companies and petrol cards, I moved to SDL Tridion. I have worked for SDL Tridion for seven years, starting as an Infrastructure Consultant implementing CMS environments for SDL Tridion’s customers. I am now a Technical Account Manager, looking at every part of the implementation and trying to get the best technical ROI for our customers.

The Future of Content Management, the follow up

The Future of Content Management is something that I have thought about for a while, but without reaching a good conclusion, so I decided to open it to the floor of CMS gurus. I posted a few weeks ago and went on holiday. Not the ideal way to create a meme, but I could not wait to get started. On my holiday I had neither the chance nor the inclination to even think about it. However, a week back from holiday, I owe you all a follow-up post with at least the highlights.

Today, as I write this post, I am flying between Amsterdam and Chicago on my way to San Francisco. I did not take the direct flight – before anyone points that out – because of the time I have to be back. My flight this morning was overbooked, but they guaranteed me a seat on the plane and told me that I would find out later where I would sit. As it turns out, I got an upgrade to business class. Moments before I found that out, I heard an announcement offering people the option to upgrade to business class for 450 euros. I tsked scornfully under my breath and mumbled something about what an idiot you would have to be to take up the option. Moments later I was in business class for free and suddenly felt a lot more important. Now that is what I call value for money!

So, in between sips of my white wine, I shall have a look at what everyone wrote about the Future of Content Management…

Whilst many of you professed an inability to look into the future, it was clear you all have more than an idea on many aspects. Some of us have more of a dream than others; some of you posted based upon your leanings, whether ECM or WCM, commercial or open source. And some wrote their own rules for how they were going to respond. As my only rule was “there are no rules”, I liked the spirit of doing something different.

I cannot really attempt to outline exactly what everyone said; it is just too much to take on in a way that would justify the meaning of each article. For that you need to read them for yourself and you will find the links at the bottom.

Vendors
With the recent acquisitions and the general downturn, it is likely that the face of the vendor landscape will change more than it already has over the course of the next year. The recent Forrester and Gartner reports have re-asserted some companies’ positions and surprised people with how the reports view other companies. Those that do well will no doubt pick on the weak until we lose a few more vendors. Is Open Source the way? Well, as Adriaan Bloem pointed out, Open Source is just another license. If commercial software has trappings, then Open Source does too, just different ones. I am not a believer that open source will overtake commercial software, just that commercial software will leverage open source (and especially open connectivity) just as well as Open Source does. In that sense, the playing field will remain level for a long time to come.

I hope and pray monolithic vendors die a slow and painful death but I just know uncreative people will continue to advise customers to invest in such solutions.

Technology
“I’ve been in this WCM industry awhile, so lets put aside the crystal ball a minute and ask if we have yet delivered on the CMS promise of 10 years ago? ”

Judging by the thoughts from everyone the simple answer is NO.

Whilst Ian was talking about giving the people the power, the quote fits right in here too. We all grumbled about the lack of standards and the continuation of the proprietary standards that rule our customers. There is CMIS, but it lacks a really usable implementation, and JCR just is not a standard: yes, it is if you use Java, but not for the rest of the world.

Uniform repository access will definitely help, but mostly it is going to help with being able to migrate systems and join multiple systems together. In the end, if we cannot fix even the smallest of real-world problems, you can forget trying to get two different CMS systems to just “Plug and Talk”. On the other hand, it is good to know that Sense/Net “barely has any serious CMS vendor issues that have been upsetting customers throughout the years”, even if the list was not complete.
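To make "uniform repository access helps migration" concrete, here is a minimal sketch in the spirit of CMIS; every class and method name is hypothetical, standing in for two vendor-specific APIs hidden behind one interface:

```python
class Repository:
    """Hypothetical uniform repository interface."""
    def get_item(self, item_id):
        raise NotImplementedError

class VendorARepository(Repository):
    """Wraps one vendor's (here, a plain dict stands in for its API)."""
    def __init__(self, store):
        self.store = store
    def get_item(self, item_id):
        return self.store[item_id]

class VendorBRepository(Repository):
    """Wraps a second, differently shaped system behind the same interface."""
    def __init__(self, records):
        self.records = records
    def get_item(self, item_id):
        return dict(self.records[item_id])

def migrate(source, target_store, item_ids):
    # Migration code no longer cares which vendor sits behind `source`.
    for item_id in item_ids:
        target_store[item_id] = source.get_item(item_id)
```

The point is the `migrate` function: written once against the uniform interface, it moves content out of either system unchanged, which is exactly the migration and joining benefit described above.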

Concepts
I spend a lot of time thinking about this (well, OK, a little bit of time) and it is something I like to hear people like Frank talk about. He has great views on what content is and how it should be used, but he did not post on this topic (booo!). The challenges we have are how to use the content we have, how long it should exist, and even what content is. Is the content that we produce going to live and die in a moment, or does it have a real life? Social media is perpetuating content that has a very limited life. When was the last time you looked for a Twitter post you had seen a while back? You do not; it has ceased to exist, it is an ex-piece of content. If anything, Twitter is a discovery engine: you can discover what is going on, not where to buy a cheap car. This short life also means that some social content has a much more limited value and you can be more risky with it. However, most commercial CMS systems do not truly hand the power to the people, and there are also limited tools to help employees create, manage and distribute content remotely or on the move, which is something social media requires. For open source the picture gets better, but the most I can manage is Twitter from my iPhone.

That said, almost all vendors push social media connectivity as part of their products, but as Ian points out, “But, for all that, websites are still the destination – the majority of tweets are linking people with web content.” So, do not only give us Twitter to tweet our content; give us the mobile application to write the content and then tweet it.

In the end the Twitter bubble will burst unless something happens to give it true value. If that happens, the selling point of Content Management Systems will move on to other new topics; hopefully this will be a back-to-basics move towards making content work powerfully, rather than enhancing offerings with badly integrated applications that demo well.

Articles
The full list of articles is as follows:

There is still a chance to contribute to the discussion by posting your view on the Future of Content Management. We did not hear from a great many people; if you post, do not forget to tag your post.

Hashtag: #CMSFuture
MD5 tag for your posts: 6f82f1d2683dc522545efe863e5d2b73, find more related posts

© 2018 Julian Wraith. All rights reserved.
