Introducing Crafter CMS Javascript SDK

Crafter CMS is a headless CMS that provides all of its services and capabilities to consumers via RESTful APIs. This enables developers to easily build content support (externalized management of content, strings, media, etc.) into any application written in any programming language.

Crafter CMS also supports language-specific (e.g. Java, Groovy, Javascript, .Net) and framework-specific (Angular 6, React, Vue, etc.) bindings.  These bindings make the programming model not just simple but native to the language or framework. Using RESTful APIs is easy, but native bindings further reduce programming time and complexity by simplifying and reducing the code required to get the job done.

In this blog, we will focus on the language-specific bindings for Javascript, the Crafter CMS Javascript SDK.  These bindings can be used in any client-side (e.g. browser-based) or server-side (e.g. Node.js) application.

The High-level Architecture

Before we get into the specifics of the Javascript SDK let’s quickly review the architecture.  Below is a diagram that illustrates how the SDK works in your Javascript application with Crafter Engine and other Crafter CMS components to deliver managed content to your application.

You can learn more about the overall architecture of Crafter CMS here.  In the diagram above we illustrate the following:

  • Authors work in Crafter Studio (a web-based application) to create and manage content.  Crafter Studio provides authors with the tools they need to make updates, perform workflows and publish.  Approved updates can be published to Crafter Engine(s) for content delivery at any time via the UI without any technical assistance.
  • Crafter Engine exposes content, navigation, and structures via RESTful services that are available over HTTP(s).  Content and data are returned to the caller as JSON.
  • The caller in this architecture is Crafter CMS’s Javascript bindings library which is making requests on behalf of the application.
  • The Crafter CMS Javascript SDK provides the application code with Javascript-based interfaces and classes to interact with.  There's no need for the application code to concern itself with REST, JSON and other lower-level concepts.

Pre-requisites for Using the Library

  • NPM / Yarn installed
    • The SDK releases are published to the Node Package Manager (NPM) registry.  Example: https://www.npmjs.com/package/@craftercms/content
    • NPM will help you easily download the proper dependencies.  You can also use NPM or Yarn to build and execute your application.
    • In our example, I will use Yarn to manage my package.json file and to assist me in building my application.
  • Access to a running Crafter Engine instance and site.
    • You may need to configure the site to allow requests from your application due to CORS (Cross-Origin Resource Sharing) constraints within browsers if the app and Crafter Engine are running on different origins.  We'll explain this further later.

Building a Simple Javascript Application

The point of our application in this blog is to demonstrate how to include and use Crafter CMS’s Javascript SDK in an application and to use it to retrieve content from a running Crafter Engine instance.   We’re going to keep the application extremely bare bones in order to illustrate that this library is independent of any specific frameworks or platforms such as Node.js, React, Angular6, Vue etc.

For those looking to get right into the meat of the integration, use of the SDK starts at Step 5.

Step 1: Create a basic structure and build for the app

The first thing we want to do is to create a directory for our application source.  I will call my directory simple-js. Inside that directory add a file called package.json with the following content:

{
  "name": "simple-js",
  "version": "1.0.0",
  "description": "Simple JS app that will use Crafter CMS Javascript SDK",
  "main": "index.js",
  "author": "Russ",
  "license": "MIT",
  "dependencies": {
    "@craftercms/classes": "^1.0.0",
    "@craftercms/content": "^1.0.0",
    "@craftercms/models": "^1.0.0",
    "@craftercms/search": "^1.0.0",
    "@craftercms/utils": "^1.0.0"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack"
  },
  "devDependencies": {
    "webpack": "^4.17.1",
    "webpack-cli": "^3.1.0"
  }
}

 

This file describes our application, how to build it and what is needed to build it.  In the content above we can see:

  • The application metadata.
  • The application dependencies we are requiring are the various Crafter CMS Javascript SDK dependencies ( “@craftercms/content”: “^1.0.0”, “@craftercms/search”: “^1.0.0”, etc.)
  • The application build/dev dependencies that help us compile the application for distribution ( “webpack”: “^4.17.1”, “webpack-cli”: “^3.1.0”.)

Step 2: Perform an initial build

Run yarn in your application directory.  This will download all of your dependencies.

Step 3: Create an application Javascript

Note that in our package.json we've listed index.js as the main for the application.  Let's create the src directory and create an index.js file inside it.

I'll use the Unix touch command (touch src/index.js) to create the index.js.  This creates an empty file.

Once the file is in place you can run a yarn build command.   This will create a dist folder with a compiled main.js.  The main.js file contains a minified, concatenated bundle of your dependencies and your index.js code (which is currently empty).
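Webpack 4 will bundle src/index.js into dist/main.js with zero configuration. If you prefer to make the entry point and output explicit, a minimal sketch of an optional webpack.config.js is shown below; webpack 4's defaults already match this layout, so you can skip it:

// webpack.config.js (optional): makes webpack 4's defaults explicit
module.exports = {
  mode: 'production',              // produces the minified bundle described above
  entry: './src/index.js',         // our application entry point
  output: { filename: 'main.js' }  // written to ./dist by default
};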

Step 4: Create an application HTML

Now that our application is building, let’s create a page we can load in a browser and add some basic logic.

In your simple-js directory add a file called index.html with the following content:

 <!doctype html>
 <html>
   <head>
     <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1">
     <title>Getting Started</title>
   </head>
   <body>
     <script src="./dist/main.js"></script>
   </body>
 </html>

As you can see, all this HTML does is declare the head and body of our page and call the main.js script file.

To make our application a little more interesting, let's add some basic logic to our application script.  Add the following code to the src/index.js file:

function component() {
  var element = document.createElement('div');
  element.innerHTML = "Hello World";
  document.body.appendChild(element);
}
component();

Now rebuild your application using the yarn build command and open the index.html file in your browser to see your application running.

Step 5: Calling Crafter CMS APIs with our application

Now that we have a very basic, working application it’s time to use the Crafter CMS Javascript SDK.

Crafter Engine Instance

In order to use the Crafter CMS APIs, you will need a running Crafter Engine instance with a project set up that you have access to.  It’s easy to get started. Here are some helpful links on getting things set up:

CORS: Cross-Origin Resource Sharing rules

If you are following this blog step-by-step you are likely running Crafter Engine on localhost and you are loading your HTML file from local disk (e.g. file:///Users/rdanner/code/simple-js/index.html).

From a browser's perspective, "http://localhost/…" and "file:///…" are different origins.  A browser, by default, will stop you from making AJAX calls from code loaded from one origin to a different origin unless the receiving origin gives the browser an OK with the proper HTTP headers. This browser behavior (the same-origin policy) protects users against cross-site attacks; CORS (Cross-Origin Resource Sharing) is the mechanism a server uses to allow such requests.

In Crafter CMS, you can set up the proper headers for the receiving server (Crafter Engine) to support this example by taking the following steps:

  1. In Crafter Studio, for your site/project, go to Site Config (found in the Sidebar).
  2. Select the Configuration tool.  In the drop-down, select the Engine Site Configuration option.
  3. Add the following configuration and click Save at the bottom of the screen.
    <site>
     <!-- CORS Properties -->
     <cors>
       <enable>true</enable>
       <accessControlMaxAge>3600</accessControlMaxAge>
       <accessControlAllowOrigin>*</accessControlAllowOrigin>
       <accessControlAllowMethods>GET\, POST\, PUT</accessControlAllowMethods>
       <accessControlAllowHeaders>Content-Type</accessControlAllowHeaders>
       <accessControlAllowCredentials>true</accessControlAllowCredentials>
     </cors>
    </site>

Import the APIs into your application javascript

Add the following code at the very top of your index.js script file.  Here we are declaring the classes we will use and indicating where they are imported from (the SDK.)

import { ContentStoreService, NavigationService, UrlTransformationService } from '@craftercms/content';
import { crafterConf } from '@craftercms/classes';

Configure the SDK for your Crafter Engine endpoint base URL (server name, port etc)

Now that you have declared the Crafter CMS SDK services you want to use, you are almost ready to start retrieving content.  Before we can, we need to make sure the SDK APIs know where the Crafter Engine endpoint (server) is and which site to pull the content from.  To do that we have two options:

  • We can set up a shared configuration object once and pass it with each call
  • Or we can build the endpoint configuration explicitly for each call

Since we’re going to use the same endpoint and site, we’ll set up a configuration object and pass it when we call the APIs.  To configure the endpoint we add the following code to our script after the import statements:

crafterConf.configure({
 site: 'editorial'
});

The configuration above states that our API calls will reference the site in Crafter Engine that has the "editorial" ID. By default, the configuration assumes that Crafter Engine is running locally on port 8080. To change this, add a baseUrl property of the form YOUR_PROTOCOL://YOUR_DOMAIN:YOUR_PORT, for example:

crafterConf.configure({
 site: 'editorial',
 baseUrl: "https://mydotcomsite"
});

If you want the system to assume that the app and the API endpoints are on the same server, you can set baseUrl to an empty string ("").
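For example, a minimal sketch of that same-origin configuration:

crafterConf.configure({
  site: 'editorial',
  baseUrl: ''   // requests are made relative to the origin the app was served from
});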

Call a Crafter CMS API and get content

Make the code changes:

Now that the library is imported and configured, we’re ready to use it.  Let’s add the code to our example to make a request for content and put it on the screen for the user.   Below is the entire index.js script for our application with changes:

import { ContentStoreService, NavigationService, UrlTransformationService } from '@craftercms/content';
import { crafterConf } from '@craftercms/classes';

crafterConf.configure({
 site: 'editorial'
})


function component() {

   ContentStoreService.getItem("/site/website/index.xml", crafterConf.getConfig()).subscribe((respItem) => {
      var element = document.createElement('div');
      element.innerHTML = respItem.descriptorDom.page['hero_title'];
      document.body.appendChild(element);
    })
}

component();

What we've done is replace the code in the component function that was hardcoded to write "Hello World" on the screen with code that calls Crafter CMS for content and writes the retrieved content on the screen instead.

Let's take a closer look at the API call: ContentStoreService.getItem

  • The first parameter is the path of the content item we want to return.  The example uses the home page of the editorial site.
  • The second parameter is our configuration object which supplies the site and endpoint base URL configuration.
  • Lastly, you will note the subscribe operation. The SDK makes asynchronous calls to the Crafter Engine component of Crafter CMS, and subscribe allows you to supply the code you want to execute when the call returns.  In our example, this is where we create an element, get our content value (hero_title) from the returned response (sketched below), and then update the application with the content.
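For reference, here is a rough, abbreviated sketch of the shape of respItem for this call. The exact properties depend on your content model; treat any field other than descriptorDom and hero_title as illustrative rather than a guaranteed part of the response:

// Illustrative only: abbreviated shape of the item returned for /site/website/index.xml
// (field names other than descriptorDom and hero_title are assumptions for illustration)
const respItem = {
  descriptorUrl: '/site/website/index.xml',
  descriptorDom: {
    page: {
      hero_title: 'Hello from the CMS',
      // ...the rest of the fields defined by the page's content type
    }
  }
};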

Compile the code

Once you have updated your code, rebuild the application.

Test the application

After you compile, if you refresh your application you should see content from your Crafter CMS project show up in the updated app!

Congrats!  Your application’s content is now externalized and managed in the CMS.

Step 6: Update your content using the CMS

Now that our code is complete, content authors can update the content at any time via the CMS, and we don't need to rebuild or redeploy the application.  Let's use Crafter Studio to update the content for our app.   You can find detailed how-to help here:  Creating and working with your First Crafter Site

In Crafter Studio, find the content you want to edit

In our case, the content we’re consuming is the hero title.  Let’s make an edit to that content:
In your site’s content preview, turn on the editing pencils and click the pencil for the Hero Title (as shown above.)


Note that the author has a preview of the content.  This is something very different and ultimately much better than what you see with other Headless CMS systems.

With Crafter, it’s always possible to provide the user with a preview of the content they are editing regardless of the front end framework.  Preview capability is key because it makes editing content easier for authors.

Make a content change and save

After you click the pencil for the Hero text, make the change and then click the Save and Close button at the bottom of the form.  You will see your update in the CMS right away.

Refresh the example application

Wow! Those changes in the CMS look great (if I do say so myself :D!)  Now it’s time to see the changes in our application.  Simple, just refresh your application in the browser. No code updates required!

It's really just that simple!  A few lines of code allow us to create applications with content, configuration and other strings that we can update at any time without needing to update or redeploy our application code.

What about workflow and publishing?

Good question!  Most of the time you want a review and approval process before content updates to applications are made public.  We call this process workflow and publishing. Crafter CMS has workflow and publishing capabilities right out of the box.

To keep our example as simple as possible, I kept the architecture simple and pointed our application directly at Crafter Studio and the preview server on localhost, port 8080.

In this configuration, the API will see updates immediately.  The architecture in the High-Level Architecture section above shows a typical production topology in which updates go through approval workflow and publishing.  Once the content is approved and published, it is then available to the application.

Getting to know the SDK API

In our example application, we consumed a single service method from the SDK.  Now that we know how to use the SDK let’s take a look at the various services and service methods the SDK provides:

Content Service (@craftercms/content)

This package contains services for retrieving content and navigation using APIs offered by Crafter CMS (a rough usage sketch follows the API link below).

  • Get a content object and its associated content attributes
  • Get a tree of content objects (children, trees etc)
  • Get dynamic navigation and breadcrumb structures
  • Execute URL transformations

API Specification and examples:
https://www.npmjs.com/package/@craftercms/content
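As a rough sketch of how these capabilities map to the services imported earlier in this article: getItem is the call used above, while the other method names and parameter orders are assumptions that should be verified against the package README linked here.

// Sketch only: getItem appears earlier in this article; getTree, getNavTree and transform
// are assumed method names/signatures -- verify them against the @craftercms/content README.
import { ContentStoreService, NavigationService, UrlTransformationService } from '@craftercms/content';
import { crafterConf } from '@craftercms/classes';

const config = crafterConf.getConfig();

ContentStoreService.getItem('/site/website/index.xml', config).subscribe(item => console.log(item));
ContentStoreService.getTree('/site/website', 2, config).subscribe(tree => console.log(tree));        // assumed
NavigationService.getNavTree('/site/website', 3, config).subscribe(nav => console.log(nav));         // assumed
UrlTransformationService.transform('storeUrlToRenderUrl', '/site/website/index.xml', config)         // assumed
  .subscribe(url => console.log(url));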

Search Service (@craftercms/search)
This package contains tools for integrating your application with Crafter Search.

  • Create Crafter Search query objects
  • Execute Crafter Search queries

API Specification and examples:
https://www.npmjs.com/package/@craftercms/search

 Conclusion

Crafter CMS's Javascript SDK gives front-end and full-stack developers who work in Javascript and related technologies a native programming API that makes building content-rich applications much easier.

When we stop and think about it, almost every application has content in it.  If that content is easy to update, and non-technical users can update it at any time without a build or deployment, everybody wins:

  • The content in the application is richer and much fresher
  • Developers spend time developing rather than making non-code related updates
  • Removing non-feature related deployment distractions gives us more time and makes our teams more agile.
  • Our business users can make their changes at any time which makes the business more agile.

Headless CMS capabilities provide non-technical users with the ability to update, workflow and publish an application’s content at any time without the involvement of developers and IT.

Crafter CMS’s headless capabilities go beyond traditional headless CMS systems by providing authors with an in-context preview for content editing and giving developers the freedom to use whatever front-end technology they want.

 

Integrate Crafter CMS with Jenkins to Automate DevOps: Code Forward, Content Back Process

Great DevOps helps us build better software products faster. One of the key elements of DevOps is automation within the development process across lower development environments.  Jenkins, Bamboo, Travis and many other platforms like them are used by DevOps teams to help automate the process of Continuous Integration and Continuous Delivery (CI/CD). To a large degree, real support for CI/CD and agile development is woefully missing in content management, a core component of digital experience platforms.  The toolset and architecture of today's content management platforms make CI/CD difficult, sometimes near impossible, to support, let alone automate.

How easy is it for your team to take content from the production CMS and update content in lower environments like Dev and QA?  How easy is it to roll out new features?  Oftentimes this process is not only laborious for the DevOps team, it also halts the content authoring process.

Crafter CMS, an open source CMS with a Git-based repository and a Java/Spring/Groovy-backed stack, is a game changer in this regard.  Further, Crafter integrates directly into your CI/CD process.  In this article, using Jenkins as our example, we'll demonstrate how you can connect your production CMS to your developer workflow to facilitate a Code Forward, Content Back(TM) workflow.
You can read more about this process in detail here:

Although our example is based on Jenkins, the scripts and flow used in this blog are applicable to nearly any automation platform.

Understanding Where Code Forward, Content Back Automation Fits

The fact is that every step of your DevOps process is open to automation.  In this section, we’ll cover the common points of integration, specifically:

  • The point in the process where you want to move content from your production CMS “back” to lower environments to support development and testing.
  • The point where you want to promote code “forward” from the development process to the production CMS so it can be published.

Both of these points in the process are illustrated and addressed in the diagram below by the double-headed arrow labeled Code Forward, Content Back.  With respect to the CMS, Code Forward, Content Back is the most important aspect of the DevOps automation.

Crafter CMS's Git-based repository is the foundation of the automation.  Our automation running in Jenkins is going to leverage APIs within the authoring environment (Crafter Studio) to sync code and content with the development process.  APIs will also be used to publish code synced to authoring to the production delivery infrastructure.

Implementing Code Forward, Content Back Sync

Now that we've established the integration we're addressing, let's focus on configuring it.  Take a look at the diagram below; it elaborates on the previous diagram, showing how the sync occurs.

Note that both the Production authoring environment and the Development environment have a repository.  In authoring, this is a local Git repository.  In development, this is most often a centrally hosted Git repository that supports workflow and review (like Bitbucket, Gitlab, Github, and others).   You can think of the repository under authoring as the Content Repository and the repository supporting developers as the Code Repository.  These names (Content Repository, Code Repository) are simply labels to help describe their purpose and assist us in addressing them in the context of this article.

To facilitate this flow, the Content Repository under authoring/Crafter Studio declares the Code Repository as a remote. The primary way of "syncing" work between Git repositories is through pull and push operations.  Before you can push your work to a remote, you must first pull and merge the updates (if any) from the remote.  Once done, you can push your changes to the remote.

Automating the Pull / Push of Code and Content

To help automate the process described above, Crafter Studio (the authoring and repository component of Crafter CMS) supports a set of APIs.  You can find a full listing of Crafter Studio APIs for Crafter 3.0 here: http://docs.craftercms.org/en/3.0/developers/projects/studio/index.html

These APIs are easily invoked by a script.   You can use the following example script in your own implementation:

codeforward-contentback-sync.sh

#!/usr/bin/env bash
studioUsername=$1
studioPassword=$2
studioserver=$3
project=$4
remote=$5
branch=$6

echo "Authenticating with authoring"
rm session.txt
curl -d '{ "username":"'$studioUsername'", "password":"'$studioPassword'" }' --cookie-jar session.txt --cookie "XSRF-TOKEN=A_VALUE" --header "X-XSRF-TOKEN:A_VALUE" --header "Content-Type: application/json"  -X POST $studioserver/studio/api/1/services/api/1/security/login.json

echo "Pull from remote (get code waiting to come to sandbox)"
curl -d '{ "site_id" :"'$project'", "remote_name":"'$remote'", "remote_branch":"'$branch'" }' --cookie session.txt --cookie "XSRF-TOKEN=A_VALUE"  --header "Content-Type: application/json" --header "X-XSRF-TOKEN:A_VALUE" -X POST $studioserver/studio/api/1/services/api/1/repo/pull-from-remote.json

echo "Push to remote (send content waiting to go to development)"
curl -d '{ "site_id" :"'$project'", "remote_name":"'$remote'", "remote_branch":"'$branch'" }' --cookie session.txt --cookie "XSRF-TOKEN=A_VALUE"  --header "Content-Type: application/json" --header "X-XSRF-TOKEN:A_VALUE" -X POST $studioserver/studio/api/1/services/api/1/repo/push-to-remote.json

Use of the script:

codeforward-contentback-sync.sh [USERNAME] [PASSWORD] [AUTHOR_SERVER_AND_PORT]  [SITE_ID] [REMOTE_NAME] [BRANCH_NAME]

USER_NAME is the Studio user (application account)
PASSWORD is the Studio user password (application account)
AUTHOR_SERVER_AND_PORT the protocol server name and port of Studio
SITE_ID the ID of the site
REMOTE_NAME the name of the upstream (typically origin)
BRANCH_NAME the name of the branch (typically master)

Example:
codeforward-contentback-sync.sh devops mydevopspw http://localhost myprojectID origin master

The script is quite simple.  It authenticates to Crafter Studio, performs a pull from the Remote Code Repository and then if there are no conflicts, performs a push.  These two operations move code updates forward to the production Sandbox (not yet live) and content back to the development process.  Only approved code that’s been moved to the “master” branch with the intention to release is moved forward.

Calling the Script in Jenkins

The first step is to create a project.  Give the project a clear name, select Freestyle project, and then click OK to continue.

There is no Source Code Management (SCM) aspect to this project.  The most typical use case for the content-back workflow is a scheduled event: every hour, day, week, etc.

 

The next step is to define build triggers.  Since you are calling APIs and the content-back sync is most likely run on a schedule you define, indicate that there is no Source Code Management (SCM) for the build.

Select "Build Periodically" and define your schedule.  Schedule definitions use standard cron syntax (for example, H * * * * runs the sync once an hour).

Finally, you must define that you want Jenkins to call your script:

Once you have done these steps you are ready to go.  Manually invoke this build any time you want directly through the Jenkins console.  I recommend testing it to make sure your parameters and schedule are correct.

Publishing Code That’s Been Sync’d to Sandbox

When you run the Code Forward, Content Back process, code in the remote Code Repository is moved to the production authoring sandbox (the Content Repository).  This code is now staged for publishing; it is not yet live.  Crafter Studio must publish the code, making it available to your delivery servers.  This in and of itself is awesome: global, elastic deployment at the touch of a button.

So how is it done?  Crafter Studio provides an API that allows you to publish commit IDs.  You can provide a single commit ID or you can provide a list.  It’s typical as part of your release process to “Squash” all of the commits in a given release into a single commit ID.  This allows you to address all of the work as a single ID/moniker which makes it very easy to move, publish and roll back without missing anything.

These APIs are easily invoked by a script.   You can use the following example script in your own implementation:

publish-code.sh

#!/usr/bin/env bash
studioUsername=$1
studioPassword=$2
xsrf=AUTOMATED
studioserver=$3
project=$4
env="Live"
commit=$5

echo "Authenticating with authoring"
rm session.txt
curl -d '{ "username":"'$studioUsername'", "password":"'$studioPassword'" }' --cookie-jar session.txt --cookie "XSRF-TOKEN=A_VALUE" --header "X-XSRF-TOKEN:A_VALUE" --header "Content-Type: application/json"  -X POST $studioserver/studio/api/1/services/api/1/security/login.json

echo "Publishing Commit $commit"
curl -d '{ "site_id" :"'$project'", "environment":"'$env'", "commit_ids": ["'$commit'"] }' --cookie session.txt --cookie "XSRF-TOKEN=A_VALUE"  --header "Content-Type: application/json" --header "X-XSRF-TOKEN:A_VALUE" -X POST $studioserver/studio/api/1/services/api/1/publish/commits.json

Use of the script:

publish-code.sh [USERNAME] [PASSWORD] [AUTHOR_SERVER_AND_PORT]  [SITE_ID] [COMMIT_ID] 

USER_NAME is the Studio user (application account)
PASSWORD is the Studio user password (application account)
AUTHOR_SERVER_AND_PORT the protocol server name and port of Studio
SITE_ID the ID of the site
COMMIT_ID the squashed commit ID of the items coming from the release branch

Example:
publish-code.sh devops mydevopspw http://localhost myprojectID 378d0fc4c495b66de9820bd9af6387a1dcf636b8

The script is quite simple.  It authenticates to Crafter Studio and invokes a publish for the provided commit.  This operation publishes the associated changes to the Live environment, making them available to your delivery servers.

Calling the Script in Jenkins

See configuration of sync script above.  The steps are exactly the same with the following differences:

  1. You will call the publish-code script instead of the codeforward-contentback script.
  2. You will prompt the user for a COMMIT_ID parameter value via the UI on each invocation and pass it to the command line as the COMMIT_ID argument.

 

 

That’s it!  You can now publish your code releases via commits to your entire delivery infrastructure regardless of its size or distribution.

Conclusion

CMS platforms are notorious for refusing to play nice with CI/CD and agile development practices, processes, automation, and tools like Jenkins, Bamboo, Travis and others.  Databases and JCR repositories are one of several fundamental architectural limitations that make supporting CI/CD difficult for CMS platforms. Crafter is an open source, dynamic CMS with a unique Git-based repository specifically designed to fit neatly into your development practices and bring your authoring and development teams together in a way never before possible, improving and increasing the rate and volume of innovation!

Content Management Meets DevOps (Part 1 of 2) How a Git-based CMS Improves Content Authoring and Publishing

Traditional CMS platforms based on SQL and JCR repositories have begun to show major signs of weakness in keeping up with today’s demands for a high rate of innovation and rapid scalable deployment on modern elastic architectures. This is nowhere more evident than the move towards headless CMS. Many CMS platforms today push headless, or what some call Content as a Service (CaaS), as the one-stop-shop solution to the struggles most CMS platforms have in providing support for scalability, multi-channel, and development integration. It’s not. Headless capability is important but it has its own limitations.

Crafter CMS, an open source Git-based dynamic CMS, tackles all of these challenges head-on with a set of technologies that incorporates lightweight development, integration with developer tools and processes, and elastic scalability for content delivery, providing the ability to serve any front-end technology via API or markup with fully dynamic content.

In this two-part series, we’ll explain the basic mechanics that support content authoring, publishing and developer workflow and demonstrate how these mechanics combined with Crafter’s architecture and developer stack set a new standard for what a CMS can provide in today’s competitive digital marketplace.

Content Management and Deployment Mechanics

In this section we’ll explore the mechanics of how (non-technical) content authors work with the CMS and how their changes, once reviewed and approved, are deployed from their authoring tools to a live content delivery system.

Crafter CMS is decoupled, composed of several microservices where content authoring and content delivery services are separated into their own distinct, subsystems. This model has many advantages related to security, scalability and delivery flexibility. In a decoupled architecture, content is published from authoring to delivery as shown in the diagram below. The delivery system may be any number of independent digital channels – enterprise website, mobile app, social, augmented reality, digital kiosk or signage, e-commerce front end, microsite, etc.

Crafter CMS supports authoring via Crafter Studio, which sits on top of a headless Git-based repository and publishing system. Content authors don't need to know anything about Git. They simply work with Crafter Studio, a web-based application. Crafter Studio provides beautiful content entry forms, in-context editing with multi-channel preview, drag-and-drop layout, component placement, image cropping, and more. While content authors perform their work, Crafter manages all of the Git mechanics behind the scenes (locking, a time-machine-like Git-based version history, and an audit trail), all accessible to them via the Studio UI.

 

Figure 1: Crafter CMS microservices applied to decoupled architecture

Crafter’s publish mechanism deploys content from the Authoring system to the Delivery system. Content logically flows from the authoring environment to the delivery environment. The mechanism for this, given the underlying Git repo, is a “pull” type interaction. Meaning the actual network conversation is initiated from the delivery infrastructure to the authoring infrastructure, as shown in Figure 2.

Each delivery node has a Deployer agent that coordinates deployment activities on the node for each site that is being delivered on that node.

  • Delivery nodes can initiate deployment pulls either on a scheduled interval (a “duty cycle”), on-demand via an API call, or both.
  • The Deployer performs a number of activities beyond receiving and updating content on the delivery node. A list of post-commit processors is run. These can be used to execute updates on search indexes, clear caches and perform other such operations.
  • The Delivery node maintains a clone of the Authoring Git-based repository.
  • The Crafter Deployer takes care of managing the synchronization of the delivery node's clone of the authoring repository from the authoring environment.
  • Git-mechanics ensure content sync is 100% accurate.

Figure 2: Crafter’s Dynamic CMS Publishing via Git

Technically speaking, Authoring does not require knowledge of the Delivery nodes. This makes the architecture more elastic, globally scalable and even enables Crafter to support disconnected and intermittent content delivery.

  • Elastically add new nodes or revive dormant nodes and they will sync to the latest without any additional wiring.
  • Create region-based depots to avoid transferring data more than once over long distances for global deployments.
  • Airplanes, cruise ships, drilling/mining locations and other remote disconnected deployments can operate with their latest pull of content, and sync up with Authoring when connectivity is available.

Figure 3: Elastic Delivery

In Crafter CMS, only approved content is published to the delivery environment. Crafter manages this by using two repositories for each project: one called "Sandbox", which contains work in progress, and the other called "Published", which represents approved, published work and the complete content history.

  • Authors use the Crafter Studio UI to review and approve content via workflow.
  • Crafter Studio takes care of moving approved work between Sandbox and Published repositories.
  • Delivery nodes monitor the published repository for updates.


Figure 4: Authors work in Sandbox. Delivery nodes pull from Published.

Benefits

Crafter’s Git-based publishing model provides your authoring team with a highly reliable, highly accurate publishing mechanism that is elastically scalable, globally distributable and supports multi-channel.  Crafter CMS’s architecture enables your team to reliably deliver your dynamic content on any channel, wherever and whenever it is needed.

Further, as we'll see in Part 2, this architecture enables content authors to work side-by-side with DevOps while new features and functionality are continually introduced without any disruption to the authors.

How do I set up this workflow?

The underlying Git repositories and related workflow for Authoring require no setup at all. When you create a project in Crafter Studio it automatically creates the local “Sandbox” and “Published” repositories. When you add a new “Delivery” node a simple command line script is run on that node that configures the node’s deployer to replicate and process content from the “Published” repository from authoring.

  • Instructions for creating a site via Crafter Studio can be found here.
  • Instructions for initializing a delivery node can be found here.

Conclusion

Content authors are non-technical users who need powerful but easy-to-use tools to help create, maintain and manage their digital experiences. Crafter Studio provides these users with a web-based application that makes it easy for content authors to achieve their goals. Under the hood, Crafter Studio leverages a powerful Git-based repository and deployment engine that provides authors with next-generation versioning and auditing mechanics as well as robust, elastic and distributed deployment.

Today’s digital marketplace is constantly evolving. Companies are always iterating on existing functionality with improvements and deeper integration or introducing new functionality and channels for their audiences. For today’s most innovative and competitive organizations, ongoing development and the move to DevOps is a fact of life. The companies that have the greatest success are those that have the right technology and processes through which they are able to achieve a high, sustainable continuous rate of constant, iterative development, integration and delivery, i.e., Continuous Integration and Delivery (CI/CD).

In the second half of this blog series, we will take a deep dive into how Crafter CMS seamlessly integrates with your CI/CD processes to enable your entire team of developers and content authors to innovate collaboratively without interfering with each other’s workstream.

 

Content Inheritance Basics in Crafter CMS

Crafter CMS supports content inheritance out of the box via a pluggable mechanism that allows developers to augment or override the default behavior.  In this article, we'll dig into the basics of this functionality.

What is Content Inheritance

Content inheritance is the ability of the CMS to centrally manage content values.  Updating this content in one place automatically updates the value everywhere else.  This goes far beyond simple “shared components” in the sense that, as far as the system is concerned, the inherited values, in fact, belong to the content in question.  In general, with inherited content you may:

  • Centrally define default values
  • Override previously inherited values (from some other level of the graph)
  • Delete or mute previously inherited values (from some other level of the graph)

Content inheritance is useful for a wide range of use cases including translation support, microsite management and common values (hotel count, employee count, CEO name, etc) that you want to use throughout the content but want to manage centrally.

Content Inheritance Basics

Content objects in Crafter CMS are essentially structured markup, XML by default, and house data authored via Crafter Studio by content authors. Content objects are typically structured as a tree which naturally suits the notion of inheriting from a parent (not to say that the inheritance mechanics are limited to that topology). Inheritance works as follows:

Assume we have two objects, one called Parent and one called Child and they’re set up as follows:

Parent: Below you’ll see a typical level descriptor which will be the parent of another object. You’ll note the level descriptor defines multiple elements that are common to everything at this level in the hierarchy and below it. This level descriptor defines a primary CSS file main.css, a common header component default-header.xml and a common footer component default-footer.xml.

    <?xml version="1.0" encoding="UTF-8"?>
    <component>
            <content-type>/component/level-descriptor</content-type>
            <display-template/>
            <merge-strategy>inherit-levels</merge-strategy>
            <objectGroupId>4123</objectGroupId>
            <objectId>41d1c0c5-bfc9-8fe8-2461-dc57a82b6cab</objectId>
            <file-name>crafter-level-descriptor.level.xml</file-name>
            <folder-name/>
            <cssGroup>
                    <item>
                            <key>/static-assets/css/main.css</key>
                            <value>/static-assets/css/main.css</value>
                            <fileType_s>css</fileType_s>
                    </item>
            </cssGroup>
            <jsGroup/>
            <createdDate>2/7/2016 19:40:03</createdDate>
            <lastModifiedDate>10/8/2016 19:58:30</lastModifiedDate>
            <defaultHeader>
                    <item>
                            <key>/site/components/components/header/default-header.xml</key>
                            <value>Default Header</value>
                            <include>/site/components/components/header/default-header.xml</include>
                            <disableFlattening>false</disableFlattening>
                    </item>
            </defaultHeader>
            <defaultFooter>
                    <item>
                            <key>/site/components/components/footer/default-footer.xml</key>
                            <value>Default Footer</value>
                            <include>/site/components/components/footer/default-footer.xml</include>
                            <disableFlattening>false</disableFlattening>
                    </item>
            </defaultFooter>
            <lastModifiedDate_dt>10/8/2016 19:58:30</lastModifiedDate_dt>
    </component>

Child: Below is the XML file of a page residing under the above level descriptor and set up to inherit from it. You'll note the definition of the merge-strategy as inherit-levels; this invokes the level-based inheritance mechanics that require Crafter CMS to look at the current and higher levels for files named crafter-level-descriptor.level.xml (this is configurable). You'll also note that this page doesn't specify the CSS file/group of files to include, nor does it need to specify the header or footer components.

    <?xml version="1.0" encoding="UTF-8"?>
    <page>
            <content-type>/page/one-col-parallax</content-type>
            <display-template>/templates/web/pages/one-col-parallax.ftl</display-template>
            <merge-strategy>inherit-levels</merge-strategy>
            <objectGroupId>9cef</objectGroupId>
            <objectId>001f0955-6da3-8b7a-4e6b-6b373139d0ba</objectId>
            <file-name>index.xml</file-name>
            <folder-name>child-page</folder-name>
            <internal-name>Child</internal-name>
            <navLabel>CHILD</navLabel>
            <title>Child Page</title>
            <headerOverlap>no-overlap</headerOverlap>
            <placeInNav>true</placeInNav>
            <orderDefault_f>12000</orderDefault_f>
            <description>This is the Child page.</description>
            <disabled>false</disabled>
            <createdDate>7/31/2016 16:52:39</createdDate>
            <lastModifiedDate>8/1/2016 18:55:09</lastModifiedDate>
            <body>
                    <h1>Hello World</h1>
            </body>
    </page>

Crafter CMS will invoke the inheritance mechanics implemented in the merge strategy inherit-levels to merge the page and the level descriptor, and the merge strategy will pull the elements defined in the level descriptor into the child page before handing the new model (XML) to the rendering system. This means that when the page renders, the model will automatically contain the meta-data defined in the parent level descriptor. In our example above, the page will automatically inherit the meta-data fields cssGroup, defaultHeader, and defaultFooter.

When an element is defined by the level descriptor and then subsequently defined by a child, the child’s definition overrides the level descriptor.

This mechanism allows you to define meta-data that flows down the information architecture of the site such that an entire site can have defaults, and those defaults can be overridden by sections or individual pages. Some examples of real-life use of inheritance:

  • Site logo
  • Global stylesheet and JS includes
  • Global headers and footers
  • Section meta-data (flows to all pages/subsections)

The inherit-levels mechanism allows you to set level descriptors at various levels of the information architecture with lower levels overriding upper levels.

What we discussed thus far is a single inheritance strategy implementation, inherit-levels, the code to which is available here: InheritLevelsMergeStrategy.java. There are more inheritance strategies implemented out of the box with Crafter CMS and you can build your own to suit your needs.

Out of the Box Strategies

  • single-file: No content should be inherited.
  • inherit-levels: Content from Crafter level descriptors (crafter-level-descriptor.xml) in the current and upper levels should be inherited.
  • explicit-parent: The parent descriptor to inherit is specified explicitly in the XML tag parent-descriptor.
  • targeted-content: The page will be merged with other pages in a targeted content hierarchy, including level descriptors. For example, /en_US/about-us will generate the following merging list: /en_US/about-us/index.xml, /en_US/about-us/crafter-level-descriptor.xml, /en/about-us/index.xml, /en/about-us/crafter-level-descriptor.xml, /about-us/index.xml, /about-us/crafter-level-descriptor.xml, /crafter-level-descriptor.xml.

Five Reasons Why You Should Use a Git-based CMS (Part 4 of 5)

In our previous posts we looked at Crafter CMS and its Git-based versioning (part 1), distributed repository (part 2), and deployment mechanics along with its decoupled architecture (part 3). In this post we'll take a deeper dive into a feature of Crafter's Git-based CMS that provides unparalleled support and speed for innovation: branching.

Imagine a single train track that stretches from Washington D.C. to New York.  How many trains can run simultaneously on this track?  How fast can the trains go? How much control over the order of arrival do you have?

  • You can run as many trains as you have room for on the track.
  • Any one train can only go as fast as the train in front of it.
  • The trains always arrive in the order they departed in.
  • If a train breaks down or gets put on hold, it’s nearly impossible to reorder.

With only one track, the railroad's ability to deliver success is throttled by the volume it can handle, and even when capacity is not an issue, it is completely dependent on good luck with respect to order, unplanned changes, and breakdowns. With railroads, the solution to this problem is implemented with switches and sidings. Sidings are branches in that single track that allow a train to pull off the main line, giving the controller the ability to regulate the order, speed, and quantity of trains on the main line.

The same problems of bandwidth, throughput, and order you see in the railroad example exist with projects that need to go through your traditional CMS.  Traditional CMS platforms have no ability to branch the content and code base. They are just like that single track from DC to New York.  This means you have very little control over the order, speed, and capacity of your development. Your CMS needs branching. Let’s explore this further.

Reason #4: Branching

Nearly every CMS still sits on an architecture and repository that was devised 20 years ago.  Back then innovation was important but the world moved a lot slower.  A single pipeline of features got the job done and the CMS was less of a bottleneck.  Today, in contrast, digital channels are at the core of many organizations’ strategy for serving the customer better and beating the competition.  The organization who moves the fastest often wins.  Agility is key.

Meanwhile, infrastructure costs have gone down while development costs continue to rise.  Balancing costs with volume and speed of innovation is a major challenge of our day.  Today it’s all about great DevOps. To make DevOps really work with CMS you need to be able to branch.

Traditional software development process has included branching for eons. Developers and DevOps have long since figured out that they need to be able to work in teams, isolate work and control the order in which work is merged into the critical path for go-live. The CMS track runs right alongside the traditional development track, and at some point, it merges and the last few steps require the CMS.  It’s only now that the demand for the speed of innovation has increased that the connective tissues between development and the CMS have been put under so much stress that they are completely failing.  The fact is that today, we need that same agility all the way through the CMS and right up to the very last step of production deployment.

Even if the majority of your development is outside the CMS you still need to integrate. Consider the following example:

Because the website needs to integrate with, and ultimately deliver the functionality of, the microservice, we need to perform development.  We also need to support daily content edits and continuous publishing. With a traditional CMS we have no option but to use multiple CMS environments to support this scenario.  Does this approach give us multiple tracks and control over our releases?  Yes, technically it does.  But practically speaking?  No.  Not at all.

As we learned before, moving content and code between CMS environments with traditional CMS architecture is extremely difficult. The process of spinning up environments and loading code and content into them is so difficult and time consuming for any DevOps team that most won’t even consider it unless absolutely forced to.  Even then the size of the team may not support the need. The rate of innovation crawls to a near stop.

To address this problem, we built Crafter CMS v3 on top of a Git-based repository. As a result, the Crafter CMS platform is built on a repository store that not only branches but is also fully distributed.  Not only are you able to easily control the order and rate of work, but it's a snap to move work from one environment to another.

Moreover, branching not only supports DevOps but also makes development easier.  Developers and authors can experiment and work on major features and other site enhancements in the safety of branch-based sandboxes that keep them from stepping on each other's toes.

Conclusion

If you want to quickly innovate with your website, mobile app and other content-rich digital experience apps, you will need multiple teams working on different features at the same time. You need control of who is working on what and the order in which projects will be delivered.  Having the capability to manage these concerns with agility is the key to innovating quickly.  Traditional CMS platforms don’t support the basic feature set that enables this. Moving most of the development outside the CMS only gets you so far. You need a CMS that supports your DevOps process with features like branching and distributed repository if you truly want to be able to move fast.

Stay tuned for our next blog entry to learn another major reason why you should use a Git-based CMS!

Match Highlighting for Search in Crafter CMS

Highlighting search terms in search results is a common requirement for many websites.  Crafter CMS builds on top of Apache Solr and makes implementing rich search and other query-driven experiences super simple.

In this tutorial, we’ll create a simple article search backend that highlights the search terms that were used within the results returned to the user.

Step 0: Prerequisites

If you haven’t gotten Crafter CMS set up and built your first site you can follow this tutorial to get started: Working with Your First Crafter CMS Web site.

Step 1: Build a content model for articles

The next thing we need is content to query against.  The first step in supporting content creation is defining the Content Type.  The Content type is the definition of the structure of a particular type of content.  In our example, we want to define the structure of an Article.

A simple article should have:

  • A URL
  • A title
  • An author name
  • A body

To define this we use Crafter’s Content Type management console:  Below you can see the example model including the fields and their types.

You can learn more about content modeling here Content Modeling in Crafter CMS.

Step 2: Create content

Now that you have your article content type defined you can create articles.  To create an article open the pages folder (we modeled the article as a page with a URL) in the sidebar and right click on the home page:

Step 3: Create a REST script to return highlighted results

Now that you have content you can write a RESTful controller and test it.  Let’s create a simple GET based REST controller.

  1. Open the Sidebar and locate the Scripts folder.
  2. Open the Scripts folder and navigate to the "rest" folder.
  3. Right click on it and choose "Create Controller".
  4. In the script name dialog, enter "search.get" and click Create.
  5. Enter the following code:
// build a query that matches article pages and, optionally, the keyword passed in as ?q=
def keyword = params.q
def queryStatement = "content-type:\"/page/article\" "

if (keyword) {
    queryStatement += " AND $keyword"
}

def query = searchService.createQuery()
query.setQuery(queryStatement)

// enable Solr highlighting on the body_html field and wrap matched terms in <b> tags
query.addParam("hl", "true")
query.addParam("hl.fl", "body_html")
query.addParam("hl.simple.pre", "<b>")
query.addParam("hl.simple.post", "</b>")

// execute the query
def executedQuery = searchService.search(query)

// return the match count, the matching documents and the highlight fragments as JSON
def matches = [:]
matches.found = executedQuery.response.numFound
matches.articles = executedQuery.response.documents
matches.highlights = executedQuery.highlighting

return matches

Step 4: Execute the Script

In a browser, go to http://SERVER:PORT/api/search.json?q=AWORDTHATEXISTSINRESULTS

example: http://localhost:8080/api/search.json?q=bacon
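Since the controller returns JSON (the matches map built above), a browser client can consume it directly. Here is a minimal sketch, assuming the default localhost:8080 setup used in the example URL:

// Minimal sketch: consume the search controller from the browser
fetch('http://localhost:8080/api/search.json?q=bacon')
  .then(response => response.json())
  .then(matches => {
    console.log('Articles found:', matches.found);
    // matches.articles holds the matching documents;
    // matches.highlights maps each document to its highlighted body_html fragments
  });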

Work Your Own Way with Crafter CMS (Series Part 1): Step-through Debugging

Most CMS platforms do a decent job of simplifying content and digital experience creation and editing for non-technical content managers.  The challenges really start once you need to innovate and development is required.  Traditionally, CMS platforms have been pretty bad for developers.  They require a lot of CMS-specific knowledge and don't integrate with developer tools and processes.
Here are 7 things that developers really want with a CMS:

  1. Let me work locally with my own tools like my IDE and my source code management
  2. Let me leverage my existing skills.  I want a low learning curve. Don’t make me learn a new, niche framework
  3. Let me work in teams on multiple projects at the same time without interfering
  4. Let me maintain a real development process
  5. Make the integration with the CMS seamless
  6. Don’t make me do builds
  7. Don’t make me do heavy deployments

In this installment of the Work Your Way series we're going to tackle item #1 (let me work locally with my own tools like my IDE and my source code management).   Let's start with some background: Crafter CMS uses Git as its primary content store.  That's the foundation of the solution for developer desire #1. A developer can mount a local clone of a Crafter CMS project directly with their IntelliJ, NetBeans, Eclipse or other IDE.  That means they can use their preferred development tools to edit and debug code and templates.  And as they work, all of the changes they make are tracked by Crafter CMS via its native Git support.  Sounds awesome, right?  Let's learn how to get set up:

Step 1: Get a local copy of Crafter CMS running

You are going to use your IDE to update and debug your code and templates.  You'll want a local instance of Crafter CMS running so you can execute the code and attach your debugger to it.

To install and get Crafter CMS running (authoring environment) locally follow this guide:
http://docs.craftercms.org/en/3.0/getting-started/quick-start-guide.html

Step 2: Start a Crafter CMS project

Now that Crafter CMS is up and running, let's create a project so we have a place to work.  We'll create a local project for the sake of this article; however, as we'll learn later in this series, it's possible to clone a remotely managed project as well.

To create your CMS project follow this guide:
http://docs.craftercms.org/en/3.0/getting-started/your-first-website.html

Step 3: Mount your IDE on top of your Crafter CMS project

For this article, I’m going to use IntelliJ.  Any IDE that allows you to connect to a remote JVM via Java Debug Wire Protocol (JDWP) should work.  Eclipse and Netbeans are other solid IDE options.

  1. Within IntelliJ, create a new project:

  • Select Create New Project

 

  • Choose Groovy

  • A: Configure your project name to something you like
  • B: Browse to the Git repo “Sandbox” under your “CRAFTER-INSTALL/data/repos/sites/PROJECTID/sandbox”
  • C: Change the module name from Sandbox to your project name

 

  • Once your project is created, open it to see all the files.  IntelliJ automatically recognizes you are sitting on top of a Git repo and will allow you to do Git operations right from the IDE.
  • Note that a Groovy project creates an “src” folder.  You can delete this.  You don’t need it.

  • Right click on your module and add a new file called ".gitignore"
  • Do not add this file to Git when prompted.

The contents of the .gitignore file should be as follows:

*.iml
.gitignore
.idea/*
  • This will tell Git and Crafter CMS to ignore your IDE project files

  • If you were to run a “git status” command in your project sandbox you would note that Git is totally satisfied and ready to rock 🙂

 

Step 4: Set up step-through debugging

Great, now that we've got our IDE sitting on top of our project and we've got the Git integration configured, we're ready to set up step-through debugging.

  • Shut down Crafter CMS
  • Add the following lines to CRAFTER-INSTALL/bin/apache-tomcat/bin/setenv.sh (or setenv.bat)
    • Syntax in .bat will be slightly different

export CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,address=localhost:8000,server=y,suspend=n"
  • Start Crafter CMS

Back in our IDE we can now configure and start our remote debugging session!

  • Go to “Run” and select “Debug”

  • Since this is a new project we’ve got to configure our debugging connection.  The next time you click Debug you’ll just select the existing configuration and click Run.

  • Click the + (plus) icon to create a new configuration
  • Select “Remote”

  • Give the debug configuration a name
  • Change the port from 5005 to 8000 (to match the value in setenv.sh)
  • Click “Apply” to accept the changes
  • Click “Run” to attach to the JVM running Crafter CMS

  • Congrats! If your JVM configuration is correct and your ports in the debug config match, you’ll see the “Connected” message in the console (as shown above) when you click “Run”.

Step 5: Start debugging

Now that we’re connected we can start stepping through code in our Groovy Scripts.

  • Open the “Scripts” folder in your project
  • Browse down through classes, org, craftercms, sites, editorial and open SearchHelper.groovy
  • Set a breakpoint at line 56.  This method is used by the Home Page controller of the editorial website we created in step 2.
  • In a browser, open the Crafter CMS preview for the homepage of the website we created

  • When you load the homepage in Crafter Studio the rendering will pause

  • IntelliJ’s window will typically come to the foreground automatically.
  • The thread has been paused and the IDE is now in control of the thread execution.
  • You can now use the step-through debugging tools in the IDE to walk through the code.
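
For reference, the method you land in will look something like the sketch below.  This is a hypothetical simplification, not the actual SearchHelper.groovy from the editorial blueprint (the class shape, method names and search API calls shown here are assumptions); it is only meant to illustrate the kind of Groovy code you can now step through with the debugger.

// Hypothetical sketch of a search helper -- the real SearchHelper.groovy differs
class SearchHelper {

    def searchService  // assumed to be handed in by the calling page controller

    def search(String contentType, int rows) {
        def query = searchService.createQuery()            // build a query object
        query.setQuery("content-type:\"${contentType}\"")  // filter by content type
        query.setRows(rows)                                 // limit the number of results

        def result = searchService.search(query)           // good breakpoint target: inspect the raw response
        return result.response.documents                   // hand just the documents back to the caller
    }
}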

Step 6: Magic

Found a bug?  Here’s where things get really fun.  We’re about to see the benefit of bringing Git, Groovy and your IDE together in one place.  Fixing the bug and sharing that fix with the rest of your team is a breeze:

  1. Once you understand the bug and you know what code you want to change, click Play to let the thread complete.
  2. Using the IDE, fix the Groovy code.
  3. Simply refresh the browser again to test.  Step through and verify things are working.
  4. Using the IDE to interface with Git, commit the code and push it forward to your teammates.
  5. Find something fun to do or work on.  You have a lot more time on your hands now that you are working in a truly integrated, no-compile required, easy to code and debug CMS 🙂

 

 

Working with Content as XML/DOM in Crafter CMS

In a previous article (Querying Content in Crafter CMS) we talked about how you can query content in Crafter using content and search services.  Under the hood, Crafter CMS saves all of the content you create through the authoring interface (Crafter Studio) as XML.  This XML is published from Crafter Studio to the dynamic delivery engines (Crafter Engine), where it is available through the content services and is also indexed in Solr and thus available through Crafter’s search services.  There are times in the rendering engine (Crafter Engine) when you may want to access values in the XML directly and work with the content through an XML Document Object Model (DOM) API.

In the example below I will show you how you can:

  1. Load a content object
  2. Get access to its DOM
  3. Query for DOM elements within the parent object
  4. Process the results

To illustrate this, let’s imagine a use case where content authors need to manage values that might change, such as the CEO’s name or the number of offices, throughout a site.  They want to pepper these values throughout the site but update them quickly from a single place.  To do this we would use a macro/placeholder approach in the content: the author enters a macro/placeholder wherever the value should go.

Example:

Acme Anvil Co has [Office_count] offices throughout the world.

To manage the Macro/Value pairs we’ll give the authors a content type:

Once you have your macro type defined you can create a content object based on that type:
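
Behind the scenes, Crafter Studio stores that component as XML.  The exact element names depend on how you name the fields in your content type; assuming a repeating group with “macro” and “value” fields, the stored XML would look roughly like the sketch below (the field names and values here are illustrative, not the exact output):

<component>
  <content-type>/component/macros</content-type>
  <objectId>5ef8d326-3e85-9f67-e8d6-47fb5dfba21d</objectId>
  <macros_o item-list="true">
    <item>
      <macro>Office_count</macro>
      <value>120</value>
    </item>
    <item>
      <macro>CEO_name</macro>
      <value>Jane Smith</value>
    </item>
  </macros_o>
</component>

Note how each macro/value pair becomes an <item> element; this is what the controller’s “//item” XPath query below selects.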

Now that we have created an author-managed list of macros and their corresponding values, let’s create a basic controller (with some assumptions to keep things simple) to process the content in a component and replace the macros found in the content with their values.

Controller example:

// Load the content object containing the macros key values
def macroItem = siteItemService.getSiteItem("/site/components/5ef8d326-3e85-9f67-e8d6-47fb5dfba21d.xml")

// Get the unprocessed content
// Here we'll put some assumptions in the code.  In reality you may want 
// to process all the content fields in the content.
def content = contentModel.queryValue("content_html")
def contentUpdate = content
// Query the macros and iterate over them performing the replace. 
// this is where we get the DOM and leverage the DOM API
def macros = macroItem.getDom().selectNodes("//item")
macros.each { pair ->
 def macro = pair.valueOf("./macro")
 def value = pair.valueOf("./value")
 contentUpdate = contentUpdate.replace("["+macro+"]", value)
}

// Make the processed content available to the template
templateModel.content_html = contentUpdate

 

You can easily tie this controller into any template by adding it to the top of the template:

<@controller path="/scripts/controllers/macro-controller.groovy" />

Most of the time with Crafter CMS, there is no reason to access the DOM directly.  From time to time you may find it more convenient or necessary.  Now you know how!

Why Developers Should Care About CMS

As developers, we’ve got a strong handle on how to manage and deploy our code assets.  Yet every one of us, at some point in our application build, has said, “What about this text? What about these images? Where do these belong?”  That’s pretty universal.  Nearly every single application today has content in it.  Be it a web app or a native app, it’s full of strings, images, icons, media and other classes of content.

This content doesn’t really belong in our code base — because it’s not code.  These non-code assets make us as developers pretty uneasy.  We know that at some point a business user is going to ask us to make a change to one of those strings and we’re going to spend hours of build and deploy cycles to handle a 30-second code change.  We know that at some point we’re going to need to translate that content.  We know at some point we’re going to replace this UI with another one.  We know all these things — and we know that leaving that content in the code base, even if it’s abstracted into a string table or a resource bundle, is going to come back to haunt us.  No matter the abstraction, it’s part of the build: developers need to update it, and developers and systems folks need to deploy it.

Smart developers separate code from content.  They make sure that the content in the application is completely independent of the build and deploy cycle of the application itself.  Where appropriate, they make sure non-technical business users have access to the externalized content and can update it and publish changes at any time.

Consider the following scenario: your application is for an insurance company.  Along comes a major regulation change.  Now your application (and every other one) needs to change its language accordingly.  If the content is separate from the application, the response time and cost to make changes are minimal.  It’s business as usual.  If the content is not separate, you are looking at months of builds, testing and deployment that cost time, opportunity and a tremendous amount of money.

What we’re really talking about is application architecture.  Making the right decisions has significant impacts on our development teams and processes and, as we’ve illustrated, on our organization’s responsiveness and bottom line.

Solving the Content Problem Through Application Architecture

This problem is very similar to another problem: websites.  Waaaaaaaaaaaaay back in the 1990s people built and maintained entire websites in HTML by hand.  You had to be a programmer who understood the markup, how to update it and ultimately how to deploy it to a server once complete.  This model didn’t work.  Once businesses started using the web to communicate with the masses, marketing departments started getting involved.  Marketers wanted constant updates.  Developers wanted to write code, not update copy.  They needed a solution.  The Web Content Management System (WCM for short) was born.  WCM offered a new architectural solution for managing websites: developers put the code into templates that contain placeholders for the content, and authors get a user interface to manage and edit the content at any time.  The content is managed separately from the code (the templates) and the two are brought together to create the final product.  Everyone wins.

We need the same capabilities that WCM provides but for our applications.

Why Not Use a Traditional CMS?!

Simple.  Traditional CMS platforms have the wrong underlying design.  Most CMS platforms were built on, or are based on, the same design as CMSs built 20 years ago.

  • They were built to manage pages:  Most CMSs do not have a generic concept of what content is.   These systems were built to manage pages.  You might be able to contort the system to get what you want from it but it will be a hack. You need a CMS that is agnostic to the kind or type of content you need to manage.
  • They rely on the wrong type of data tier: RDBMS/SQL databases were the primary backends for data regardless of the problem and open source options like MySQL were readily available.  You need a CMS that leverages a data tier that is more distributed and flexible.
  • They are built on the wrong technology: Many CMS platforms are built in PHP.  That’s fine for certain use cases, but frankly, Java has a larger community, more library support and a lot of powerful platforms like Solr, Elasticsearch and Hadoop (among many others) that plug into a Java CMS perfectly.  Why run multiple tech stacks to accomplish a single goal?
  • They aren’t secure: Most CMS platforms couple authoring and delivery together in a single database.  That means work in progress (like pending policy updates) is available in the delivery runtime to anyone who finds a way into the database.  That can spell disaster in the event of a hack. Large companies and governments have suffered expensive and dangerous leaks due to this kind of coupled, insecure architecture.
  • They don’t support multi-tenancy: Most of the CMS platforms out there are not multi-tenant.  If you’re going to back apps with a CMS you need a CMS that will let you properly segregate and secure content between apps.  No one wants to do a CMS install PER app.
  • They don’t scale:  RDBMS is hard to scale.  You either cluster or replicate.  Both have serious issues when it comes to really large, global and auto-scale type deployments.  Supporting an app with your CMS will demand scale. Design for scale from the start.
  • They don’t fit into your development tools and process: When most CMS platforms were designed, content authors were the only audience that mattered.  Today it’s different.  Developer tasks like managing templates, CSS, JavaScript, content models and other artifacts need to be just as easy — and they need to fit in with your existing workflow.  That means Git source code management, Continuous Integration (CI), integration with your IDE, and the ability to fork and support multiple teams simultaneously.

If Not Traditional CMS, Then What? A Modern CMS, That’s What!

To get different results you need a different design.  You need a modern CMS:

  1. Built with the right technologies like Java, Spring, Groovy, Solr, Git.  These are the technologies that will be easy to hire for, will fit into and integrate with any enterprise and will have the out of the box capabilities and horsepower to back any scale deployment.
  2. Decoupled architecture that cleanly separates authoring and content delivery responsibilities onto separate, independently securable and scalable infrastructures connected by a publishing process.  Decoupled architectures don’t have work in progress outside the firewall, and they are much easier to scale.
  3. Built on scalable, document-oriented data tiers like XML+disk/Git, Cassandra and others.  Disk-based tech is much better for streaming rich content and supports global distribution better because it doesn’t need to be clustered.  At a minimum, you want something that’s document-oriented and that distributes globally better than a SQL-backed RDBMS.
  4. Content type agnostic:  From strings to web pages, to videos and virtual reality experiences.  The CMS needs to accept any kind of content.  That comes down to how content is modeled and how it’s stored.  Does the CMS store content in a SQL database or a JCR repo?  Red flag.  Can’t provide Headless Content / APIs AND rendered content?  Another Red Flag.
  5. Is Multi-Tenant. You need to be able to support many customers/apps on a single install. Either you support it or you don’t and today it’s a must!
  6. Supports authors AS WELL AS developers and DevOps. Developers don’t want to work in some clunky CMS.  They want to work locally in an IDE, then be able to work with their team and ultimately go through the entire workflow out to production.  A modern CMS gives them this.
  7. Open Source.  Open is better than closed. The debate is over. Is it a must?  No. But you’ll get more for your money and you’ll get a lot more innovation at a faster rate if you leverage open source.

Where can I find a Modern CMS?!

Check out Crafter CMS!  Crafter CMS is a modern, 100% open source, Java-based CMS built on Git!  It’s easy for authors, amazing for developers and awesome for DevOps!  Crafter doesn’t sit on a SQL database or a JCR repo like so many CMS platforms out there.  It’s built on Git, Java/Spring, and Solr.  It brings the power of Git to CMS!  Authors have true versioning.  Developers and DevOps can work the way they are used to, with the tools they are used to, based on the technologies they already know.  Crafter CMS is 100% content-type agnostic, highly secure, high performance and scales like nothing else.  If you’re building any kind of application or digital experience, you have content, and you NEED Crafter CMS.  Great experiences are crafted!

 

Active Cache a RESTful Response in Crafter CMS

This article will demonstrate how to create a RESTful service in Crafter Engine that has predictable performance and reliability when your service relies on an external service.

Any time your service depends on another service there is cause for concern.  You can’t control the performance or the availability of the external service.  Further, if the response of the external service is not unique across calls, there may be no real need to call out to it on each request you receive.

In this case, what you want to do is cache the response from the external service and have your service attempt to get the content from the cache.  Active Cache is a built-in Crafter CMS capability that makes building these sorts of solutions much easier.  You tell Active Cache what you want, how to get it and how often to refresh it in the background.  From there on, you simply ask Active Cache for whatever the current response is.

Step 1: Create a REST Controller

  • Under Scripts/rest right click and click create controller
  • Enter my-data.get as the controller name
  • Add the following code to the controller:

 import org.craftercms.core.cache.CacheLoader
 import org.craftercms.core.service.CachingOptions
 import groovy.json.JsonSlurper

 def cacheService = applicationContext["crafter.cacheService"]
 def cacheContext = siteContext.getContext()
 def myCacheKey = "aServiceCallResponse"
 def loader = new ExternalServiceLoader()

 def value = ""
 def responseItem = cacheService.get(cacheContext, myCacheKey)

 if(responseItem == null) {
    // item is not cached, get the value
    def myObject = loader.load()
    value = myObject

   // cache the value with a loader to periodically refresh its value
    def cachingOptions = CachingOptions.DEFAULT_CACHING_OPTIONS
    // def cachingOptions = new CachingOptions() // define your own options
    // cachingOptions.setRefreshFrequency(1) // set the number of ticks for refresh

    try {
        cacheService.put(cacheContext, myCacheKey, myObject, [], cachingOptions, loader)
    }
    catch(err) {
        logger.error("error adding ${myCacheKey} to cache: ${err}")
     }
 }
 else {
    value = responseItem
 }

 return value

/**
 * Define an active cache loader that will be used to prime and then
 * periodically refresh the cache with the latest data from an external
 * service.
 */
 class ExternalServiceLoader implements CacheLoader {
    Object load(Object... parameters) throws Exception {
       def externalServiceHost = "http://api.population.io/1.0"
       def externalServiceURL = "/population/United%20States/today-and-tomorrow/"

       // call the service
       def response = (externalServiceHost+externalServiceURL).toURL().getText()

       // parse the service's JSON response into an object
       def result = new JsonSlurper().parseText( response )

       return result
    }
 }

Step 2: Execute the Service

  • Open a browser and hit the following URL: http://localhost:8080/api/1/services/my-data.json
  • Note that your hostname and port values may differ from the example
  • You should see the JSON results returned from the external service