s_code.js – Web Analytics for Developers

Know Your Version


I received a question via Twitter the other day about how to find out which version of the tracking code a site is running.

That is a really good question! Sounds simple but there are a couple of ways to get to the version. And as an added bonus, some of those ways are good, while others aren’t.

So let’s look at some ways, starting with the bad.

(tl;dr — best way is to look at the URL)

Unsafe — Look at Javascript on Page

When you (or one of your predecessors) first implemented SiteCatalyst or Adobe Analytics, you received some “page code”, either via Adobe consulting or by downloading it from the Admin Tools.

This snippet of Javascript code contains a version!

[Screenshot]

Version in Page Code

I would bet that when (if) you updated at some stage, you probably didn’t touch this code, so the version here is more likely to tell you what version was current when you implemented than anything else.

In my case, I can see H.21, which dates back to early 2008. Yup, sounds reasonable.

Ok — Look at s_code.js File

There’s a version at the top of the s_code.js file.

[Screenshot]

Version at the Top of the s_code.js File

Just like the version on the page, this is probably not reliable. If you update your s_code.js file the way most people do (i.e. replace only the core Javascript code), the comment at the top will soon be hopelessly outdated.

But!

Open the file in your favourite editor and search for s.version= and you will find it.

[Screenshot]

Version s_code

If you are already using the AppMeasurement for Javascript, it’ll look like this:

[Screenshot]

Version AppMeasurement
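Either way, what you find is a simple assignment, something like one of these (the version numbers are made-up examples):

	s.version = 'H.27.5';  // legacy H-code library
	s.version = '1.4.1';   // AppMeasurement for Javascript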

Unless someone in your organisation messed with the s_code.js file, this is the actual version of your tracking code.

Best — Look at Tracking URL

If you go to the website in question and look at the tracking URL in your favourite tool, you can also see the version:

[Screenshot]

Version in the Debugger

You should know by now that I am a big fan of Charles.

[Screenshot]

Version in Charles

The version you see here is the same as the s.version= value we mentioned in the section above. It is therefore equally reliable.
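The version appears as a path segment in the request URL, something like this (host and report suite ID are made up):

	http://metrics.example.com/b/ss/myrsid/1/JS-1.4.1/s12345678

Here, “JS-1.4.1” is the version; an H-code library would show something like “H.27” instead.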

So why is this the best option?

  1. It shows you the correct version
  2. No need to find the s_code.js file
  3. Works for all tracking: Javascript and AppMeasurement libraries, even hard-coded pixels (if they were hard-coded correctly, of course)
  4. Can easily be checked within the browser, using “Inspect”, “Firebug”, “Developer Console” or whatever it is called.

14 or 15?

Ha!

Now that is a totally different question!

Whether you are on SiteCatalyst 14 (unlikely) or SiteCatalyst 15 / Adobe Analytics is something that you can only see when you log in.

So please do!

A tool is only good when people use it.

I am almost 100% sure though that you will be on SiteCatalyst 15 / Adobe Analytics.


Filed under: Javascript, Page Code, Principles Tagged: charles, debug, javascript, s_code.js

A Short History of Processing a Hit


In the article about Modifying Data Server-Side, we gave a brief list of the steps that every bit of data has to go through before your friendly marketer will see it in her reports.

Let’s expand on that and also mention the different points where you can add information or retrieve it.

Let’s start in your realm, the page or app.

On the Page

This is where you primarily enter information into Adobe Analytics. You “capture” data.

How you do that is up to you, of course, and it depends on the environment as well as the goals.

This stage ends when you call the s.t() or s.tl() method and the core Javascript code takes over.

In Transit

The next step is for the core code to call the s.doPlugins callback which allows you to automate things that should happen on every hit.

The data is eventually sent to “the cloud”. To be exact, it goes to one of the collection servers somewhere in the world.

At this point, you can very easily see what happens, which makes this the perfect spot to debug by looking at the URL (do I have to repeat that I use Charles for that? Or did I mention that before?).

This is where your — the developer’s — involvement with putting data into the system usually stops. But don’t stop reading, because it makes sense for you to know how your marketer can influence & change (and potentially mess up!) data later.

There will also be some points where you can get data out of the system for further use, and your friendly marketer might ask you to do that.

In the Cloud

Once the data arrives on Adobe servers, it goes through a number of processing steps. You might remember this diagram:

[Screenshot]

Data Processing Steps

There is actually more to it.
  1. Collection server resolves any Dynamic Variables
  2. Collection server does an IP-based geo-lookup
  3. Collection server passes data on
  4. System applies Processing Rules
  5. System applies VISTA Rules

At this point the data in the hit has been modified to an extent, but the view is still hit-based, i.e. so far the system hasn’t applied any prior knowledge.

The upcoming “Firehose” feature that allows you to receive raw hit data in realtime sits at this point. In other words: Firehose will give you a stream of hit-level data that has gone through all the steps we have seen so far.

  6. System applies the “Visitor Profile”, which basically means it adds data it collected in the past, like eVar values. The visitor profile is matched based on the “visitor ID” (which I really need to write about!)
  7. System applies Marketing Channel Processing Rules (for the Marketing Channel reports)

So right now the hits have been enriched with data that is tied to the visitor via the visitor ID.

This is the point where data gets split: it goes into Data Warehouse (which means that Data Feeds contain what we have now), plus it is sent into a further processing queue for Adobe Analytics.

  8. System passes data into report servers
  9. Report servers process and store data for easy access in Adobe Analytics
  10. Friendly marketer navigates to a report
  11. System retrieves data
  12. System passes data through SAINT and displays report

The reports that your friendly marketer pulls are like a pre-defined view on aspects of the data that you have sent into the system. They are designed to help her find answers and make decisions.

And there is one more thing that we need to mention, one more thing you can do to help her: you can use the Reporting API to pull data for her, and you can make fancy displays or dashboards for her. The data that you pull out of the Reporting API has gone through all 12 steps.

Don’t forget to use the all new version 1.4 for added goodness.
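As a rough sketch of what such a pull could look like (method names from the 1.4 REST API; the report suite ID, dates and metric are placeholders):

	// Hypothetical report description for the 1.4 API's Report.Queue method.
	var reportDescription = {
	    "reportSuiteID": "myrsid",           // placeholder
	    "dateFrom": "2014-04-01",            // placeholder
	    "dateTo": "2014-04-30",              // placeholder
	    "metrics": [ { "id": "pageviews" } ]
	};
	// POST { "reportDescription": reportDescription } to Report.Queue,
	// then poll Report.Get with the report ID you get back.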

*phew*, done.

If you are at the EMEA Summit in London in two weeks, you might want to come and see Bret Gundersen and myself at 11.20 on Thursday when we’ll talk about Processing Rules. Bret will also show a slide that’ll sum up a lot of what I wrote above.

Looking forward to seeing you there!


Filed under: Principles Tagged: api, charles, classifications, context data, data layer, debug, eVar, javascript, link tracking, plugins, processing rules, saint, simplicity, s_code.js, tag manager, variables, visitor ID

Visitor IDs & Visitor Profile


For months now I must have thought that an article about visitor IDs is really overdue. It is, I think, the last remaining fundamental aspect of Adobe Analytics that I have not explained.

Also, in step 6 of my post on processing hits, I mentioned the “Visitor Profile” and didn’t really explain it.

Bad blogger!

visitor ID

When your friendly marketer looks at how she spends her money and what she gets back for it, she likes to analyse actual people rather than Page Views or Visits. It is really important for her to know that someone who came to the site via paid search two weeks ago has now bought that camera or asked for a call from a sales rep.

Looking at those two events together will tell her whether she has spent in the right places, essentially. Her view on your site is therefore heavily biased towards metrics or systems that analyse people rather than hits. Keep that in mind.

You, of course, are a developer. You know that HTTP is stateless and that if you want to remember anything on a web site, you need to store it somewhere. That somewhere might be a session, a cookie or a bunch of other places. Now sessions do usually time out, so that leaves us with cookies, right?

So, does SiteCatalyst / Adobe Analytics store everything a visitor does in cookies? No, not really. That would probably break most browsers at some point.

But it does set one very important cookie called “s_vi”.

This cookie contains a tiny bit of meta data plus a visitor ID.

[Screenshot]

Contents of an s_vi Cookie in Chrome

See those “|” in the cookie value? They delimit the visitor ID.
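A typical value looks something like this (the ID is made up, and the exact layout may vary):

	s_vi=[CS]v1|29ABF14C85013A2D-60000104A0004392[CE]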

This visitor ID is how Analytics identifies a visitor. It is sent with every hit (of course, it’s a cookie!) and the system picks it up and records it along with the rest of the “variables” you send.

Woah, wait! I hear you say, cookies are per browser!

They sure are. Which is why if you use multiple — say 3 — browsers on your machine, for Adobe Analytics you will be multiple — 3 — visitors. And if you also visit the site in question with other devices, say a tablet and a phone, you will be even more visitors — at least 5 by now. This is something all analytics vendors are currently trying to solve.

Whimsical bonus tip: if you copy the s_vi cookie across all of your browsers, you will actually be counted as a single visitor across all of them! Just in case you’d want to.

3rd-party vs 1st-party

We all know that most browsers these days are picky and do not set cookies just like that. They usually refuse to write 3rd-party cookies which means no visitor ID.

Luckily, there are two fallback mechanisms.

  1. If your site uses version H.25.3 or later of the core Javascript code, you will also notice a cookie called “s_fid”, which is set via Javascript and therefore always 1st-party. The system uses the fallback visitor ID in s_fid if it can’t find s_vi.
  2. Should you be on older code, or should the browser just flatly refuse to write any cookie, the system will use the HTTP User-Agent field combined with the visitor’s IP address to calculate a visitor ID. Yes, that is a lot less accurate.

A lot of customers do therefore set 1st-party cookies. I did write about the difference briefly in the article on tracking vanity URLs but I guess I could do more. *scribbles note*

You can find more about visitor identification in the Adobe Analytics Implementation section of the online help, including information about the new “Marketing Cloud Visitor ID Service”, which will allow you to use one visitor ID across the Marketing Cloud. Definitely recommended for fresh installations and a topic I will come back to soon.

You can also provide your very own visitor ID if you want, but for most people and in most cases, this is not recommended. We’ll come back to that as well.

Visitor Profile

At this point, you can understand how “persistence” works for eVars, right? Whatever value is passed into the system, the back-end stores it according to the settings of the eVar. Over time, there will be a lot of data associated with any given visitor ID.

This is what we call the “Visitor Profile”.

Note that the Visitor Profile stores values on two levels, really: visitor level and visit level. The former persists forever (as long as your cookie lives), the latter expires when the visit does.

There are two built-in reports that illustrate this:

The “Referring Domains” & “Original Referring Domains” reports under “Traffic Sources” — while the former is stored for every visit, the latter is attached at the Visitor level, meaning as long as the system sees you as the same visitor, it’ll report your first ever referring domain.

As an example, say you found this blog via a link on Google but have since added it to your feed reader and usually come directly. Your Original Referring Domain would always be “google.com”, whereas your Referring Domain would have been “google.com” once and “Typed/Bookmarked” for all your visits since. Makes sense?

Going back to the processing steps, step 6 is where all this stored information about the visitor is applied. And what does “applied” mean, exactly?

Think about Success Events and let me explain using an example.

Whenever something important happens on your site — say a visitor shared an article — you send a corresponding Success Event — say event15, called “Content Shares”.

You also try to encourage visitors to your site, say via social media. Your friendly marketer wants to know whether your posts actually bring people to the site and whether those people subsequently share content (VIRAL!!! Oops, sorry.).

Using the basic principle of Campaign tracking, you capture which post brought visitors to the site at the time when they land. You store that in the s.campaign “variable” (or rather a plugin in s.doPlugins() does that automatically).

I said above that all values that you send into the system are stored in the Visitor Profile, and that obviously includes the tracking code you just sent into s.campaign. So far so good.

Now when the visitor shares an article, the hit contains event15, but not the tracking code, right? This is where step 6 comes in: when the system sees Success Events, it applies them against the values that are currently stored in the Visitor Profile for all eVars that are currently enabled. Because s.campaign is an eVar, the tracking code you sent at the beginning of the visit now gets one point.
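A minimal sketch of the two sides (assuming the getQueryParam plugin and a hypothetical “cid” URL parameter):

	// On landing, in s.doPlugins: capture the tracking code from the URL
	s.campaign = s.getQueryParam('cid');

	// Later in the visit, the share is tracked with the event only;
	// step 6 attributes it to the campaign stored in the Visitor Profile
	s.linkTrackVars = 'events';
	s.linkTrackEvents = s.events = 'event15';  // "Content Shares"
	s.tl(true, 'o', 'Content Share');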

The resulting report looks something like this:

[Screenshot]

Sample Report with Campaigns and Success Event

I’m afraid the numbers are exaggerated, but you get the point.

In essence, the Visitor Profile is what makes props and eVars different. Values for eVars are stored in the Visitor Profile, values for props are not.

How long values for eVars are stored and what happens when you overwrite them is controlled by your friendly marketer via the management console in Adobe Analytics. We will explain those settings another day.


Filed under: Principles Tagged: campaigns, cookies, eVar, plugins, s_code.js, s_vi, visitor ID

The Visitor ID Service


What I write about mostly on this blog has to do with analytics, specifically Adobe Analytics, formerly known as SiteCatalyst.

But this is just one tool in the large collection your friendly marketer uses to do her job. Analytics allows her to capture data for analysis and observation. She can also use the data to drive aspects of her site, such as personalisation.

Other parts of her toolkit allow her to manage SEM spend, email marketing, testing, targeting, site search, social presence and a ton of other aspects of customer interaction.

Most vendors have realised that offering more of that toolkit makes for bigger deals and easier integrations, so almost all are now offering cloud-based solutions that cover more than one angle. Adobe is one of those vendors, and the offering is called the “Adobe Marketing Cloud”.

(Sorry for the long-winded introduction, I just want to make sure I explain when and why using the Visitor ID Service is a good idea.)

So when your marketer uses more than one tool, integration makes sense. Obviously.

Let’s look at an example.

Let’s say she is using Adobe Analytics and Adobe Target. Her analysts have sifted through a lot of data about how customers acted over the last months, and they think they have discovered a segment of visitors who would spend more money if only they were presented with more relevant banners across the site.

So your friendly marketer wants to try, and for that she needs to take the segment her analysts built and use that in Adobe Target.

Technically

Right, that’s it for the preliminaries; now we’re going back into the basement, where light is neon and bits dance in heavy metal racks and through bright, yellow cables.

What does “taking a segment and using it” actually mean? How is a segment defined?

Ah! Good questions you ask, young apprentice!

Segments are essentially defined by rules, any kind of rules. They usually contain visitors, of course, because visitors are what your friendly marketer really cares about.

So using a segment in another part of the marketing cloud means that this other part must be able to identify each visitor that is part of the original segment as such.

Remember how we identify visitors? Yes: the visitor ID!

Note on privacy: just to be explicit here, the tools do not have to know who these visitors are! It is enough to recognise them via the same, totally anonymous ID in Analytics and Target. I think this is an important distinction, mostly because in some markets in Europe, collecting any personal data is highly restricted.

Example: I forgot my shopping bag in a corner shop, so I went back to collect it about 5 minutes later. The shop keeper remembered my face and gave me the bag. He doesn’t know who I am, but he was still able to identify me as the person who forgot the bag.

History

The Adobe Marketing Cloud wasn’t built from scratch. It came to life over time as Adobe (and Omniture before that) built some tools and acquired others, then integrated all of them. Those individual tools all had to identify visitors when they were standalone tools, and so they all had their own mechanism for doing so.

I wrote about how Adobe Analytics does/did it two weeks ago, so you should know that.

And today I want to show you the way forward, the Visitor ID Service.

The idea is that in the future, all the parts of the Marketing Cloud should be using the same visitor IDs. That would enable them to integrate very tightly, at a visitor level. Meaning: Find a segment of visitors in one tool, do something with it in another. Bliss.

I think we all agree that is a good thing, and I know your friendly marketer wants it, so the remaining question is: how do you implement it?

Prerequisites

There are some things you need to check:

Your company must be enabled for the Marketing Cloud. You should probably ask your friendly marketer or analytics manager to work on that.

If you are using AppMeasurement.js, you need at least version 1.3, and if you’re on standard s_code.js, it has to be H.27 or later.

For those of you on Adobe Target, make sure your mbox.js is version 48 or later. The AppMeasurement libraries for Flash, Flex & Air must be at least version 3.8.

And here is a big one: if you are currently tracking using 1st-party cookies, meaning you have given Adobe certificates and created CNAME records in your name server, you can drop all of that. The Visitor ID Service sets cookies using Javascript and therefore doesn’t need any CNAMEs or certificates. Read CNAME and the Visitor ID Service in the online help for more information.

Implementation

The first step is to download a Javascript file.

Just like the core Javascript code in the s_code.js file provides everything needed for Adobe Analytics, the visitorAPI.js file defines the Visitor ID Service core code.

The second step is to configure the file.

At the very top of visitorAPI.js you have to configure two or three lines. You will need to add your report suite ID into the first line and your tracking server into the second (and potentially tracking server secure into the third).
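A sketch of what that configuration might look like (placeholder values; they must match your s_code.js file):

	var visitor = Visitor.getInstance("myrsid");           // your report suite ID
	visitor.trackingServer = "metrics.example.com";        // same as in s_code.js
	visitor.trackingServerSecure = "smetrics.example.com"; // if you track via https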

I have previously explained where to find the report suite ID, and the tracking server(s) can be found in the configuration section in the s_code.js file. Sometimes they are hidden away just above the “* DO NOT ALTER ANYTHING BELOW THIS LINE ! *” line.

Make sure the values match the values in your current s_code.js file, otherwise there’ll be tears and “visitor cliffing”. Visitors who had a perfectly valid visitor ID will be assigned a new one, temporarily inflating the count of unique visitors on the site, making your friendly marketer sad.

The third step is to add this file into all of your pages or templates.

Add it before any other Adobe Javascript files. For some of you that will mean putting it into the <head> tag which is perfectly fine.

The fourth — somewhat optional, but very recommended — step is to track whether the library works or not. The documentation tells you what to track and to wait for the coverage to be 100% before moving on. It does not tell you that you can assign the variable in the s_code.js file so you don’t have to do it on each page.

The fifth step is to tell Adobe Analytics to use the new visitor IDs.

For that you just use the new Visitor object to set the new s.visitor “variable”, like so:

	s.visitor = Visitor.getInstance("RSID");

Those of you who are using a tag management system have it easy. You will likely be able to do the whole change in your TMS and no one will even notice!

Please make sure you test thoroughly before you put the change live, though! Visitor cliffing is not catastrophic, but it can mess up the data quite a bit and your friendly marketer might be slightly less friendly if you do that.

As usual, the documentation in the online help is pretty good, so you might want to give that a try as well.


Filed under: AppMeasurement, Javascript, Page Code, Principles Tagged: appmeasurement.js, cookies, javascript, s_code.js, s_vi, tag manager, visitor ID

Visitor ID Service Revisited


(Ah, the title… see what I did there?)

Just two days after my article about the Visitor ID Service, my colleagues in Engineering and Product Management released a new version of Reports & Analytics and made a couple of nice changes to the Visitor ID Service.

So, essentially, forget everything I said last time. Maybe not all of it, but there are two aspects that I have to revisit (ah, I did it again!) and one that I’d like to mention.

CNAME

The Visitor ID Service can now handle CNAME implementations.

Two new variables allow you to tell the Service which domain it should use for cookies. These are:

	visitor.marketingCloudServer
	visitor.marketingCloudServerSecure

Set those two and the Visitor ID Service can be used to track visitors across domains.

The online help explains nicely why that works, even with Safari and other browsers that have a strict policy regarding 3rd-party cookies.

Report Suite ID

Or rather not.

In my last article, I wrote that when you instantiate the Service, you do so using the report suite ID. That is no longer the case.

Instead, you now use the “Adobe Marketing Cloud Organization ID”, which you can get from Client Care when they set you up for access to the Marketing Cloud.

It makes sense to me to replace the report suite ID with something else. Some customers use the Marketing Cloud but not Analytics. Why would they specify an rsid? And what about tools that have no concept of a report suite? Why would I give Audience Manager an rsid? Or Target? It would make no sense!

So if you implement the Visitor ID Service now, you pass the MCORG ID into the “constructor” rather than the rsid.

Visitor IDs & Customer IDs

You can now specify IDs for your visitors that come from your back end systems.

For example, if a visitor logs into your site, you now know who it is and you can send that data into the Visitor ID Service along with the normal tracking or targeting.

A visitor can have more than one customer ID and you can assign those IDs to names that make sense to you.

I want to point out that, currently, that data is not used by any of the tools of the Marketing Cloud, but there is likely going to be some new cool functionality at some point that uses the data, so why not start preparing now?

Note: do not pass any identifiable information into the customer ID! Never! Hash it, if you’re comfortable with that. Encrypt it, if you’re a hero. Or just don’t.
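A minimal sketch, assuming the setCustomerIDs method; “crmId” is a made-up ID type, and the value is a hash, never the raw identifier:

	visitor.setCustomerIDs({
	    "crmId": "0f46ee63f1a8b0994e26d8f5a2fa3c2e"  // hashed, hypothetical
	});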

Note

One more thing: if you decide to implement the Visitor ID Service now on the site you are responsible for, you might want to read about the Visitor ID Grace Period.

Essentially, if there is any doubt whether your implementation will instantly be available everywhere on your site, you should use a Grace Period. Without one, the Visitor ID Service will not hand back a visitor ID that Analytics can read on its own, and so pages where Analytics is not yet using the Visitor ID Service will not see an existing visitor ID.

You’d end up inflating your visitor counts and upsetting your friendly marketer.

There is also a section in the help on how the Visitor ID Service works, which I recommend.

(Sorry for the short posting this week. I am in the middle of moving to Basel in Switzerland and I’m currently lying on the floor in an almost empty house. Not the best position for typing long blog posts…)


Filed under: AppMeasurement, Integration, Javascript Tagged: appmeasurement.js, rsid, s_code.js, s_vi, visitor ID

TDD for Analytics, please


Let me throw out an idea here. Given that you are a developer, I hope you can a) appreciate the idea and b) help me develop it into something that works.

The idea is to use test-driven development for Analytics or maybe for digital marketing tagging in general.

Motivation

I recently attended a conference in Berlin where we discussed “agile” in the context of analytics. The participants had a lot of ideas and were trying hard to grasp the meaning of “agile” in our part of the world.

Unlike the other discussion rounds I attended, this one was more like poking around in the dark, and I think we all felt it.

The truth is: we are so far away from being “agile” in analytics that no one really has a clue even where to start! Most people do not know what “agile” means in this context. And it must be difficult for a marketer to see why they should work “agile”, whereas for you, it is almost obvious by now.

So, could we pick just one aspect of “agile” and start with that?

Funny enough, we sort of have already. I am not the only one who says that the implementation phase is NOT key. Consultants everywhere use the “crawl, walk, run” analogy to describe how an organisation should approach new stuff.

Starting with a small item (plan, deploy, learn, understand, use) makes it easier for everyone to do things and limits the risks.

And starting with something small is almost like working on a story, isn’t it?! Or at least it could be, which is why I think this might be the easiest way into the whole “agile” world.

Plan

So the idea would be to take it a step further and formalise the way new functionality is introduced into the analytics setup (or the overall digital marketing deployment setup).

Let’s use “I want to know the impact of my paid campaigns on newsletter signups” as a requirement and acceptance test case (if we did ATDD).

We would add that test to our analytics framework.

If we ran it, the test would obviously (hopefully!) fail. Good!

Then we would break it down into smaller (unit) tests, such as “when I am passing a tracking code on the URL of a page, the Javascript code shall take the tracking code and pass it into the analytics system” and “if a tracking code is passed into the system, a Processing Rule will assign the tracking code into the campaign variable”.

Now both these tests will also fail, of course, but at this level, a developer (you!) can do something about it and provide the code to fix test 1. Your friendly marketer or an analytics administrator can fix test 2.

This way, the big, overall acceptance test will eventually pass.
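Purely hypothetically, test 1 could look something like this (Jasmine-style syntax; no such framework exists for Analytics today):

	describe('campaign tracking', function () {
	    it('passes the cid URL parameter into s.campaign', function () {
	        // pretend the page was loaded with ?cid=SOCIAL-123
	        spyOn(s, 'getQueryParam').and.returnValue('SOCIAL-123');
	        s.doPlugins(s);
	        expect(s.campaign).toBe('SOCIAL-123');
	    });
	});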

Infrastructure

Up to this point, nothing has helped you.

For this idea to truly make a difference, there must be automated unit tests, built into a framework. That’s a job for the vendors, of course.

But the moment this unit test framework exists, you will have two important advantages:

  1. no more releases that accidentally break something
  2. explicit documentation of requirements in the system

There are a lot of challenges and some great opportunities here. Off the top of my head, and to open a discussion:

  • Instead of writing “Tech Specs” or “Solution Design References”, consultants could essentially sell acceptance tests or unit tests. They’d be documentation and goal specification in one.
  • Tests can be extended as technology evolves. Think about the new Visitor ID Service. If you created a test before migrating over, you’d be a lot less stressed out about it, right? Plus all your existing tests would still be there, alerting you if the change broke anything.
  • If you think about the processing of hits, you can see that there are multiple points that tests could hook into. The simpler tests could look at (or live in) the doPlugins method in the s_code.js file, whereas more involved scenarios would have tests that looked at the processed data, maybe using the Reporting API.
  • Of course the backend would have to be tweaked to enable testing in real time. Maybe the new “Live Stream” feature could help with that.

All of this is useful and makes sense from the point of view of a developer.

The one big thing that the marketer gets out of it is the knowledge that everything will be fine because the developers know what to do and they won’t break anything.

Eventually the marketer should also realise that by working with small, contained stories, she gets things more quickly and she will know up front what she’ll get.

What next?

I think there are a lot of unknowns. Apart from the changes to the backend to enable all of this, there are surely some conceptual issues I have overlooked, or maybe the whole thing is utter nonsense. But I think there is something to it, and I’d love to hear your opinions and suggestions!


Filed under: Automation, Integration, Opinion Tagged: debug, implementation, simplicity, s_code.js

Tagging Forms (w/o Losing Money)


Our post today touches on two separate subjects: tagging forms, and limiting server calls (and therefore cost) in certain situations. The two work together very well, so I decided to combine them. Two subjects in one posting, surely that’s a good deal!

Forms

A lot of web sites exist entirely to provide visitors with a means of entering information. Popular examples are financial sites (where you can open accounts, order credit cards and so on), retail (where you specify delivery and billing addresses), and social & health sites (where you provide updates or post content). Even the simple site search is a form, although that one is special (and I have written about it already).

Marketers love forms. When visitors type something in, it provides the marketer with information about that visitor, which will eventually lead to a better understanding.

Product owners and sales need forms so visitors can hand over information that allows them to sell products. Support likes forms as well; they streamline case creation and make it easy to ask for feedback.

There is one big problem with forms, though: people eventually get sick of typing stuff into fields in their browser.

How far they go depends on a couple of things, the most important being: how badly they want what is at the end of the form. The actual form itself plays a big role as well, of course. Make a crap form and you get crap completion rates. A lot of sites got burned when mobile devices became relevant!

Bad conversion rates are bad for a couple of reasons. If people drop out while buying something, your company will not sell as much as it could. If they drop out and call instead, your company needs to staff a call centre, which is expensive.

Bottom line: make your forms as easy and well-oiled as possible.

Tracking Forms

When your friendly marketer asks you to track a form for her, this is what she means:

  • She wants to see a completion rate — the form was called up x times and submitted y times.
  • She might want to see a “Form Abandoned” metric.
  • She wants to know which field in the form scared people off — by proxy of finding the last field that they changed.
  • She might want to know how often one visitor started a specific form — “how often did they fail? did they eventually succeed?”

The first and last requirements are relatively easy to do: track both the form page and the “thanks for submitting your information” page and you can calculate x and y and therefore the form completion rate. You can use a Counter eVar to see how many times people tried (see also Counter eVars by Adam Greco and Visitor Scoring by Ben Gaines for more on those).

But what about requirements two and three?

The naive approach would be to send a tracking request every time the visitor has filled or modified a field in the form using Custom Link Tracking.

But there is a massive issue with that: each tracking call costs money!

That’s how Adobe Analytics is billed — by server call.

So it is in your interest to keep the number of tracking calls as low as possible and forms are a very good playground for explaining one widely used method: cookies plus a call on the next page.

Serve Cookies to Keep Cost Down

The idea is actually really simple. For each state change (a filled form field, for example), you update a cookie. As soon as you track something else, you check the cookie contents and send the state along. Added insight without any more cost.

So let me describe it in more detail.

Step 1 — add plugin [optional, sort of]

Your marketer will want to see last changed form field correlated to the form that the visitor was working on, we therefore need to track the name of the form and the name of the field.

The name of the field comes from the cookie, obviously.

For the name of the form, you have two options: you can store it into the cookie along with the field name, or you can use an existing plugin called getPreviousValue, which will store the name of each page the visitor looks at and return the name of the previous one when needed.

I’ll presume you’ll use the plugin.

Step 2 — find “variable”

For those of you who still put data into “variables”: you will need to find out which variable or variables to use! It is highly likely that your friendly marketer will want you to write the name of the previous page into a prop and the last field name into another. So find out which props she has assigned and use those.

Remember when I wrote about Saving “variables”? Here’s a prime example of that. You put both page and field name into a single prop, then you (or your friendly marketer) would use the Classification Rule Builder or just plain SAINT to make a useful report.

Or you use two props, which in a lot of cases makes sense because you might need the name of the previous page for a couple of different things.

If you are using Context Data or a Data Layer: just send the data with a nice, self-explanatory name.

Step 3 — populate “variable”

Quizz question: when or where do you populate the “variables”?

Answer: on every single page on your site, but only when needed.

If you are rolling your eyes now, then you might have forgotten about the developer’s best friend, the s.doPlugins callback!

The function is called whenever you track something, be it using s.t() or s.tl().

Which means you can put code into that function that handles situations like the one we’re looking at. To be clear: there is no need to add anything to any page.

Within s.doPlugins, you add a block of code that:

  1. checks whether the cookie exists and has data
  2. reads the values for last page name and last form field
  3. writes those values along with a “Form Abandonments” event

Are you using props? Do this:

	s.prop21 = 'Blue Credit Card Application - Form 1|last name';

Using a data layer? Do this:

	dataLayer['prevPage'] = 'Blue Credit Card Application - Form 1';
	dataLayer['lastFormField'] = 'last name';

Step 4 — clear cookie

This is an important and easily-forgotten step: you have to clear the cookie!

Why?

Well, if you don’t, your code will just read the cookie again and send the same data again, until the cookie expires!

You have to clear it in two places or two situations: when the last form field has been tracked (best done in s.doPlugins after you read the cookie), and also when the form has actually been completed and submitted (you can do this on the click, or in s.doPlugins by checking the name of the current page. If it is the “thank you for submitting our form” page, then there is no need to track abandonment and you can clear the cookie.)
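Putting steps 3 and 4 together, the doPlugins block could look something like this (a sketch of the two-prop variant; cookie name, prop and event numbers, and the “thank you” page name are all examples):

	function s_doPlugins(s) {
	    // getPreviousValue remembers the current page name on every hit
	    var prevPage  = s.getPreviousValue(s.pageName, 's_pv');
	    var lastField = s.c_r('s_lstfldname');
	    if (lastField) {
	        if (s.pageName !== 'form:thank you') {
	            s.prop21 = prevPage;   // the form the visitor was working on
	            s.prop22 = lastField;  // the last field they changed
	            s.events = s.apl(s.events, 'event20', ',', 2); // "Form Abandonments"
	        }
	        s.c_w('s_lstfldname', '', new Date(0)); // clear the cookie either way
	    }
	}
	s.doPlugins = s_doPlugins;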

Step 5 — tag your actual form

On the form itself, you want to do two things:

  1. save the page name
  2. add handlers that store the field names when the user interacts with them

The first one is simple: just add a call to the s.doPlugins function like so:

	var prevPage = s.getPreviousValue(s.pageName,'s_pv');

The second one means you have to hook into the relevant callbacks for all form fields, like onChange, input, keyup or whatever you personally believe works best. I’m pretty sure you’ll be using jQuery or an equivalent life-saver.

Within those handlers, all you need to do is write the name of the field that called it into your cookie. You can use the jQuery Cookie Plugin for that, or you can use a function called s.c_w() that comes with the s_code.js.

	s.c_w('s_lstfldname',field_name,expiry);

It is perfectly safe to omit the expiry parameter, turning the cookie into a session cookie.
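A sketch of such a handler, using jQuery (the form selector is made up):

	$('#applicationForm :input').on('change', function () {
	    // remember the last field the visitor touched (session cookie)
	    s.c_w('s_lstfldname', this.name);
	});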

Step 6 — test, rigorously

There are a couple of moving pieces in this machinery you just built, meaning you should test it until your dreams are filled with cookies, plugins and form elements. You do remember how to debug, don’t you? Look at the URL!

Notes

The same mechanism is often used with the getPercentPageViewed plugin, which I’ll describe at some point. The plugin measures how far down visitors scroll on a given page, and it uses cookies for the exact same reason: cost reduction.

The big assumption behind this method is of course that the visitor will see one more page at some point. If they hate your form so much that they completely walk away from your site, the cookie will rot in their browser forever and you won’t see any data.

If that is the case (and you can find out in a couple of ways, one being the Pathing Report), or if you simply don’t care about money as much as you care about data, you could track with a timer instead: you still store state changes in a cookie, but instead of sending the information off on the next page, you track it every 5 seconds or so, if there was a change.

The big issue with that is the analysis. Each visitor will generate a bunch of data points, making reporting and analysis more challenging than with the original method which generates exactly one data point per form.

Some marketers would like to see additional events, like “Form Abandonments”, “Form Errors” and “Form Completions”. “Form Abandonments” can be sent along with the last field name, simple. “Form Completions” can be tracked on the “thank you” page easily.

“Form Errors” are different in that you might actually want to track them when they happen.

There are different types of errors, of course. There might be an actual error with the form, or you might call a violation of data constraints in the form an error. Either way, it is not likely to happen as often as the change of a field, so I’d probably just use Custom Link Tracking as and when it happens.

I’m sure I don’t have to tell you this: the whole article is utterly useless for those visitors who do not accept cookies. But we’re talking about 1st-party cookies here, so that is pretty rare, especially when the users do want something from you.


Filed under: Javascript, Plugins, Tips Tagged: classifications, context data, cookies, data layer, javascript, pagename, plugins, prop, s_code.js, variables

With DTM you don’t need Plugins! – Part 1


Today’s article is a bit of an experiment. I have set myself a goal, and I’ll try to reach that goal and document it.

The goal: getting rid of plugins in the s_code.js file.

Why would I do that?

Couple of reasons:

  • Plugins in the s_code.js file make it more difficult to move to DTM. If my s_code.js had no plugins and no doPlugins method, it would be much simpler to remove it from my page code and load it via DTM.
  • Simplifying and/or minimising the amount of Javascript code can only be good from a maintenance perspective.
  • Plugins are written, provided and maintained by Consulting. That means some of them are not free. Using DTM to provide the same data or logic would therefore save money.
  • Using DTM functionality should make the setup more robust against changes.
  • I haven’t done this before.

It will come as no surprise to readers of this blog that the last reason is enough to make me try this. Plus the fact that it would spawn this article and a couple more.

Status

Let’s start with an inventory of plugins in my very own s_code.js file. I am using the following currently:

  • getQueryParam — for grabbing external and internal campaign tracking codes.
  • getValOnce — to de-dupe campaign tracking.
  • apl — to add a custom Page View event to my setup and a couple other events.
  • getTimeParting — to analyze traffic based on days and time of day.
  • getDaysSinceLastVisit — to analyse how often (the few) repeat visitors come to my site.
  • getPercentPageViewed — to check whether people scroll down my articles or not.
  • getPreviousValue — for grabbing the name of the previous page (good with getPercentPageViewed).
  • getVisitNum — again to analyse behaviour of returning visitors.
  • plus some self-made code that does all sorts of things.

Some of those are easy to replace (think getQueryParam), others not so much.

Oh. In case you are wondering why you cannot see any of these or even an s_code.js file here on this blog… I am talking about another site. This blog is hosted on WordPress and I can’t put Javascript onto the pages. I do track it (as you can see using Charles), but I won’t even discuss how because I use a method that is absolutely revolting, technically.

Right, with that out of the way, let’s remove some easy plugins!

getQueryParam

Now this is so easy to do it almost feels wrong to give it a headline.

DTM can be configured to read URL parameters in multiple places: the configuration of the Adobe Analytics tool in the Page Load Rule is one of those, a Data Element would be another.

[Screenshot]

Setting of Campaign Tracking Code in Page Load Rule

[Screenshot]

Data Element for Tracking Campaign Tracking Code

Easy.

apl

I use the apl plugin to add a custom Page View event to all of my pages. This is one of those best-practice things we all do when implementing Adobe Analytics (others are adding a custom Product View event and passing the page name into an eVar).

This is also very easy to do: just set the event in the Page Load Rule for Adobe Analytics, and you’re done.

[Screenshot]

Page Load Rule with Custom Page Views Event

c_r

The c_r function is not technically a plugin. It is a function that reads a value from a cookie, that’s all.

Depending on whether you are using the Cookie Combining Utility, it will read either from the cookie that you name or from the combined cookie.

So this one can only be replaced by a Data Element of type “Cookie” if you do not use the Cookie Combining Utility. I do, unfortunately.

But if you don’t, then this is the easiest replacement ever.

Notes

There are quite a few other plugins, which I may discuss in a follow-up at some point. I’m also thinking about plugins like Channel Manager, Cross-visit Participation (cvp) or maybe even the Cookie-combining Utility, which reduces the number of cookies used by the plugins, but makes moving to a plugin-less s_code.js file pretty tough.

In a future article, I will look into Conditions in DTM and see whether they help me further.

Also, what’s left over can be moved into DTM, as is, if you want.

[Screenshot]

Where to put the Code from s.doPlugins

Hm…

Two down, lots to go. “I’ll be back”


Filed under: Integration, Javascript, Plugins Tagged: DTM, eVar, event, plugins, s_code.js

With DTM you don’t need Plugins! – Part 2


Let’s continue on the journey and replace one more plugin with DTM goodness.

I even get to introduce an almost hidden feature!

getVisitNum

I was slightly disappointed last time because I couldn’t find an easy way of replacing getVisitNum. This plugin keeps track of how often someone comes to the site. Very useful for segmentation purposes.

The plugin really does two things: maintain the number of visits in a cookie (which means increasing the number at the start of a visit), and providing the current number so we can track it into a “variable” of our choice.

I knew that reading the visit number from a cookie was easy: just make a Data Element based on the cookie.

But would I have to add some custom logic to manage the visit number? Where exactly?

Turns out the answer is no! I don’t have to! DTM provides everything I need, although it is hidden away a little bit.

When you make a rule, you can add a condition based on the number of visits. Oh. So DTM actually does keep track of visits for me? Cool!

Can I read that value somewhere? I couldn’t find it.

Then my colleague Anna pointed me the right way: the visit number is only handled by DTM if I actually have a rule with a condition based on “Sessions” (aka Visits) in my setup. Otherwise it doesn’t care.

But if I do have a rule with such a condition, it will write a cookie (“_sdsat_session_count”)! Eureka!

So, let’s set up a rule with a condition, like so:

[Screenshot]

DTM Rule with Session-based Condition

Now if we push that live or use the “DTM Switch plugin” to check the staging version, we should see a new cookie on our site.

[Screenshot]

DTM Session Count Cookie

Success!

All we need now is a Data Element that reads that cookie:

[Screenshot]

Data Element for Visit Number

Use that Data Element with the Analytics variable that you currently use with the getVisitNum plugin:

[Screenshot]

Assigning Data Element to prop33

And Bob’s your uncle!

Bonus

For added bliss, you could add custom Javascript that reads the existing cookie and bootstraps the new one, so the visit number wouldn’t start at 1 for everyone.

In my s_code.js file, I was assigning the prop like so:

[Screenshot]

Original Call to getVisitNum in s_code.js

No parameter on the getVisitNum plugin means that the plugin will store the visit number in a cookie called “s_vnum”.

So all we need to do is read that value from the “s_vnum” cookie and write it into the “_sdsat_session_count” cookie before DTM uses that cookie for the first time to populate our variable.

My guess would be that placing such code into a sequential Javascript block at page top should work.

Don’t forget to delete the “s_vnum” cookie! You want to transfer the visit number once only.
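A sketch of such a bootstrap block (it assumes the classic getVisitNum cookie format, “<expiry>&vn=<number>”, which you should verify on your site):

	(function () {
	    function readCookie(name) {
	        var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
	        return m ? decodeURIComponent(m[1]) : null;
	    }
	    var old = readCookie('s_vnum');
	    if (old && !readCookie('_sdsat_session_count')) {
	        var vn = /vn=(\d+)/.exec(old);
	        if (vn) {
	            // transfer the old visit number, then delete the old cookie
	            document.cookie = '_sdsat_session_count=' + vn[1] + '; path=/';
	            document.cookie = 's_vnum=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT';
	        }
	    }
	})();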


Filed under: Integration, Javascript, Plugins Tagged: data layer, DTM, javascript, plugins, s_code.js

Quick Tip – One s_code.js for Multiple Sites

$
0
0

The situation: you are responsible for a couple of sites on different domains. Tracking is pretty much the same on all of these sites, but there are some small differences (e.g. each site tracks into a separate report suite).

A new version of the s_code has come out and you want to update all your sites.

What’s the best way of doing that?

The best way, of course, is to use a Tag Management System, like DTM.

Using a TMS, you can inject the s_code.js file into all your sites, and you can also change it there.

This helps you get around the release schedule, but it doesn’t address the fact that tracking is similar across your sites. What if 95% of the tracking is the same? Do you really have to touch every site just because something has changed in that part of the code?

Slice it!

I wouldn’t ask that question if I didn’t have an answer, would I?

It’s pretty straightforward: slice the s_code.js file, split it into pieces.

You might remember that the file has 4 sections in it: configuration, doPlugins, plugins, and core Javascript code.

A pretty simple approach would be to split the file into 4 pieces following those 4 sections:

  1. The section most likely to change for all sites at once (the core JS code) can be split into a separate file. You can use that file across all your sites. Great: only update a single file when the core JS changes!
  2. Your settings will be different for each site, so the part of the file that handles the settings can be separated and each site can have a (fairly static) file that contains just the settings.
  3. The code that defines the plugins can be split out. It is fairly likely that all your sites use more or less the same plugins. So it makes sense to have one file contain them all, then load that file on all sites. And: if you want to update a plugin, you only have to do it once.
  4. For the doPlugins method, you might have a different setup per site. In which case you would have a lot of these, for all your sites. But maybe the method is the same everywhere, in which case you can use one file, like for the core JS code and the plugin definitions. (Btw: it makes sense from an analytics point of view to have a similar doPlugins method. Think standardisation.)

DTM follows this approach partly: it allows you to set configuration directly in the UI, and you can instruct it to inject the core JS code straight from DTM.

The plugin code as well as the doPlugins method can also be defined in the DTM UI, but they’re usually not separated. I guess if you are using the exact same plugins in the exact same ways across all the sites you manage, then why separate those two?

In essence, splitting the s_code.js file in 3 is a good option as well.

Five

And so is splitting it into 5 pieces!

At the very top of the file, you usually set the report suite ID into a variable called s_account. The next step in the file is to call a method called s_gi with that rsid. The result is the “s object” that handles everything else.

[Screenshot]

The Top of the s_code.js File

The setting of s_account has to be done before the call to s_gi, and the call to s_gi has to be done before we set any configuration variables on the s object or define or call any plugins.

It makes sense, therefore, to treat the assignment of s_account separately. In fact, this might be the only difference between two sites, all the rest might be identical!

So… 3, 4, 5, how many pieces do you need?

My advice:

  • if all your sites have identical tracking setup, split in 3
    • s_account,
    • configuration, plugin code and doPlugins, and
    • core JS code
  • if, however, the setup is different between your sites, split in 4 or 5
    • s_account,
    • configuration,
    • plugin code,
    • doPlugins, and
    • core JS code

Make sure you always have a separate file for the core Javascript code — that’s what’ll allow you to easily upgrade to the latest version!

No TMS?

What if you don’t have a TMS?

You should probably still split the s_code.js file!

One added thing for you to think about then is: in which order do you need to load the three to five JS files? What are the dependencies? Where on the page should you load them?

In principle, you should load the files in the order they appear in the s_code.js file.

  1. The assignment of s_account must be first
  2. Defining the s object via a call to s_gi must be after s_account but before everything else
  3. The configuration can happen parallel to the definition of the plugins and doPlugins. All three must happen after the s object is initialised and before the call to s.t().

Given that it makes a lot of sense to call s.t() at the very end of the page (a raging debate, also within our organisation. I am firmly on the “as far down as possible” side), you can load the others pretty much anywhere.
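As an illustration, a five-piece split could load like this (file names are made up):

	<script>var s_account = "myrsid";</script>
	<script src="/js/s_core.js"></script>       <!-- core Javascript code (defines s_gi) -->
	<script>var s = s_gi(s_account);</script>
	<script src="/js/s_config.js"></script>     <!-- configuration -->
	<script src="/js/s_plugins.js"></script>    <!-- plugin code -->
	<script src="/js/s_doplugins.js"></script>  <!-- doPlugins -->
	<!-- ... page content ... -->
	<script>s.t();</script>                     <!-- at the very end of the page -->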

I have seen sites that inline the assignment of s_account, which to me makes some sense, but obviously makes it more difficult to change the report suite if that should be needed.

Have you split up the s_code.js file? How? Why?


Filed under: Javascript, Plugins, Tips Tagged: DTM, javascript, plugins, s_code.js, tag management

Internal URLs


Every program, every tool has this one thing that causes head-scratching all over. For Adobe Analytics, one little head-scratcher has to do with the specification of “internal URLs”.

“Internal URLs?” you ask, “what on Earth are ‘internal URLs’?”

Your site has two interfaces to the rest of the Internet:

  1. Incoming traffic via links from other sites
  2. Outgoing traffic via links to other sites

An example journey on this blog: a visitor sees an article on LinkedIn, then she clicks through and lands on the blog. From here (and hopefully after reading) she exits via a link to an article by, say, Adam Greco.

I want to track where people come from and where they go. For the tracking to be able to give me that, Adobe Analytics has to know “where my site starts and ends”, so to speak. It needs to know the URLs that I consider to be my site.

And that is what I configure as “Internal URLs”.

Easy.

But…

I know, I know.

Let me show you where to set this up. Then we’ll get to your question.

Setup

Let’s log into Adobe Analytics and go to the Admin section. Find “Admin” in the menu on the left, then “Report Suites”. Select the report suite in question.

Now hover over “Edit Settings”, move the mouse to “General” and then click on “Internal URL Filters”.

[Screenshot]

Internal URL Filter Menu

You get to the following page (apologies for the huge screenshot) where you can add and modify URLs.

[Screenshot]

Internal URL Filter Setup

Right, that’s one down, one to go.

Remember the s_code.js file? It has a configuration section. In there, you’ll find a variable called s.linkInternalFilters, which you should set to a comma-separated list of URLs. Like so:

	s.linkInternalFilters="javascript:,jan-exner.de,fototimer.net,exner.com";

You can see that the list contains the same URLs as the list in the UI. But before we get to the obvious question (“WHY?”), let me say one more thing about those “URLs” you configure in those two places.

URL Fragments

Thing is, they’re not URLs.

Instead, they are fragments of URLs, substrings, to be exact.

So “jan-exner.de” would match “http://www.jan-exner.de/astro”, but also “http://we.like.com/jan-exner.de” (checking… nope, 404. *phew*).

Some people like to put just their brand, some are more specific. I guess it depends how popular your brand is and whether there are likely any sites that include your brand name in their URLs.

The Answer

And now, finally, the answer to your question: why are we configuring the internal URLs twice?

Well, it’s because they matter in two completely separate situations.

Situation 1 — when someone comes to your site, we want to see referrers. That’s something handled completely in the back-end, on the Adobe servers during processing.

To allow the back-end to distinguish (interesting) external URLs in the referrers from (boring) internal ones, you have to tell it what those internal URLs are, and because it is for the back-end, you configure it in the UI, as we saw above.

Situation 2 — when a visitor clicks a link on one of your pages, the tracking code has to make a decision: do I track this as an Exit Link or not?

The core Javascript code makes that decision, and we therefore have to let it know those URLs as well. That’s where the s.linkInternalFilters variable comes in.

So because there are two separate situations, there are also two places to configure them. It is rare for people to configure them differently, but I have seen the odd case where it made sense.

I hope that takes away some of the head-scratching.


Filed under: Javascript, Principles Tagged: internal urls, s_code.js

Quick Tip: Delayed Tracking with DTM


Sometimes you find yourself in a situation where you need to track something outside the normal way.

Examples can be when you want to track data that is not available directly on page load, such as product availability.

Easy: use DTM

In theory, this should be easy: Just make a Page Load Rule, set “Trigger Rule at” to “Onload” under Conditions, and Bob’s your uncle.

[Screenshot]

Trigger Rule at Onload

The great thing: you can then administer all tracking you want to do via Data Elements and the fields in the “Adobe Analytics” section of the rule.

But alas, sometimes this is not good enough. Sometimes you need to wait a bit longer, specifically if the site you are working with uses asynchronous loading of data.

There is an added bit of complexity with situations like these. You have to make sure there are no “race conditions”.

A race condition is when two pieces of code execute independently and do not always do so in the order you need.

The standard tracking on your page could be slow for whatever reason, and your delayed tracking could happen earlier. That could pretty much mess up tracking.

So what do we do?

Yes: use DTM

The answer is still to use DTM, but let’s make sure we deal with the race conditions. We will have to do three things:

  1. Wait for the “normal” tracking on the page to fire
  2. Delay a bit more
  3. Track our data

The third thing is easy, we have done it before many times. But 1 and 2 deserve some more attention. Luckily for us, Javascript comes with two handy methods: setInterval() and setTimeout().

While setInterval() allows us to execute code repeatedly (like checking for something, maybe?), setTimeout() is good for executing code after a bit of time, once.

One question left: how do we know the tracking request has fired?

You can check for the existence of an object named “s_i_<rsid>”, which gets attached to window when the tracking call goes out. That is probably the easiest way.

So, without further ado, here is the complete code:

[Screenshot]

Delayed Tracking Script

Put all of this into a sequential or non-sequential Javascript block in a new Page Load Rule (which you can trigger at “Top of Page”, if you want!).

Some of that needs a bit of explaining, I guess.

In lines 1 – 4, I load the necessary constants from Data Elements.

The rsid is an obvious one, and I’ll use “Check Interval” as the interval length for the setInterval() call, and “Delay Time” as the wait time for setTimeout().

I have set “Check Interval” to 100 and “Delay Time” to 1000.

Meaning: the code will check every 100 milliseconds (10 times per second) whether tracking has gone out, then wait another 1000 milliseconds (1 second) before tracking.

I used Data Elements for these constants because I’ll potentially be having multiple of these rules and I want to be able to change the times centrally.

Lines 9 & 10 check for the existence of that “s_i_<rsid>” element.

Lines 12 – 17 cancel the interval timer plus the watch dog (more later).

Lines 20 – 25 are where your tracking code goes. You can set props, eVars and events here, then you must call s.t() or s.tl().

Don’t forget s.linkTrackVars & s.linkTrackEvents if you use s.tl()!

In lines 30 – 33 I define a second timeout. It waits 10s then calls a function that kills the interval. I don’t want that check to run on forever, and the assumption is that if after 10s no tracking has happened, it probably never will.

You could argue that this should happen much earlier than after 10 seconds, and I’d say you’d be right.
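
If the screenshot refuses to render for you, here is a minimal sketch of the same idea, assuming the three Data Elements described above exist and using made-up props and events as placeholders (the line numbers above refer to the original screenshot, not to this sketch):

var rsid = _satellite.getVar("rsid");
var checkInterval = parseInt(_satellite.getVar("Check Interval"), 10); // e.g. 100
var delayTime = parseInt(_satellite.getVar("Delay Time"), 10); // e.g. 1000

var watchdog;
var checker = setInterval(function() {
    // the s_i_<rsid> object only exists once the standard call has fired
    if (window["s_i_" + rsid]) {
        clearInterval(checker);
        clearTimeout(watchdog);
        setTimeout(function() {
            // your tracking goes here; prop and event are placeholders
            s.linkTrackVars = "prop1,events";
            s.linkTrackEvents = "event1";
            s.prop1 = "delayed";
            s.events = "event1";
            s.tl(true, "o", "delayed tracking");
        }, delayTime);
    }
}, checkInterval);

// watch dog: if tracking hasn't fired after 10s, stop checking
watchdog = setTimeout(function() {
    clearInterval(checker);
}, 10000);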

I hope this’ll help someone, and let me know if you can see any improvements!


Filed under: Javascript Tagged: complexity, data layer, DTM, eVar, event, implementation, javascript, link tracking, prop, rsid, s_code.js, tag manager

The s_code.js File – Where is it now?


Sometimes, I get narcissistic. I log into Analytics and go through the numbers, for no other reason than wanting to see whether they have gone up again. Then I am pleased.

For about a year or so, I have always seen one specific article at the top of the list, The s_code.js File – Overview. This has always puzzled me slightly. It has often led to me thinking I should write about it some more. But: what else would I write about it? The mini-series pretty much covered it all: overview, configuration, the doPlugins method, plugins, and the core Javascript code. So, what’s left to say?

Well, there might be a little bit left, and it has to do with — wait for it — DTM! Yay! Another DTM post!

But if you are new to Tag Management, I guess you might have asked yourself the question “where does that stuff go?”, and so I shall try to answer that question today.

This will not be a mini-series, more a mini-article. TMS are cool, they make things easier.

Configuration

In the s_code.js file, there are configuration settings, where you specify things like the report suite to be used, tracking servers, internal link URLs, or character encoding.

I’m happy to say that your TMS will take care of that stuff. In DTM, to cite an obvious example, you can set all of these in the configuration of the Analytics Tool.

[Screenshot]

DTM – UI for configuration of Analytics

Easy.

s.doPlugins()

As I mentioned about a month and a half ago, the s.doPlugins() method is alive and well, even if you’re using a TMS.

If you are not yet using a TMS, you’re bound to have one in your s_code.js file. So where does it go?

I would put it into a place where it is executed (which means it sort of installs itself) once per page load. Your TMS documentation will show you where that would be.

For DTM, I can tell you: put it into the edit box under “Customize Page Code” in the Adobe Analytics Tool.

[Screenshot]

DTM – where to put doPlugins

Why does this work?

Well, DTM makes the deployment of Analytics more fancy, but under the hood, it still loads the Javascript code and calls s.t() and s.tl(). The core JS code still does what it always did: call doPlugins() before it actually tracks.

So, if you create a function called s.doPlugins(s) and set s.usePlugins = true, the function will be called on every tracking call.

Magic.
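
To make that concrete, here is a minimal sketch of what could go into that edit box (the prop and the logic inside are placeholders, not part of the original setup):

s.usePlugins = true;
function s_doPlugins(s) {
    // runs before every s.t() and s.tl() call
    s.prop10 = document.title; // placeholder logic
}
s.doPlugins = s_doPlugins;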

It might make sense to remove some code from the method, though.

Think about it: how much of your code could be replaced with a Data Element? Which part of it could rather be dealt with using an extra rule?

My tip: each bit of code that you migrate from doPlugins and into DTM (or whatever your TMS is) will make your setup easier, at least easier for your friendly marketer.

Plugins

Now where do the actual plugins go? (That is, those that you still need!)

I’m sure you can guess: they go where the s.doPlugins() method goes as well: into the edit box under the “Customize Page Code” section in the Analytics Tool.

Again, we want the plugin code to be called once, at page load, on every page. That’ll define the plugin, so it can subsequently be used in doPlugins.

Do you put the plugins code below or above doPlugins?

Doesn’t matter. As long as the browser has gone through them before the first call to doPlugins, everything is fine.

Core Javascript Code

The core Javascript code is very similar to plugins and the doPlugins method in that it must be loaded once per page. You might copy your existing code, and there might even be a place in your TMS reserved for that. In DTM, that place is under “Library Management”: yet another “Open Editor” button.

[Screenshot]

DTM – core JS could live here

I have to say, though, that I don’t like that.

Some people just copy their complete s_code.js file into this editor, which totally works, but…

Don’t you want to disentangle things? Make your setup easier?

The easiest, and really the preferred way of handling the core Javascript file, is to not handle it at all, like so:

[Screenshot]

DTM – core JS should live here!

Notes

I’m sure there are good reasons to stray from the path I outlined here.

Actually, no, I am not.

The reality is that if you actually do copy the whole s_code.js file in one go, you bring your old boxes into a new house. Have you ever moved house? Noticed how you never actually open some of those boxes? This is one of those. It’ll stay closed forever. I say open it before you move, throw away the cruft!

I did it along with a German customer, and it took us a day. Down the line, we have a slimmer system, and we’ll get that day back multiple times over because we don’t have to always worry about side-effects of the old stuff.

Absolutely worth it!


Filed under: DTM, Integration, Javascript, Page Code, Plugins Tagged: appmeasurement.js, complexity, DTM, implementation, javascript, s_code.js

TDD and Adobe Analytics


Some time ago I wrote an article in which I dreamt of having some sort of test-driven development capability in online marketing tools. At the time, I said the vendors had to chime in, building hooks, frameworks or APIs into their systems so users and consultants would be able to be test-driven.

I had discussions with a colleague in product management about it, but before we could get anywhere, that colleague left Adobe, so I forgot about the whole thing, reluctantly.

Fast-forward a year and we now have a tool in the Marketing Cloud that might help revive a scaled-down version: Dynamic Tag Management (DTM).

DTM adds flexibility to the deployment, allowing marketing teams to at least partly relieve IT and development.

Somehow, the combination of flexibility on the input side paired with real-time access to data on the receiving end makes me think there should be something we can do.

The flexibility also means that marketing is less restricted by IT / dev and can do things independently. That in turn means testing is now more necessary than ever before!

Define

I think this is one area where our profession is way behind everyone else. And I think it hurts us. So I decided to think out loud, and maybe someone will chime in and we will develop something.

Right.

With that out of the way, and given that you’re still reading, I can presume you’re brave. Or bored.

Onwards!

Let’s see what we need for TDD and what we might have. The following list is from the Test Driven Development FAQ on pantras. I have only taken the simple things, let’s start small.

  • Unit — this is the “thing” we want to test. It is probably not a piece of code but rather a part of our setup. “Am I actually tracking my campaigns?” might be a “unit” in our case.
  • Test case — the definition of a test. An example might be “the page URL contains a URL parameter cid=’test’, assert that ‘test’ shows up in s.campaign”.
  • Fixture — the fixture is a defined environment and state in which the test can be run. See below.
  • Assertion — the assertion is where you tell the test framework what you expect the outcome to be. For my test case above, it would be “assert that s.campaign contains ‘test’”, or even “assert that we can see one instance of ‘test’ in our Tracking Codes report”.
  • setUp & tearDown — two methods that create the fixture before the test and dispose of it when the test is done. The setUp() method creates everything that is needed for the test, a defined state. In TDD, the fixture should be as simple as possible. There is a risk that a complex fixture itself might not be correct, which would render the test useless.
  • Refactoring — the TDD mantra is “red – green – refactor”. If you want to add functionality to your system, you first make a test, which will likely fail. Then you do whatever is the simplest way of making your system pass the test (yes, often that means hard-coded values! putting s.campaign="test"; into the s_code.js file totally counts!). Once it passes, you refactor. Refactoring means removing programming sins, like hard-coding. You know that your refactoring is good as long as the test stays green.
  • Mock object — mock objects simulate parts of the system in the tests, usually parts that are “expensive”, such as DB transactions or disk I/O. In our case, mock objects could replace visitors or 3rd-party processes such as emails being sent.
  • Red bar & Green bar — “the bar” tells you whether all tests were successful (green) or not (red). It is a visual representation of your system’s behaviour in the test. Would be great to have something like this!

I won’t claim that we can do actual TDD. Our “Units” are far too abstract. There are too many black-box components involved, and you have no control over most of the technology.

Of the 5 rules of thumb of when a test is not a unit test in the above FAQ, we will violate at least 4.

But I maintain that some testing is better than no testing at all.

What Next?

While I have been writing, editing, re-writing, and thinking about this article, Craig Scribner published an ebook on auditing Adobe Analytics which covers some of the pain points I see. I am very happy that I am not the only one thinking about the issue of data quality and specifically “quality rot” in Analytics.

But what Craig’s ebook shows is that there is a need here. And there are people around the world who want this.

My current plan: I think the vision is pretty big in terms of moving parts. It might be too big for any of us to tackle alone. The best answer to something like that: de-scope, radically.

So that’s what I’m currently doing. Thinking about where TDD can easily be done, and where it would have the biggest impact.

I absolutely love Craig’s idea of logging an audit score in Analytics, because — as he points out — it reinforces the point that the data is valid. It’s steps like this that will help us get there, I think.

Goals & Stakeholders

I discussed testing with two colleagues tonight, both working as consultants or architects on AEM, the Adobe Experience Manager. Their primary concern is: how do you serve content, fast and efficiently. They are very familiar with testing, obviously. They use selenium on the specific project we discussed.

But what I realised tonight has nothing to do with tools and everything with stakeholders, goals, and needs.

Which is great, because it helps me deconstruct and de-scope my problem further!

The realisation was that we both (“them and me”) can break the setup easily, mainly because with Tag Management, Marketing’s usual urgency and Javascript’s possibilities, we all work in a remarkably decoupled environment.

The epiphany Craig Scribner described in his post on Direct Call Rules is a really good description of this problem.

I still think the actual issue is not the tool, but the fact that we are so decoupled, and I think we can fix that. Because both CMS people and measure people can easily break the code, both need a means of knowing when they do.

CMS people and measure people have very different needs, in fact, but both want to know immediately if any change they made had a negative impact on what others are doing.

I feel that decoupled is good, as long as there is a “contract” that both sides adhere to. Tests can enforce that contract.

Depth of Tests

The thing is: IT and CMS people do not care what they have broken on my end. All they want is to know if they have.

That level of test is a lot easier than a full end-to-end test (which would help me, of course)!

So I think if we want to keep our IT / development happy, we can probably relatively easily provide test cases to them.

What would those cases have to test for?

I can think of two things off the top of my head:

  • availability of data — do I have everything I need on a given page available when the page loads?
  • events — do events that I listen to on a given page actually fire?

The thing is: I would love to test a lot more than those two things, but for IT / dev, they cover pretty much anything they can break and are therefore all we need to test for them!

Data Elements

At this point, and I don’t want to reveal too much too early, I use CasperJS to run tests that check all Data Elements on a page. I am essentially halfway there!
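
To give you an idea, such a test could look roughly like this in CasperJS (the URL and the Data Element name are made up):

casper.test.begin('Data Element "Page Name" resolves', 1, function(test) {
    casper.start('http://www.example.com/', function() {
        var pageName = this.evaluate(function() {
            // runs inside the page, where _satellite is available
            return _satellite.getVar('Page Name');
        });
        test.assertTruthy(pageName, 'Page Name Data Element is not empty');
    });
    casper.run(function() {
        test.done();
    });
});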

For IT / dev, those tests should be part of the build process or the continuous integration, which with CasperJS, they are not. I therefore have to get into selenium. Yet another tool. *sigh*

For me and your friendly marketer, I prefer a standalone tool. I want to be able at any moment to push a button and see a green bar, independently of anything else, like Marketing likes it.

I guess the Data Element tests will have to be made using a tool, as well as selenium. And then the tool has to do more in-depth tests that check what is actually sent into Analytics, or even what comes back out of it.

I just wanted to throw this out here, partly to give you an update, partly to solicit some response.

What do you think? Am I doing the right thing here? Is this a waste of time?

I know that I am not alone, at least. My friend Lukáš Čech is working on something similar.

Notes

Of course you could tell me off for being a couple years late. After all, TDD is dead. Or is it?

My take: in web analytics, we have never done anything remotely akin to TDD, and the agile manifesto is for us what the Internet is for German politicians: “Neuland” (“uncharted territory”).

In essence, while you (being a developer) might know what agile means, your friendly marketer and all the agencies and consultants she works with decidedly don’t. I’m asking you to help us. Small steps. We need it.


Filed under: Automation, DTM, Javascript, Principles Tagged: adoption, casperjs, charles, complexity, debug, DTM, selenium, s_code.js, tag manager, tdd

DTM – How to Amend an Existing Analytics Setup


(Let me state right here at the beginning that judging by the feedback I got from my beta testers, er, editors, this article may not be for the faint of heart. If you think Javascript should be treated with respect, you might want to stop reading.)

Have you ever seen the ‘[x] Page code is already present’ in the setup of the Adobe Analytics Tool in DTM?

[Screenshot]

Page code is already present

Ever wondered what that does?

You probably guessed (correctly), that it is meant for situations where your web site has a working deployment of Adobe Analytics (or “SiteCatalyst” or “Omniture” or whatever you are calling it) in place and you want to put DTM on it. You would then at some point remove the legacy code and rely fully on DTM.

While both Analytics and DTM are live on your site, you can build Page Load Rules (PLRs) that mimic the existing implementation, so that when you take away the legacy code, all you have to do is switch off the “[ ] Page code is already present” option and you’re done.

Why do you have to switch it off?

Well, as long as the box is ticked, Page Load Rules will execute, all right, but they won’t actually have any effect on your tracking.

With the option enabled, DTM sort of executes all PLRs “in a sandbox”, meaning whatever events or “variables” it sets will be on a “shadow s object”, not connected to your page-level s object at all.

Note: Event-based Rules (EBRs) and Direct Call Rules (DCRs) work fine, you can use them as you usually would. The switch only affects PLRs.

But…

What if you want to use DTM to amend your existing tracking?

I can think of two ways of doing that, and I freely admit there may be more.

  1. Set events and “variables” via Javascript
  2. Copy events and “variables” from DTM

We have all done the first method for some time now. It works fine, but it has one big problem: you must know Javascript to implement, maintain, and change it.

I therefore proudly present method 2: copying values from DTM.

Method 2

If you debug what DTM actually does with the ‘[x] Page code is already present’ option turned on, you’ll realise that it really acts as usual, meaning it happily goes through all the PLRs and does everything you tell it to do.

It does, however, apply it all to an s object that is not the one already present on the page.

Bummer, hm?

It would be great to be able to use DTM’s graphical UI to amend the existing tracking…

So let’s see: DTM does actually execute the PLRs… we also still have the s.doPlugins method… oh!

Why don’t we copy all the values from DTM into the page-level s object using the s.doPlugins method?

Would that work?

Oh, yes!

Prerequisite

You have to find out the ID of your Analytics Tool first.

Go to your page, open a console, then type

_satellite.getToolsByType("sc");

[Screenshot]

getToolsByType call in FF

If you now click the “more” link of the result you got, you’ll see an “id” attribute somewhere.

Copy the id.

[Screenshot]

The id as seen in FF

That’s all you need.

Please note that this thing only works if your _satellite.pageBottom() call is above the call to s.t()!

Code

Now put the following code into your s.doPlugins method:

s.usePlugins=true;
function s_doPlugins(s) {
    // ugly hack ahead!
    var g = _satellite.tools.c7b417741f9f0d2435c6dd064ad9fc12;
    var gev = g.events;
    var gvb = g.varBindings;
    if (gev && gev.length > 0) {
        for (var i in gev) {
            s.events = s.apl(s.events,gev[i],",",1);
        }
    }
    if (gvb) {
        for (var b in gvb) {
            s[b] = gvb[b];
        }
    }
}
s.doPlugins=s_doPlugins;

Make sure you are loading the apl plugin!

Explanation?

The _satellite object in the DOM contains data about all the configured tools. Inside the Adobe Analytics Tool, there are two arrays that are interesting for us: events & varBindings.

[Screenshot]

Events & varBindings

Those two arrays contain the events and “variables” set by DTM at page load, so this is where we’ll copy them from.

Lines 7 — 11 copy the events, while lines 12 — 16 copy all “variables” (props and eVars).

More

Since the code is pretty generic, it should really be turned into a plugin, don’t you think?

While we’re at it, why not add a couple of bells, and a whistle?

Overwrite Switch

With the first version of the code, we’re simply copying every “variable” that has been set in DTM. There may be situations where we might not want to overwrite existing values, right?

So let’s add a switch!

We’ll also turn the code into a function while we’re at it. And, just because we can, we’ll add two checks: a) that DTM is actually there, and b) that the ‘[x] Page code is already present’ switch is indeed enabled.

The code then looks like this:

s.mergeDTMShadowData = function(toolID, overwrite) {
	// overwrite defaults to true
	if (typeof overwrite === 'undefined' || overwrite === null) {
		overwrite = true;
	}
	// check if DTM is present
	if (typeof _satellite !== 'undefined') {
		var analyticsTool = _satellite.tools[toolID];
		// does the tool initialise the library itself? "initTool" is
		// false when '[x] Page code is already present' is ticked
		var enabled = true;
		if (analyticsTool.settings.initTool !== undefined && analyticsTool.settings.initTool === false) {
			enabled = false;
		}
		if (enabled === false) {
			// copy events
			var eventArray = analyticsTool.events;
			if (eventArray && eventArray.length > 0) {
				for (var ev in eventArray) {
					s.events = s.apl(s.events, eventArray[ev], ',', 1);
				}
			}
			// copy props & eVars
			var varArray = analyticsTool.varBindings;
			if (varArray) {
				for (var variable in varArray) {
					if (overwrite || typeof s[variable] === 'undefined' || s[variable] === null) {
						s[variable] = varArray[variable];
					}
				}
			}
		}
	} else {
		console.log('DTM not found, bailing out...');
	}
}

And when we turn it into a plugin, it looks like this:

s.mergeDTMShadowData=new Function("t","o",""
+"if(typeof o==='undefined'||o===null){o=true;}if(typeof _satellite"
+"!=='undefined'){var a=_satellite.tools[t];var e=true;if(a.setting"
+"s.initTool!==undefined&&a.settings.initTool===false){e=false;}if("
+"e===false){var b=a.events;if(b&&b.length>0){for(var c in b){s.eve"
+"nts=s.apl(s.events,b[c],',',1);}}var d=a.varBindings;if(d){for(va"
+"r f in d){if(o||typeof s[f]==='undefined'||s[f]===null){s[f]=d[f]"
+";}}}}}else{console.log('DTM not found, bailing out...');}");

Feel free to copy the plugin. You can use it in your s.doPlugins method like this:

[Screenshot]

Call to plugin in s.doPlugins

Always more

I have one big issue with the whole thing: in order to make this work, I must edit the original s_code.js or AppMeasurement.js file: I must add the plugin and the call to it.

Not good.

I mean, yes, you can do that while you’re deploying DTM alongside your Analytics deployment, but let’s be honest: we didn’t think of that back when we did that, right? DTM is already there, and there are no slots in any of the next milestones.

What I really want is to inject all of this into an existing page with DTM.

Hm…

The top answer on http://stackoverflow.com/questions/9134686/adding-code-to-a-javascript-function-programatically comes to the rescue.

Since the PLRs will still execute, any Javascript in a PLR will as well.

So, we’ll write a little bit of JS that caches the original s.doPlugins method, overwrites it, then calls the plugin plus the original method in the new method.

Say again?

Well, we want the original s.doPlugins to run, but we want to add a call to the plugin before it does, correct?

We can build a function that does exactly that. How will that function be called? Because we’ll call it “s.doPlugins”, of course.

The complete solution now looks like this:

s.mergeDTMShadowData=new Function("t","o",""
+"if(typeof o==='undefined'||o===null){o=true;}if(typeof _satellite"
+"!=='undefined'){var a=_satellite.tools[t];var e=true;if(a.setting"
+"s.initTool!==undefined&&a.settings.initTool===false){e=false;}if("
+"e===false){var b=a.events;if(b&&b.length>0){for(var c in b){s.eve"
+"nts=s.apl(s.events,b[c],',',1);}}var d=a.varBindings;if(d){for(va"
+"r f in d){if(o||typeof s[f]==='undefined'||s[f]===null){s[f]=d[f]"
+";}}}}}else{console.log('DTM not found, bailing out...');}");

s.doPlugins = (function() {
  var original = s.doPlugins; // cache the original method
  return function() {
    s.mergeDTMShadowData("c7b417741f9f0d2435c6dd064ad9fc12",true);

    // now call the original
    return original.apply(this,arguments);
  };
}());

Put that into the third-party/Javascript section of a PLR and you’re done. No changes are needed to the existing legacy code! None!

Always more after that

Call me a perfectionist, but I could give you a couple of improvements off the top of my head.

The easiest one: the “true” in the call to our plugin should not be a constant, but rather be a Data Element (_satellite.getVar("Overwrite Existing Vars")), shouldn’t it?

And while we’re at it: do you always deploy the whole site in one go? I have seen people add parts to it, and those parts would be different, use different templates and such.

It is entirely thinkable that parts of your site use legacy tracking, while other parts track solely with DTM.

So the aforementioned Data Element should itself take into account the template, maybe URL, maybe something in the data layer…

Plus you’d be well-advised to make a second DE; one that enables or disables the whole mechanism in a similar way.

And do we really need the plugin?

I made it because I wanted to plug it into an s_code.js file. Now that the code sits in DTM, I could just put it straight in, no? And it really wouldn’t have to be minified beyond recognition.

Ah, too many things to do, too little time.


Filed under: DTM, Integration, Javascript, Page Code, Plugins, Tips Tagged: appmeasurement.js, data layer, debug, DTM, eVar, event, implementation, javascript, prop, s_code.js, tag management, variables

Basic Tracking – Remix (contains DTM)


In March 2013, I wrote an article on Basic Tracking, showing you the minimal set of code on a page that is needed for tracking the page in Adobe Analytics.

We’re in 2015, now, we have DTM, so it is time for a refresh.

This article will give you the minimal setup needed for tracking a page using DTM and Analytics.

There are two parts here: some Javascript on your page, plus a bit of setup to be done in DTM. Your friendly marketer might be doing the latter, but I guess the task might just as well fall into your lap.

You’ll notice that compared to the really quite basic steps in the old posting, this one seems like a lot more work. Consider it the equivalent of writing stuff yourself versus using a framework: the initial effort is higher, but above a certain level of complexity, you’ll get paid back later in the project.

Or: frameworks rarely make sense if all you want is a “Hello World!”, which is really what basic tracking is.

Basic Tracking

For tracking to work, you have to include DTM into your pages. It is technically relatively easy to do, because all you need are two lines of code, one in the <head> of the page, the other right before the </body> tag.

This goes into the <head>:

<script src="//assets.adobedtm.com/xyz/satelliteLib-abc.js"></script>

Note: yours will have some hex numbers in place of the “xyz” & “abc”, 40 characters each.

And this goes before the </body> tag:

<script type="text/javascript">_satellite.pageBottom();</script>

You can get this code from DTM itself, from the “Embed” tab in your Web Property. Or maybe your friendly marketer or a consultant will send them to you.

To keep in line with the other posting, I want to state here that while this is viable, you can improve it quite a lot. We’ll get to that later.

Basic Configuration

In DTM, the minimal configuration includes:

  • creating a Web Property
  • adding and configuring the Adobe Analytics Tool
  • adding and configuring a Page Load Rule

The Web Property serves as the space that manages what you deploy on the pages as well as the users able to do so.

Within the Web Property, “Tools” encapsulate the functionality of some Javascript-based solutions. Today, there are six tools available: Adobe Analytics, Adobe Target, Adobe Audience Manager, Marketing Cloud ID Service, Google Analytics, and Google Universal Analytics.

[Screenshot]

Adding a Tool

As with most Tag Management Systems, you can deploy pretty much anything that uses HTML and/or Javascript using DTM. The Tools are just pre-packaged to make it easier to do so.

Remember all those postings talking about how to do unspeakable things with Page Load Rules and Event-based Rules? Those rules are where you configure what exactly it is the Tools should do.

To put that together: the minimum viable configuration of DTM includes one Web Property and at least one rule. For our purposes (Analytics), it also includes one Tool, Adobe Analytics, and one Data Element.

If you’re confused about the configuration of the Adobe Analytics Tool, I’d suggest you keep it simple.

For now, all I want to tell you is that you have to set the report suite ID in the Tool:

[Screenshot]

Setting the Report Suite ID

DTM can log into the Marketing Cloud or Analytics and pull available report suite IDs for you, or you can just manually type them.

[Screenshot]

Setting the Report Suite ID manually

This is the equivalent of setting the old s_account variable in your s_code.js or AppMeasurement.js file.

Data Element & Page Load Rule

Next we need a Data Element for the page name. Adobe Analytics identifies what it tracks by the s.pageName “variable”, so we better give everything we track a page name.

In DTM, the facility that creates and lets you use values is called a Data Element.

You’ll find them under “Rules”.

[Screenshot]

How to find Data Elements

Go ahead and create one. If you’re good, you’ll already have the name of the page on your pages, in your data layer, of course.

If that’s so, then your “Page Name” Data Element can be as simple as this:

[Screenshot]

Simple Page Name Data Element

Chances are that you haven’t. Chances are you will somehow determine the name of the page based on the URL or some other data embedded into the page. Chances are you’ll use Javascript to do so.

In that case, you’d set the type to “Custom Script” and put some code into DTM that magically computes a page name.

[Screenshot]

Custom Page Name Data Element
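
In case the screenshot is hard to read, such a Custom Script could be as simple as this sketch (it assumes a CEDDL-style digitalData object and falls back to the URL path):

// Custom Script body of a "Page Name" Data Element
if (window.digitalData && digitalData.page && digitalData.page.pageInfo) {
    return digitalData.page.pageInfo.pageName;
}
// fallback: derive something readable from the URL path
return document.location.pathname.replace(/\//g, ":");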

Next thing is our Page Load Rule.

Click on “Page Load Rules” on the left hand side, then create one.

Call it anything you like, “Normal Page Load Rule” works well.

Within the PLR, all we need to do is add some instructions to the “Adobe Analytics” part of the rule, such as setting the page name.

[Screenshot]

Normal Page Load Rule

And that’s it, almost.

Note that when you look at the list of your Rules later, you can quickly see which Tools a Rule is using.

[Screenshot]

View of the Rules with Tools

All 4 PLRs in the screenshot use the Analytics Tool. The first two PLRs also have custom tags. None of the 4 PLRs uses Target or GA.

Publish

Now that you have created the Web Property, Tool, Data Element, and PLR, you should head over to your site and test them.

But wait! Nothing is happening!

Well, for now everything we did is pending. It has to be approved, then published.

But: you can test it right now, using Chrome or Firefox, and the famous Switch plugin.

Set the plugin to “Staging”, and off you go.

[Screenshot]

Switch Plugin, Staging and Debug

I switched staging and debug on here, which means DTM will execute non-approved things (in my browser!), and it will be pretty verbose in the Console.

Once you have tested and are happy, it’s time to approve and publish what you built.

[Screenshot]

Approvals in DTM

Then, head to the “Overview” tab and press the “Publish Queue” button, which will lead you to this screen where you can publish your changes so they go live (meaning the setup will be applied for all users of the site).

[Screenshot]

Find the Publish Queue

[Screenshot]

Publish in DTM

We have now configured tracking equivalent to what I called “(Better) Minimal Code” in the old posting.

We can obviously do a lot better!

Data!

Analytics is all about detecting important actions and segmenting your audiences. For that, you need to a) detect actions (d’uh) and b) collect data that allows your friendly marketer to create segments.

Where does that data come from?

Well, some of it comes from the browser, some comes from the context (e.g. geo-location based on the visitor’s IP address). Most of it, though, should come from your site!

And how does it get transferred from your site into Analytics? You know me, so you know the answer: a CEDDL-compatible data layer would be awesome!

But DTM itself helps as well. Since the idea is that not-so-technical people use and administer Rules, Data Elements are supposed to allow for easy access to data.

How the Data Element gets the data is up to the definition of the DE, so it is entirely possible to create a configuration that makes your friendly marketer happy without having any data layer, much less a CEDDL-compatible one.

In real life, sites don’t yet have proper data layers, so Data Elements are a much welcome addition to our tool box. I suggest you help your friendly marketer, maybe by building DEs for her, or maybe by building your site in a way that makes it easy for her to make DEs herself (think IDs on HTML elements and such).

The more relevant data she has in DEs, the more she can use in the Rules she builds and ultimately in her analysis.


Filed under: DTM, Integration, Javascript Tagged: appmeasurement.js, DTM, implementation, javascript, simplicity, s_code.js

When exactly does doPlugins run?


In a meeting a couple of months ago, André Urban (a seasoned colleague of mine) and I got into a sudden and unexpected argument about when exactly s.doPlugins is called.

I was convinced it was called as a result of calling either s.t() or s.tl(), while he says it was also called everytime a visitor clicks anything on the page.

Funny enough, he was able to show me on a simple page.

Since this was completely against what I thought I knew about how the darn thing works, I decided to dive into this a bit deeper. Let’s see who was right!

doPlugins

As a refresher, the doPlugins method is defined inside the s_code.js or AppMeasurement.js file. It is a callback method that allows for a considerable amount of automation in your tracking setup.

Traditionally (as in: pre-DTM), we used it to put Javascript that would grab campaign tracking codes from URLs, call plugins, or do other useful things. Anything that would potentially apply to all pages on your site, we used to handle in the doPlugins method.

While most of this can be done directly in DTM (or whatever Tag Management System you use), there are still cases where you’d use doPlugins, the biggest one of those being a migration. It is so much easier to keep your doPlugins method when you move everything into DTM than it would be to rebuild it all in DTM.

So, where do we start our investigation?

Use the Source

Step 1: let’s look into the code!

[Screenshot]

Call to doPlugins in H.27.5 s_code

[Screenshot]

Call to doPlugins in AppMeasurement

The good old H.27.5 code contains exactly one call to s.doPlugins. Given how the H-code is built, I am inclined to first look at the new 1.5.3 code, which is much more easily readable. The three occurrences I find in there are a comment, a clause in an if-statement, and a single call.

Cleaning up the Javascript a little bit shows that the call sits within the t() method, and that tl() ultimately calls t(), too. Sounds like I was right!

[Screenshot]

Call to doPlugins in AppMeasurement (beautified)

Then again… I am assuming that t() & tl() are only ever called explicitly, either on the page itself, or by some code injected into it. What if I’m wrong?

A short burst of hectic CTRL-F’ing shows me that there are indeed calls to track() inside the AppMeasurement.js file itself! What on earth?

But it makes sense.

Remember that you can configure whether you want to automatically track exit links, downloads, and normal links for clickmap? Well, somehow, the code needs to be able to do that. And so it installs event handlers, and inside those, it calls track() (which is an alias of t()). Makes perfect sense, in fact.

I’m still thinking I’m right at this point. And I’m not going to go into all the methods that I find. Some of them are beyond my grasp. I’m not a programmer, after all.

Debugger

So, let’s look at a page through the debugger or the Console.

We’ll make this easier by making a doPlugins method which outputs something into the Console:

s.usePlugins=true
function s_doPlugins(s) {
	// purely for testing purposes
	console.log("doPlugins has been called!");
}
s.doPlugins=s_doPlugins

Then we’ll load the page, maybe click on it a couple of times, and check the Console.

[Screenshot]

Call to doPlugins due to click

Oops… doPlugins does indeed get called when I click on the text!

So there are clearly circumstances under which doPlugins is called even though no actual tracking call is being made. Sounds like André was right, then.

Although I could still claim that because t() is called, I was “sort of right”, right? Right?

Also, setting

s.trackDownloadLinks=false
s.trackExternalLinks=false
s.trackInlineStats=false

makes no difference!

Legacy

I don’t like finding out that something I took for granted for years was utterly wrong. So maybe this is an AppMeasurement.js thing? Maybe the old H-code did/does it differently? Let’s test…

[Screenshot]

No call to doPlugins

Ha! It does not call doPlugins on clicks!

This is a very good illustration of why it is dangerous to trust an expert, isn’t it? Some experts (like my colleague André) keep up with the changes and adjust their internal model, while others (like me in this case) don’t.

Actually, I suggest that experts shouldn’t trust themselves, ever. Always go back on your assumptions and your experience when you work with technology!

Update: André can easily reproduce the “doPlugins called every time” behaviour with H.27.5 code, so this is even more of a reason to double-check your deployment! Seems like we haven’t found out exactly what is going on, yet.


Filed under: AppMeasurement, Javascript, Plugins, Principles Tagged: javascript, plugins, s_code.js

Everything but the TMS


You know what the problem with luxury is? You get used to it. And then it becomes hard to enjoy the little things.

I am so used to tag management by now that when I see how popular the mini-series on the s_code.js file still is, I keep wondering why people would even look at it anymore.

There is a good answer of course: doing things the hard way helps you understand them a lot better than if all you have to do is ticking a bunch of check boxes.

So sorry, DTM, but this one is not about you. Go do some shopping, or spend the day in a spa.

Today, we’re trying to implement Analytics, Target, plus the Marketing Cloud ID Service, the old-fashioned way.

Preparation

(In which you run around the building, frantically, trying to find someone who has access to the bits of information you need)

On top of a test page somewhere, we need:

  1. the AppMeasurement.js file
  2. a Report Suite ID
  3. a tracking server
  4. the VisitorAPI.js file
  5. a Marketing Cloud Org ID
  6. a Target client code
  7. the at.js file

Ask your friendly marketer. If she looks at you like you’re from Jupiter, ask your analyst, or try to figure out those values from your live site.

You can easily find rsid and tracking server by looking at the URL of any tracking call. See Debugging – the URL for details.

[Screenshot]

Tracking Call with tracking server and rsid

The Marketing Cloud Org ID should also be visible. Find it using the Debugger Chrome plugin, or look at any call that goes to the demdex.net domain and see whether you can find a “d_orgid” parameter. You have to decode the latter!

[Screenshot]

Marketing Cloud ID Call with Org ID

Now for the client code for Target, I guess the best bet is to go to any page that has a running campaign. You can either look at the “Request Domain” field in the Debugger, or use the Console again, go to Network, find any request towards tt.omtrdc.net, then use the first part of the hostname:
[Screenshot]

Find the Target ClientCode

Analytics

I feel we should start with Analytics, because.

To get the AppMeasurement.js file (the modern equivalent of what used to be called s_code.js), you can either ask your friendly marketer, or — if you have a login for Analytics — go to Admin > Code Manager and download it yourself.

[Screenshot]

Finding the Code Manager

Take the file that hides behind the “Javascript (new)” link.

[Screenshot]

List of Libraries in Code Manager

Note that you have a link on the right that points to the implementation guide. Bookmark that. It does often help.

The zip file that you get contains the core JS file, some sample HTML, some additional modules and the JS file for the Marketing Cloud ID Service.

[Screenshot]

Files downloaded from Code Manager

For now, all we want is the core JS file. Put it somewhere on your server.

Time to load that file into your test page.

The file as it is does not work, really. There is some configuration to do. The corresponding page in the implementation guide has sample code, which you can just copy and amend.

Note: we used to keep everything together within the s_code.js file, and we can do the same with AppMeasurement.js, but it is also possible to put all the configuration into a separate file. That’s what DTM does! Why would we do that? Easier upgrades of the core JS, of course. And, once again, better control and insight.

I shall copy the sample code into the page itself, for clarity.

For now, I do not include the Marketing Cloud ID Service, just Analytics.

I specify my rsid and tracking servers into the code, then put it all into a <script> tag after loading AppMeasurement.js.
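
As a sketch, the result could look like this (rsid, tracking server, and file path are placeholders):

<script src="/js/AppMeasurement.js"></script>
<script>
var s_account = "myrsid";
var s = s_gi(s_account); // s_gi() is defined in AppMeasurement.js
s.trackingServer = "metrics.example.com";
s.charSet = "UTF-8";
s.currencyCode = "EUR";
</script>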

Half-way there.

At the bottom of the page, I add some JS code. All I want is to set a page name and a minimum of other things.

Stop me if you think that you’ve heard this one before, but in the spirit of keeping it simple, I shall keep it simple.
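
Something like this, with an obviously made-up page name:

<script>
s.pageName = "test page 1";
s.t(); // send the page view
</script>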

If you want to see what I did, the three pages are online.

Let’s load that page, see if it works.

[Screenshot]

Test page with Analytics

YES!

Two things to note in that screenshot:

  1. The Analytics debugger tells us we should implement the Marketing Cloud ID Service. Haven’t done that so far, so fair point.
  2. The DTM Debugger on the top right is grey. Poor thing… no DTM here.

Oh, and why does it load twice? It doesn’t. The Analytics server just told it to reload so it could try and set the s_vi cookie.

In my case, that failed. Chrome doesn’t like 3rd-party cookies too much. My visitor ID for Analytics will therefore be the one found in the Javascript-set s_fid cookie.

… talking about visitor ID …

Marketing Cloud ID Service

… let’s add the Marketing Cloud ID Service!

This step is pretty simple. We just have to include the VisitorAPI.js file and add a line to our in-page Javascript block.

Quick question: in what order do we load AppMeasurement.js and VisitorAPI.js?

Answer: doesn’t matter.

What does matter, though, is that you load both before the code that configures them and creates the s object.
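
In sketch form, with placeholder Org ID, rsid, tracking server, and paths, the top of the page could look like this; the s.visitor assignment is what ties the two together:

<script src="/js/VisitorAPI.js"></script>
<script src="/js/AppMeasurement.js"></script>
<script>
var visitor = Visitor.getInstance("0123456789ABCDEF01234567@AdobeOrg");
var s = s_gi("myrsid");
s.visitor = visitor; // Analytics picks the Marketing Cloud ID up from here
s.trackingServer = "metrics.example.com";
</script>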

If we load the page now, you can see that the debugger shows a Marketing Cloud Visitor ID. Success!

[Screenshot]

Test page with Analytics & MCID

Btw, if you do not use the Analytics Debugger in Chrome (or you simply don’t use Chrome), you can check whether the Marketing Cloud ID Service loads properly by checking the “mid” parameter on the Analytics tracking call.

[Screenshot]

Test page with Analytics & MCID

The “mid” parameter is only set by the Marketing Cloud ID Service, so if you see it, the Service is working.

You probably noted that I followed the recommendation in the implementation guide and added a prop that tracks the presence of the Marketing Cloud ID Service. Do that!

Target

In order to add Target, we need to find, download, and add the at.js file. The best way to download is described on Download at.js Using the Target Download API in the help section.

[Screenshot]

Finding the correct admin server

As you can see, with my client code “janexner”, I need to pull the file from “admin16”. Yours will be different, of course.

Following those steps will get you a “pre-configured” at.js file, so no need to configure anything else. Just include the file into your page.

A quick check in the console after loading the test page reveals that at.js is being loaded.

[Screenshot]

Test page with Analytics, Target & MCID

Shall we add an mbox and see whether we can run a quick test? Well, we don’t need to add anything to the page. The global mbox can be added safely from within the Target user interface.

How about integrating Target with Analytics?

For the latest versions of both solutions, the integration happens in the Adobe data centres, rather than in the visitors’ browsers. There is a provisioning team that will work with your friendly marketer to get this set up.

So we’re done!

Notes

This was a fun exercise. Now we know what dependencies we need to be aware of between the different pieces of Javascript. There are surprisingly few, actually. Just load VisitorAPI.js and AppMeasurement.js first, then your configuration code, and at.js should be loaded last, as close as possible to </head>.

For you, this might be helpful if you want to do it like I did (manually), or if your TMS doesn’t have defaults for the Adobe tools and you need to make sure everything loads as desired.

After writing all of the above, I had the chance, last week, to change to an account that has a working integration between Analytics and Target (“A4T”), so I took the liberty of changing over to that account. The screenshots and examples in this article are no longer up to date, but the 3 linked pages are, and you should easily be able to understand what I changed.

Big bonus: I can (and do) now target some visitors on page 3.

Having done all of this, and having learned something on the way, there is now but one thing to do: rip it up and start again!

And this time, do yourself a favour, use a tag manager!


Filed under: AppMeasurement, Integration, Javascript Tagged: appmeasurement.js, eVar, event, implementation, javascript, pagename, report suite, rsid, simplicity, s_code.js, s_vi, visitor ID

Discussion – Customize Analytics in DTM


There is no “standard deployment”.

A couple of weeks ago, I was talking with a colleague who hadn’t worked with DTM before. He asked me lots of intelligent questions about his somewhat non-standard requirements, and I replied with what I think is the best approach.

Whilst doing that, I kept thinking about “the other approach”, which I usually mentally connect with my colleague André Urban, or my ex-colleague Jenn Kunz, mostly because they are the most vocal about “the other way” in my immediate vicinity.

(And by “vicinity”, I mean phone, twitter, and slack, of course. Ah, the Internet! How poor we were ‘ere the Internet!

Anyways.)

I decided that the fact that there were essentially two mutually exclusive ways was a good excuse for a blog battle, er, an online discussion. Jenn Kunz versus myself, battling it out. We’re nerds, right? There must be a best way, the best way!

So, here I am, explaining the advantages of my way. Jenn Kunz will do the same for hers.

And while I’m sure there are pros and cons for everything, and that we will eventually settle on “it depends”, I am still hopeful, deep in my heart, that I will be able to convince Jenn. Wish me luck!

Standard

DTM, and other tag managers, make it very easy to deploy a standard Analytics implementation on any web site. Thing is, though, I have never seen a standard Analytics implementation.

The moment you start leaving the realm of “clean, easy, and standard”, you are diving into troubled, deep, murky waters. You don’t know whether there are any dangerous undercurrents, but I bet you you’ll encounter some sooner or later.

I’m not saying that standard implementations are useless. They are not. What I’m saying is that there are a lot of moving pieces and dependencies. In reality, people tend to not adapt other systems to our Analytics needs, so we end up adapting to theirs.

And sometimes, it’s just that you’re moving to tag management, and you have so many useful lines of Javascript in your s.doPlugins() that it’d take ages to rebuild everything properly. So you’d rather just transport it over as is.

No matter why you want to do it, here is how I suggest you go about it:

Aim for “as standard as possible”, then add a doPlugins() method that handles the non-standard requirements.

How?

When you add the Analytics Tool in DTM, leave everything as standard. Select your staging and production report suites, your char set and currency codes, maybe set the tracking servers, and possibly even some global variables.

But leave the library on “Managed by Adobe”! That is the crucial bit!

Why? Because that way, updates to the core Javascript are only a couple of clicks away, and DTM will actually remind you when updates are available.

So far so standard. How about all those fancy other things you want to do?

Let’s split those up into two groups:

  • Things that you used to put into the s_code.js file — plugins, the doPlugins() method, extra stuff that I need on every page
  • Things that you used to embed directly into pages, be it some or all of them

Whatever falls into group 1 can very easily and directly be transferred into the editor window under “Customize Page Code” in the Analytics Tool.

[screenshot]

Your BFF – Customize Page Code

To be honest, some of that can simply be built in DTM directly. Things like picking up a Campaign Tracking code, or setting some variable to some value read from a cookie and such. Move those! Reduce your code where you can!

Things that you used to do in code on pages, specific or not, should be moved into Page Load Rules.

Try to mimic the effect, not the code, by which I mean if your code used to set a variable, try doing it in the normal DTM way, with a Data Element and a simple assignment in the DTM UI.

If that is not possible, then go for “3rd party / Javascript”.

There is really not that much more to say, but let’s maybe look at some specific things.

Dynamic Choice of rsid

Yes, can be done, and falls under group 1, above.

I wrote about this last year, but in short you make a Data Element, Custom, that calculates the rsid, then add a call to s.sa(_satellite.getVar("Report Suite ID")); into the doPlugins() method, so the rsid is properly set on every call that goes out.
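
In context, that could look like this (“Report Suite ID” being the Custom Data Element just mentioned):

function s_doPlugins(s) {
    // set the report suite on every tracking call
    s.sa(_satellite.getVar("Report Suite ID"));
    // ... whatever else your doPlugins does ...
}
s.doPlugins = s_doPlugins;
s.usePlugins = true;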

Campaign Tracking

In almost all cases that I have seen, campaign tracking could be moved from doPlugins() straight to DTM, be it into the Analytics Tool, or a PLR that fires everywhere.

Simply build a Data Element and specify that in the PLR or Tool. The DE can handle pretty much all the complexity you need.
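
As an example, a Custom Script Data Element for a classic cid parameter could look like this sketch (the parameter name is an assumption):

// Custom Script body of a "Campaign" Data Element
var match = document.location.search.match(/[?&]cid=([^&#]+)/);
return match ? decodeURIComponent(match[1]) : "";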

If you’re using the Channel Manager plugin, handle it like group 1, with a doPlugins() method.

[screenshot]

Campaign Tracking in the Analytics Tool using a DE

TBD

There will be more to come, as I intend to show ways of building things that Jenn brings up.

Exception

There is one exception, where Jenn’s way is the only way: when you need to add a Module. The ActivityMap module is part of the Analytics code that DTM delivers out of the box, but things like the Media Module are not.

So if you need that specific module, better do what Jenn says.

But in all other cases, follow my approach.

Why?

I am putting all of my eggs into a single basket: there is nothing more important than the ability to quickly and routinely update the libraries.

Any solution that requires me or my customer to copy Javascript code from A to B is potentially … no, scratch that: it will cause missed or delayed updates, and to me that is as annoying as it is unnecessary.

Given that the s_code.js or AppMeasurement.js file has 4 sections, one of which comes from Adobe, it makes more than a little sense to split those parts apart. With a tag manager doing that for you, you’d be a fool not to!


Filed under: AppMeasurement, Javascript, Opinion Tagged: analytics, appmeasurement, appmeasurement.js, complexity, DTM, implementation, plugins, s_code.js, tag manager

Debugging 2017.02


It has been almost 3 years since I wrote my article on debugging. I read through it the other day, and couldn’t help but notice that my workflow has indeed changed. Time for a new article!

I’ll stick with client-side this time, as this is where the changes have happened.

The second big difference is that in most projects, I no longer debug only Analytics. These days, I need to check that a bunch of other tools do the right thing, too — at least the Marketing Cloud ID Service and Target are more and more often deployed alongside Analytics.

Thankfully, tools have been updated, and new tools have popped up.

Adobe Debugger

[screenshot]

The Adobe Debugger

The good old Adobe Debugger has been updated over time, and it is still the most comprehensive tool for Marketing Cloud users/analysts/developers. It still shows human-readable output, and it covers pretty much anything you need, the notable exception being the MCID Service.

Adobe Analytics Debugger plugin

The tool I use most often is without a doubt the Adobe Analytics Debugger Chrome plugin by Tomas Balciunas.

[screenshot]

The Adobe Analytics Debugger Plugin

The plugin simply prints a readable version of any tracking request into the Console in Chrome.

It does cover the MCID Service, and makes some reasonable recommendations.

It does currently handle Analytics only, and it is available only for Chrome, but hey. I’d wager that alone would be a good reason to switch to Chrome.

DTM Switch plugin

The folk at Search Discovery have created the DTM Switch plugin for Chrome & DTM Switch add-on for Firefox.

This tool only does two things, but those two things are pretty important: it allows me to set DTM to debug and to switch between production and staging DTM libraries.

Absolutely crucial if you build anything in DTM.

And yes, there is an argument that I’m relying way too often on the debug output DTM gives me, plus a sprinkling of _satellite.notify() calls. It feels like those days when I was programming in BASIC on the Commodore 64, way before I knew what a debugger was. Yeah, I’m that old.

Console / Firebug

While technically, nothing has changed that much in the last 2.5 years, there are more useful things you should know about using the browsers’ built-in consoles/inspectors/whathaveyous.

First, how do you find out whether the browser loads all the tools that you have deployed? Here we go:

_satellite should return an Object, not undefined.

_satellite.appVersion will tell you what library version you are using.

[screenshot]

Testing for DTM

Visitor should return a function, not undefined.

Visitor.version tells you the version of the Marketing Cloud ID library you have deployed.

[screenshot]

Testing for the MCID Library

AppMeasurement should return a function, not undefined.

You can see the version of the AppMeasurement code right there, too.

[screenshot]

Testing for the AppMeasurement Library

s_c_il should return an array of Marketing Cloud-related objects, such as the MCID instance, the tracking object, the Activity Map module, and possibly others.

[screenshot]

Checking s_c_il

Each one of those objects can be queried, of course. Most likely, you’ll need to figure out report suite IDs…

[screenshot]

Finding the Report Suite ID through s_c_il
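
If you prefer typing over clicking, a one-liner along these lines does the same, assuming the tracking objects in the instance list expose their account property as in the screenshot:

s_c_il.filter(function(o) { return o && o.account; }).forEach(function(o) { console.log(o.account); });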

mbox should return a function, not undefined.

mboxVersion will tell you what version of the Target library you are using.

[screenshot]

Testing for the Target Library (mbox.js)

You might have moved from mbox.js to at.js, in which case:

adobe.target should return an object, not undefined.

adobe.target.VERSION tells you what version of the code you’re on.

[screenshot]

Testing for the new Target Library (at.js)

If you like working in the Console, or you simply cannot install any plugins, check out the list of handy one-liner debugging helpers, and feel free to download the Inofficial Adobe Marketing Cloud debugging Cheat Sheet!

Payloads

Beyond knowing that some code has been loaded, and that it is the right version, you might want to know whether the code actually does the right thing — sends the right data, to be exact.

Apart from the tools mentioned above, you can use the Network console that your browser sports.

The online help explains what the parameters mean quite nicely: for the MCID Service, for Analytics, and for Target.

Two things I want to point out on top of that:

  1. How do you find those requests in the endless list?
  2. Anything specific I should look for?

Finding the interesting requests is easy: use the filter (“demdex”, “b/ss”, “mbox” work well).

[screenshot]

Network panel with filter “demdex”

[screenshot]

Network panel with filter “b/ss”

[screenshot]

Network panel with filter “mbox”


And here is a really important bit in all this: does the integration between MCID Service, Analytics, and Target actually work? How do I tell?

Well, it’s all in the parameters.

Check for the “mid”. If it is there on all three calls (and it is the same!), then you’re good.

[screenshot]

MCID Service request with mid

[screenshot]

Analytics tracking call with mid

[screenshot]

Target call with mid

If not, then check tracking servers, versions, cookies.

Charles

I have used and liked Charles for some time now. Nevertheless, I have recently changed the way I use it, slightly.

Before, I used to rely heavily on the “Map Local” and “Map Remote” features when I wanted to inject/modify Javascript in the pages of those sites I was working on.

These days, I really only use the “Replace” feature.

I guess the change reflects that I almost exclusively deploy with DTM these days, so all I need Charles for is loading the Adobe-hosted DTM library in situations where the site hosts the libraries but doesn’t update them often enough.

Notes

How do you know whether someone prefers looking at the tracking requests in the Network tab, or using the Adobe Analytics Debugger plugin?

Easy: check whether their console is at the bottom of the window, wide, or on the right side, slim but tall. The latter is good for the Adobe Analytics Debugger plugin, whereas the former is better if you look at the requests in the Network tab.

And regarding the hip/strange title of this article: it occurred to me that debugging is something that will evolve all the time, meaning this won’t be the last article on the subject. Versioning therefore is necessary. Well, maybe not in yyyy.mm format, but it looks cool.


Filed under: AppMeasurement, DTM, Integration, Javascript, Page Code, Tips Tagged: appmeasurement, debug, DTM, implementation, javascript, MCID Service, s_code.js, tag manager, target, testing