Progressive Profiling: A Cure for Poor Lead Quality and Form Friction

You wouldn’t propose to a complete stranger, would you? So why ask prospects for a commitment without first getting to know them a little?

If your landing pages and web forms don’t bring in usable prospect data, how can you follow up with prospects and nurture them into qualified leads? How can you pass them to sales for further development?

The struggle is real. As much as 40 percent of B2B leads suffer from poor data quality, and bounce rates for lead generation pages average between 30 and 50 percent. The problem is twofold:

  • Prospects don’t take the time to complete web forms accurately
  • Prospects don’t complete web forms at all

In many cases, marketers either ask for all of their lead data up front or build a custom form for each content asset based on what buying stage they think will match prospects’ intent.

Neither approach is ideal.

On the one hand, you could scare prospects away by demanding too much information up front.

On the other hand, it’ll take a lot of extra work to build a custom form for each of your content assets.

How can this be avoided?

Progressive profiling is a lead acquisition technique that involves requesting one or two pieces of information at a time, starting with basic firmographics (e.g. company size, job title, industry) and leading into deeper, more targeted questions later in the relationship.

Done correctly, it can help you increase conversion rates and lead accuracy by lowering the psychological barrier to form submission — all while keeping forms simple for a better overall user experience.

Progressive profiling case study

Countless B2B organizations are already using progressive profiling to improve their conversion rates and the quality of their data profiles. The Eaton Corporation, a power management company based in Dublin, Ireland, used progressive profiling to improve engagement with a recent campaign aimed at IT professionals.

With the help of Oracle’s Marketing Cloud, they combined dynamic form fields with a personalized offer and brought in more than 5,000 new prospects… with 48 pieces of information for each. This surpassed their original goal by 276 percent.

So how exactly does progressive profiling work?

Instead of trying to build a complete lead intelligence profile from a single interaction or build a dozen different forms, you use marketing automation and dynamic web forms to request only the information you lack.

Here’s an example of how the process could work:

  1. A prospect visits your website and downloads a whitepaper.
    They submit their name, email address and company name through your web form.
  2. After receiving a few drip emails, the same person clicks a CTA to register for a webinar.

    A dynamic web form now asks for their industry, company size and a custom question about their software needs. Dynamic web forms present unique fields to each prospect based on the information you already have (or don’t have) in your database.

  3. Not long after the webinar, the lead requests a video demo of your product.
    You now ask them to specify a budget range and implementation time frame.

You’ll need to set up rules for progressive profiling in your marketing automation platform. Most systems from leading vendors (Pardot, Eloqua, Marketo, HubSpot, Act-On) provide some kind of dynamic web form feature, although it’s not always labeled as such.

Here’s an example from Act-On — pay special attention to the “Visitor Form Rules” field:


As you can see, Act-On uses “if + then” rules to make sure no lead capture forms appear redundant to the prospect; you only want to ask for the pieces of information you don’t have.
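The same “if + then” logic can be sketched in a few lines of plain JavaScript. This is only an illustrative sketch, not any vendor’s actual API: the field names, the `askQueue` ordering and the `pickFormFields` helper are all hypothetical.

```javascript
// Fields we eventually want, ordered from basic firmographics to deeper questions.
// (Hypothetical names for illustration — not from Act-On or any other platform.)
var askQueue = ['name', 'email', 'company', 'industry', 'companySize', 'budgetRange'];
var fieldsPerVisit = 2; // only ask for a couple of pieces of information at a time

function pickFormFields(knownProfile, queue, limit) {
  // IF we already have a value for a field, THEN skip it...
  return queue
    .filter(function (field) { return !knownProfile[field]; })
    .slice(0, limit); // ...and never ask for more than `limit` fields at once
}

// First visit: we know nothing, so ask for the basics
pickFormFields({}, askQueue, fieldsPerVisit); // ['name', 'email']

// Later visit: name, email and company are on file, so ask deeper questions
pickFormFields({ name: 'A', email: 'a@b.co', company: 'Acme' }, askQueue, fieldsPerVisit);
// ['industry', 'companySize']
```

In a real marketing automation platform this selection happens server-side against your lead database, but the rule is the same: render only the fields the prospect hasn’t already given you.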

When should you use progressive profiling?

It may seem like a cure-all solution at first glance, but progressive profiling isn’t always the best choice.

Not every prospect or lead will interact with your content frequently enough to move through a multi-stage lead capture process. According to a study by Demand Gen Report, only 38 percent of buyers view more than four pieces of content from the vendor they ultimately choose.

Worst case scenario: a lead only completes one web form, and you only get their name and email address — hardly enough to constitute an MQL (marketing qualified lead). In light of this reality, marketers should consider when and why they should employ progressive profiling:

  • If the goal is to gradually convert casual site visitors into sales-ready leads with a series of escalating offers, it’s probably a safe bet.
  • But what about visitors who are already in the decision stage of their buying process? If they click a bottom-funnel CTA, do you want to squander the opportunity by only capturing basic info, the kind you might capture from a newsletter subscriber?

    Of course not. Even if your initial conversion rate rises, your final conversion rate (after a couple of nurture emails, another offer, another form) will be the same, or even lower.

Here’s the short version: you shouldn’t apply progressive profiling to all of your content campaigns just because it seems intuitive.

Instead, take a hybrid approach:

  • Build more extensive web forms for bottom-of-the-funnel assets and offers, and use progressive profiling to make sure you don’t request the same information twice.
  • For your first-time visitors and blog subscribers, the barrier to entry should still be low, but if there’s an opportunity to capture a qualified lead from a single touch point, take it.

The challenge

The challenge of lead acquisition is similar to the challenge of the sale: you must convince people that your offer (product/service/content) is worth some kind of investment (money/time/information).

While progressive profiling doesn’t necessarily improve your value proposition, it does lower the psychological barrier to entry. By minimizing your “asks” and spacing them out over time, you can build incremental trust with prospects and leads, which adds up to higher conversion rates and more new customers.

Just keep one thing in mind: while progressive profiling is a great technique, it won’t carry your inbound lead generation efforts on its own. As with any campaign, there are many moving parts, and they all must work in concert. To get the most out of progressive profiling, invest at least as much energy into optimizing your content and the landing pages that go with it.



Progressive Enhancement and Data Visualizations

The following is a guest post by Chris Scott. Chris has written for us before – always on the cutting edge of things. This time he shows us a new charting technique his company offers. But it’s based on something old: the fundamentals of the web itself.

This month my company, Factmint, released its suite of Charts. They’re really cool for a number of reasons (not least their design) but one is of particular interest to web developers: they all transform simple HTML tables into complex, interactive data visualizations. This approach is a perfect example of Progressive Enhancement – for browser compatibility, accessibility and SEO you have a simple table, for modern browsers a beautiful graphic.

I wanted to explore the techniques we used in a bit more detail. So here goes…

A recap on Progressive Enhancement

There are a few core concepts of PE. Here’s the list on Wikipedia:

  • basic content should be accessible to all web browsers
  • basic functionality should be accessible to all web browsers
  • sparse, semantic markup contains all content
  • enhanced layout is provided by externally linked CSS
  • enhanced behavior is provided by unobtrusive, externally linked JavaScript
  • end-user web browser preferences are respected

Basically, implement a simple, cross-browser, pure HTML solution. Once that’s done you have a safe minimum-functionality page. Now, build upon that with CSS and JavaScript.

This article is going to look at using these concepts to produce data visualizations.

Data visualizations are backed by data

It’s painfully obvious but worth stating: data visualizations are based upon some underlying data. That data doesn’t need to be lost when building a graphic (as it would be in a raster image, for example). Nor does the data need to be in JSON, the format most charting libraries use.

Going back to the first three of the “core concepts” of PE, the basic functionality should be some kind of markup encoding that data. An HTML table or list, for example.

A working example

To illustrate the idea, we are going to Progressively Enhance a timeline towards an SVG visualization. The data might be something like this:

  • 1969: UNICS
  • 1971: UNIX Time-Sharing System
  • 1978: BSD
  • 1980: XENIX OS
  • 1981: UNIX System III
  • 1982: SunOS
  • 1983: UNIX System V
  • 1986: GNU (Trix)
  • 1986: HP-UX
  • 1987: Minix
  • 1989: NeXTSTEP
  • 1989: SCO UNIX
  • 1990: Solaris
  • 1991: Linux
  • 1993: FreeBSD
  • 1995: OpenBSD
  • 1999: Mac OS X

Basic content

There are a number of ways that this data could be encoded. I’m going to go with a definition list – I think that is semantically accurate and will display well without much styling. Let’s start with the base (no-enhancement) case:

<dl class="timeline">
  <dt>1969</dt><dd>UNICS</dd>
  <dt>1971</dt><dd>UNIX Time-Sharing System</dd>
  <dt>1978</dt><dd>BSD</dd>
  <dt>1980</dt><dd>XENIX OS</dd>
  <dt>1981</dt><dd>UNIX System III</dd>
  <dt>1982</dt><dd>SunOS</dd>
  <dt>1983</dt><dd>UNIX System V</dd>
  <dt>1986</dt><dd>GNU (Trix)</dd>
               <dd>HP-UX</dd>
  <dt>1987</dt><dd>Minix</dd>
  <dt>1989</dt><dd>NeXTSTEP</dd>
               <dd>SCO UNIX</dd>
  <dt>1990</dt><dd>Solaris</dd>
  <dt>1991</dt><dd>Linux</dd>
  <dt>1993</dt><dd>FreeBSD</dd>
  <dt>1995</dt><dd>OpenBSD</dd>
  <dt>1999</dt><dd>Mac OS X</dd>
</dl>

That looks something like this:

Unstyled DL in Chrome

Okay, so it’s not pretty, but it will be clear and accessible on all browsers and it should be helpful for search engines, too.

Enhancing the layout

Now, let’s use a stylesheet to improve the layout. This could be taken much further, but, for the purpose of this article, let’s just make some simple improvements:

html {
  font-family: sans-serif;
}

dl {
  padding-left: 2em;
  margin-left: 1em;
  border-left: 1px solid;

  /* nested selector: assumes a preprocessor like Sass */
  dt:before {
    content: '-';
    position: absolute;
    margin-left: -2.05em;
  }
}

Now it renders as:

That definitely looks more like a timeline but there are some clear problems with it. Most importantly, the timeline points should be distributed based upon their relative date (so 1983 and 1986 aren’t next to each other). Also, I’d like the timeline to run horizontally to avoid the need for scrolling (in a production case I’d check for the best orientation).

Enhancing behaviour

Now for the fun bit. We are going to use externally linked, unobtrusive JavaScript to render an SVG timeline and replace the definition list with that. The final visualization will look something like this:

Objective: SVG timeline enhanced by JavaScript

Unobtrusive JavaScript

We are going to be producing an SVG graphic with this script, so the most important thing we can do – to keep the script unobtrusive – is to check that the browser supports SVG:

function supportsSvg() {
  return document.implementation &&
    (
      // the SVG 1.1 feature URI; some older engines answer to the plain 'SVG' token instead
      document.implementation.hasFeature('http://www.w3.org/TR/SVG11/feature#BasicStructure', '1.1') ||
      document.implementation.hasFeature('SVG', '1.0')
    );
}

That should give pretty accurate feature detection. Alternatively you could go for Modernizr.

Now, we will check for SVG support before we do anything: if the browser supports SVG we’ll draw a pretty visualization and hide the definition list, otherwise we will leave the list in place.

if (supportsSvg()) {
  var timeline = document.querySelector('.timeline');
  timeline.style.display = 'none'; // We don't need to show the list

  // draw the graphic...
}

Extract the data

The key principle to this approach of PE data visualizations is that the data format is the semantic markup, so let’s parse our data…

function getDataFromDefinitionList(definitionList) {
  var children = definitionList.children;

  var yearIndex = {};
  var data = [];
  var currentYear = null;

  for (var childIndex = 0; childIndex < children.length; childIndex++) {
    var child = children[childIndex];

    if (child.nodeName == 'DT') {
      currentYear = child.innerText;
    } else if (child.nodeName == 'DD' && currentYear !== null) {
      // an explicit undefined check, because an index of 0 is falsy
      if (yearIndex[currentYear] === undefined) {
        yearIndex[currentYear] = data.length;
        data.push({
          year: currentYear,
          values: []
        });
      }

      data[yearIndex[currentYear]].values.push(child.innerText);
    }
  }

  return data;
}

There’s quite a lot going on there but the essence is simple: iterate over the children, use the DTs as the year and the DDs as the releases. The fact that definition lists allow more than one DD for each DT makes it slightly more complex, hence the lookup to add additional releases from the same year to the same entry in the data array.

The output from that will be something like:

[
  {
    "year": "1969",
    "values": ["UNICS"]
  }
  ...
]

It’s really useful to have an array, like this, as opposed to an object map. It will be much easier to iterate over the entries later.
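To see why, compare iterating the two shapes side by side (the entries here are just a short illustrative slice of the timeline data):

```javascript
// As an array of { year, values } objects, order is guaranteed
// and iteration is a single forEach in timeline order:
var asArray = [
  { year: '1969', values: ['UNICS'] },
  { year: '1971', values: ['UNIX Time-Sharing System'] }
];

asArray.forEach(function (datum) {
  // draw datum.year and datum.values...
});

// As an object map keyed by year, we'd need Object.keys() plus an
// explicit sort, because key order isn't something drawing code
// should rely on:
var asMap = { '1969': ['UNICS'], '1971': ['UNIX Time-Sharing System'] };
var years = Object.keys(asMap).sort();
years.forEach(function (year) {
  // look up asMap[year] and draw...
});
```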

Prepare the canvas

To draw this visualization we are going to use SnapSVG. It’s an SVG drawing library by Dmitry Baranovskiy, who also wrote Raphael.js. First, we will need to create an SVG element:

var SVG_NS = 'http://www.w3.org/2000/svg';

function createSvgElement() {
  var element = document.createElementNS(SVG_NS, 'svg');
  element.setAttribute('width', '100%');
  element.setAttribute('height', '250px');

  element.classList.add('timeline-visualization');

  return element;
}

Snap can then wrap the element. Something like this:

var element = createSvgElement();
var paper = Snap(element);

Drawing the timeline

Now for the fun part!

We’re going to write a method that iterates over our data object and draws onto the SVG element. Those two things (the data and the SVG element) will be the arguments:

function drawTimeline(svgElement, data) {
  var paper = Snap(svgElement);
  data.forEach(function(datum) {
    // draw the entry
  });
}

The simplest timeline would just draw a dot for each:

function drawTimeline(svgElement, data) {
  var paper = Snap(svgElement);
  var distanceBetweenPoints = 50;
  var x = 0;
  data.forEach(function(datum) {
    paper.circle(x, 200, 4);
    x += distanceBetweenPoints;
  });
}

That should give 17 evenly distributed dots. But our main objective was to space the dots properly, so let’s do something a little more interesting:

function drawTimeline(svgElement, data) {
  var paper = Snap(svgElement);

  var canvasSize = paper.node.offsetWidth;

  var start = data[0].year;
  var end = data[data.length - 1].year;

  // add some padding
  start--;
  end++;

  var range = end - start;

  paper.line(0, 200, canvasSize, 200).attr({
    'stroke': 'black',
    'stroke-width': 2
  });

  data.forEach(function(datum) {
    var x = canvasSize * (datum.year - start) / range;
    paper.circle(x, 200, 6);
  });
}

Cool: now our dots are distributed and there’s a line underneath them. No information yet, though, so let’s add some labels.

function drawTimeline(svgElement, data) {
  var paper = Snap(svgElement);

  var canvasSize = paper.node.offsetWidth;

  var start = data[0].year;
  var end = data[data.length - 1].year;

  // add some padding
  start--;
  end++;

  var range = end - start;

  paper.line(0, 200, canvasSize, 200).attr({
    'stroke': 'black',
    'stroke-width': 2
  });

  data.forEach(function(datum) {
    var x = canvasSize * (datum.year - start) / range;
    paper.circle(x, 200, 6);

    paper.text(x, 230, datum.year).attr({
      'text-anchor': 'middle'
    });

    var averageIndex = (datum.values.length - 1) / 2;
    var xOffsetSize = 24;
    datum.values.forEach(function(value, index) {
      var offset = (index - averageIndex) * xOffsetSize;

      paper.text(x + offset, 180, value)
        .attr({
          'text-anchor': 'start'
        })
        .transform('r -45 ' + (x + offset) + ' 180');
    });
  });
}

Dealing with years that have more than one entry adds a little complexity, but nothing we can’t handle: the datum.values.forEach loop spreads the entries out horizontally, centered around the dot. A rotation is also applied to stop the labels from overlapping (arguably bad practice, since it adds cognitive load; a better solution would be to always show key releases and reveal the others on hover, but that’s not the point of this article).
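To make that offset arithmetic concrete, here is a worked example for a year with two releases (like 1986, which has GNU and HP-UX), using the same formula as the loop:

```javascript
// Worked example of the label-spreading arithmetic for one year's entries.
var values = ['GNU (Trix)', 'HP-UX'];
var averageIndex = (values.length - 1) / 2; // (2 - 1) / 2 = 0.5
var xOffsetSize = 24;

var offsets = values.map(function (value, index) {
  return (index - averageIndex) * xOffsetSize;
});
// offsets is [-12, 12]: the two labels sit 12px either side of the dot's
// x position, so the pair as a whole stays centered on the timeline point.
```

For a year with a single entry, averageIndex is 0 and the only offset is 0, so lone labels land exactly on the dot.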

Finally, let’s add a little style for the SVG elements:

svg.timeline-visualization {
  circle {
    fill: white;
    stroke: black;
    stroke-width: 2;
  }
}

Bringing it all together

Now we just have to stitch our components together in the if-statement:

if (supportsSvg()) {
  var timeline = document.querySelector('.timeline');
  timeline.style.display = 'none';

  var data = getDataFromDefinitionList(timeline);

  var svgElement = createSvgElement();
  timeline.parentNode.insertBefore(svgElement, timeline);

  drawTimeline(svgElement, data);
}

Here’s the complete code in a Pen:

See the Pen gbYqRW by chrismichaelscott (@chrismichaelscott) on CodePen.


There are lots of benefits to this approach to data visualizations: the markup is semantic, it’s SEO friendly, it’s accessible to screen readers, and it is progressively enhanced from a simple element supported by virtually every browser.

There are some things to consider though. If you only have one set of static data, it’s probably not worth the effort – just build an SVG by hand. If you have loads and loads of data this might not be the right approach either, as traversing the DOM tree may not be efficient enough.

If you are trying to do standard charts, like pie, doughnut, bubble or line charts, it’s definitely worth checking out Factmint Charts. They are really beautiful and we’ve put a lot of thought into their design.

Progressive Enhancement and Data Visualizations is a post from CSS-Tricks