Month: August 2017

Your angry tweets may require libel insurance


(Bloomberg) — Courtney Love spent almost six years in litigation, accused of libeling her former attorney in a Twitter post that was visible for less than 10 minutes. She paid a reported $780,000 in settlements as a result of two other defamation suits, both stemming from Twitter missives Love wrote about designer Dawn Simorangkir. “Twitter should ban my mother,” her daughter, Frances Bean Cobain, once said. Love, an actress, musician and the widow of the late Nirvana frontman Kurt Cobain, inherited the band’s publishing rights. She can afford to take on defamation lawsuits. You probably can’t. Given how much of our lives is spent venting on social media, especially in the age of Trump, the…

This story continues at The Next Web
Social Media – The Next Web

First steps with LUIS, the Language Understanding Intelligent Service

There are already several articles on my blog about the Microsoft Cognitive Services. One topic that's still missing is LUIS – the Language Understanding Intelligent Service. So today, I'll give you a brief introduction to LUIS so you'll know what it is and what it can do. As a demo, we'll try to teach an AI to act like a restaurant where we want to order some food. Hello LUIS!

What is LUIS?

LUIS is part of the Microsoft Cognitive Services, an extremely powerful yet easy-to-use set of APIs that developers can use to tap into the power of machine learning. As the name suggests, LUIS is specifically made to understand language. Although that might look simple at first, it's actually a lot harder than you might think: sentences and utterances can be phrased completely differently yet mean exactly the same thing. Consider the following examples:

  • I want to order food
  • Bring me a hamburger
  • I want to place an order

Although the sentences look totally different, they have the same intent: order some food. Normally, you would try to parse each sentence (maybe with regular expressions), but that costs a lot of effort and you probably won't get it right. LUIS helps with this problem, since it's able to understand the intent behind an utterance and respond accordingly.

Creating your first LUIS model

To get started, head over to LUIS.ai and navigate to My Apps, then select New App. The name "App" might be a little confusing here, since what we're essentially creating is a LUIS model. LUIS supports a couple of different languages, so make sure you select the correct one before moving on.

Intents and Entities

Now that we have our LUIS model, we can start training. As stated at the beginning of the article, every utterance comes down to an intent: the action you're trying to achieve. Navigate to Intents, select Add Intent and give it a name (in my example: OrderFood). Now start typing utterances that should trigger your intent. In other words, simply write down a couple of sentences you would expect people to say.

LUIS Intents

But wait, do I need to add another utterance for each type of food that I sell? Luckily, you won't need to. Simply select the word that's variable (in my case, the word hamburger) and add it as an Entity. LUIS can now be trained to understand different kinds of food when they appear in the same kind of utterance.

LUIS Entities

Don’t forget to Save all your changes before moving on.

Train and test

Once you've got your intents and entities in place, head over to Train & Test. This page allows you to train LUIS on your intents, utterances and entities. Simply press the Train Application button and let LUIS do its magic.

Now we can use Interactive Testing to check how the model behaves. Simply test some utterances, change some words and check how LUIS responds. The most important thing here is the Top scoring intent, since that's the intent LUIS will assign to a matched utterance. Also note that LUIS automatically maps words to the assigned entity when it recognizes them.

Train LUIS

In the example above, I used the following utterance: i want to eat a hotdog. When I created the OrderFood intent in the previous step, I never used the exact phrase "i want to eat a", nor did I ever use the word hotdog. Still, LUIS is able to recognise this sentence as an OrderFood intent and even parses hotdog as the correct entity. That's because the system learns from all kinds of phrases and recognizes entities by itself.
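When you later call the published endpoint from code, the prediction comes back as JSON containing the top scoring intent and the recognized entities. Here's a minimal sketch of picking those out of a response; the exact response shape may differ slightly between LUIS API versions, and the score value below is made up for illustration:

```python
import json

# Example LUIS prediction response; shape follows the v2-style endpoint,
# and the score value is made up for illustration
response = json.loads("""
{
  "query": "i want to eat a hotdog",
  "topScoringIntent": { "intent": "OrderFood", "score": 0.92 },
  "entities": [ { "entity": "hotdog", "type": "food" } ]
}
""")

intent = response["topScoringIntent"]["intent"]
foods = [e["entity"] for e in response["entities"] if e["type"] == "food"]

print(intent, foods)  # OrderFood ['hotdog']
```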

(Re)training through the API

To integrate LUIS with your automated build, you'll need to call its Programmatic API. There are several methods available to do so. These let you import and export the LUIS model programmatically, so you won't have to do so through the portal. Note that all these methods require an Ocp-Apim-Subscription-Key in the request header, which holds your subscription key. The following methods are most commonly used to (re)train the model.

  • Export Application – Exports a LUIS application to JSON format
  • Import Application – Imports an application to LUIS; the application’s JSON should be included in the request body.
  • Add Batch Labels – Adds a batch of labeled examples to the specified application
  • Train – Gets the trained model predictions for the input example

You can use a tool like Postman or the API Testing Console from Cognitive Services to call the API. Below is an example of Postman calling Export Application on the API (note that the {appId} and {key} have been removed in this example).
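As a rough sketch of the same Export Application call in code: the region, app id and key below are placeholders, and the route shape is an assumption on my part, so verify the exact URL in the API console for your subscription.

```python
import json
import urllib.request

# All values below are placeholders - fill in your own region, app id,
# version and subscription key
REGION = "westus"
APP_ID = "YOUR-APP-ID"
VERSION_ID = "0.1"
SUBSCRIPTION_KEY = "YOUR-KEY"

# Assumed route shape of the programmatic Export Application method
url = (f"https://{REGION}.api.cognitive.microsoft.com"
       f"/luis/api/v2.0/apps/{APP_ID}/versions/{VERSION_ID}/export")

request = urllib.request.Request(
    url, headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY})

def export_model():
    """Download the LUIS model and parse it as JSON."""
    with urllib.request.urlopen(request) as resp:
        return json.loads(resp.read().decode("utf-8"))

# model = export_model()  # performs the actual HTTP call
# print(model["name"])
```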

LUIS Postman

An exported model is a JSON file containing all the data from the LUIS model. Here's a (trimmed down) example from my RestaurantLuisModel.

{
    "luis_schema_version": "2.1.0",
    "versionId": "0.1",
    "name": "RestaurantLuisModel",
    "desc": "",
    "culture": "en-us",
    "intents": [
        { "name": "OrderFood" }
    ],
    "entities": [
        { "name": "food" }
    ],
    // Removed data
    "utterances": [
        {
            "text": "i would like to order a hamburger",
            "intent": "OrderFood",
            "entities": [
                { "entity": "food", "startPos": 24, "endPos": 32 }
            ]
        },
        {
            "text": "i want to place an order",
            "intent": "OrderFood",
            "entities": []
        }
    ]
}
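A small detail worth knowing: startPos and endPos mark where the labeled entity sits inside the utterance text, and endPos is inclusive. A quick sketch to verify this against the exported model:

```python
import json

# Trimmed-down exported model from above (the "Removed data" comment
# is omitted here, since JSON itself doesn't allow comments)
model = json.loads("""
{
  "name": "RestaurantLuisModel",
  "utterances": [
    {
      "text": "i would like to order a hamburger",
      "intent": "OrderFood",
      "entities": [ { "entity": "food", "startPos": 24, "endPos": 32 } ]
    }
  ]
}
""")

utterance = model["utterances"][0]
entity = utterance["entities"][0]

# endPos is inclusive, so slice one character past it
labeled = utterance["text"][entity["startPos"]:entity["endPos"] + 1]
print(labeled)  # hamburger
```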

Conclusion

LUIS makes it fairly easy to understand language and identify intents with utterances and entities. Since it's a system you can train, it gradually gets better by learning from past data. Although not discussed in this article, LUIS can even propose Suggested Utterances, making your LUIS model even better. Note that not all supported languages have prebuilt entity support, which might cause LUIS to behave differently than you would expect.

In my next article, I'll dive into integrating LUIS with the Microsoft Bot Framework to create a smart chat bot. This combination is extremely powerful when you're creating a chat bot that uses natural language. Stay tuned!

Want to learn more about this subject?
Join my “Weaving Cognitive and Azure Services”-presentation at TechDaysNL 2017!

The post First steps with LUIS, the Language Understanding Intelligent Service appeared first on Marcofolio.net.

Marcofolio.net

No, Your Brand Isn’t Too Big for Thought Leadership


As we get older, we grow out of many things: tricycles, swing sets, sandboxes, and even Legos. We expect it to happen because it makes sense.

What doesn’t make sense is when a company thinks it’s outgrown the value of thought leadership. Not only can well-positioned thought leadership, fostered by strategic content marketing, expand any business’s branding and sales, but it can also solidify a foundation of trust, education, longevity, and awareness.

Content Rules for Businesses Small and Large

Startups and small- to mid-sized enterprises tend to lean heavily toward thought leadership, and almost half of executives think it makes them more competitive. They’re right, of course: Content allows businesses to communicate expertly with the public in a quick and cost-effective manner.

Seventy percent of consumers get information about businesses and product recommendations by reading articles—not by sifting through advertisements—and the average American spends about 490 minutes of each day immersed in digital media. In other words, thought leadership is a huge opportunity for you to get your brand and your ideas in front of millions of people.

The issue comes when the organization grows. Scaling naturally decreases flexibility, making thought leadership more complex. It’s easier for consumers to see the human side of a small mom-and-pop shop than a giant international corporation.

One company that’s bucking the trend and hitting gold with thought leadership is outdoor recreation company REI. As a retailer, its name is globally recognized, but it has suffered online due to heightened competition. To try to woo customers and compete in the online market, it created niche content that provides value to its customers. The company promotes itself as the place to learn about outdoor activities, which helps drive consumer confidence and sales.

The key to using thought leadership effectively as a large business is to remain authentic. You don’t want your customers to see a faceless corporate entity; you want to appear relatable, genuine, and, above all, a go-to resource for valuable insights.

Developing a Thought Leadership Agenda

When creating a thought leadership strategy, you need to start with a concrete agenda. Creating content as a large company requires a different approach than small businesses might utilize, and it’s important to keep a few things in mind so your strategy is cohesive, effective, and authentic:

1. Consider what your audience wants to hear.

In order to create content, you need to know your readers. Develop target customer personas, going into as much depth as you can until you have a good grasp on the problems they’re facing and how you can use your expertise to solve them.

This is also a great opportunity to get your entire team involved in the content creation process. Every employee can bring something unique to the table, and those diverse insights will help customers see the human side of your company.

The most authentic marketing strategies are audience-centric, meeting your customers where they are to deliver the message they need to hear at that moment. Solving problems for them will create a bond with your customers and give them a reason to come back to you when they’re facing another problem.

2. Don’t forget about your brand promise.

For young startups, a brand promise is all you have to convince customers that you’re worth their time and money. But as your company grows, it can sometimes become more difficult to uphold that promise. It’s sometimes easy to get so caught up in your daily activities that you begin to lose focus on what that promise was in the first place.

A thought leadership strategy, however, is the perfect opportunity to ensure you haven’t lost sight of that promise. Customers will know when you’re trying to fool them, and they won’t hesitate to call out a piece of content that contradicts your brand promise. Consistency is key to creating a successful brand image, and it involves regular checkups to ensure your message remains true to your company values.

3. Create concrete guidelines.

Your content needs to have standards to create a cohesive strategy across your various channels. And while each thought leader should be sharing her own thoughts and ideas in her own voice and style, everyone should be conforming to a specific set of guidelines so that the overall message remains consistent.

Organization should also be a key component of your guidelines. A simple checklist of concepts, keywords, or links that each piece of content should contain will ensure that your content is consistent—even when you have multiple thought leaders.

In addition, an editorial calendar is crucial to keep everything organized and allow all content creators the chance to collaborate and brainstorm. There are many tools available to keep your content organized, from the basic Google Calendar to blog management tools like WordPress to more advanced resources like Kapost and Trello.


4. Take advantage of influencers.

Content creation doesn’t come naturally to many people, and large organizations, in particular, often struggle with content marketing. In fact, one study found that 81 percent of CMOs believe their businesses struggle when coming up with new ideas for thought leadership content.

This is where content curation and influencers come into play. When you’re open to sharing content from aligned companies and individuals, you can expand your reach and enhance your narrative.

The key word here is “aligned.” If you choose to work with influencers to help you create or promote your content, make sure they’re doing it because they truly believe in your brand—not just for the paycheck. This will give your content a more genuine and authentic tone because it’s coming from someone whose mission meshes with your brand’s.

For example, Capitol Records recently partnered with Olay and Mode to create a few behind-the-scenes videos with Michelle Jubelirer, COO of Capitol Music Group and a powerful female role model in the music industry. The videos featured Olay products and gave young women advice about living their best lives while balancing their careers with parenthood and self-care, and they went viral—garnering more than 10 million views. They were so powerful because they were authentic and emotional, and they gave viewers something they wanted from brands and people they trusted.

While scaling companies can expect to someday outgrow their workspaces, vendors, and perhaps even clients, they are never too large to incorporate smart thought leadership strategies into their sales and marketing mix. Meeting your customers where they are drops their resistance and opens the door to amazing, profitable, and genuine relationships.

Get a weekly dose of the trends and insights you need to keep you ON top, from Jay Baer at Convince & Convert. Sign up for the Convince & Convert ON email newsletter.


Convince and Convert: Social Media Consulting and Content Marketing Consulting

Statistical Design in Online A/B Testing


A/B testing is the field of digital marketing with the highest potential to apply scientific principles, as each A/B experiment is a randomized controlled trial, very similar to ones done in physics, medicine, biology, genetics, etc. However, common advice and part of the practice in A/B testing are lagging by about half a century when compared to modern statistical approaches to experimentation.

There are major issues with the common statistical approaches discussed in most A/B testing literature and applied daily by many practitioners. The three major ones are:

  1. Misuse of statistical significance tests
  2. Lack of consideration for statistical power
  3. Significant inefficiency of statistical methods

In this article I discuss each of the three issues listed above in some detail, and propose a solution inspired by clinical randomized controlled trials, which I call the AGILE statistical approach to A/B testing.

1. Misuse of Statistical Significance Tests

In most A/B testing content, when statistical tests are mentioned, statistical significance is inevitably discussed in some fashion. However, much of that content fails to mention a major constraint of classical statistical significance tests such as the Student's t-test: you must fix the number of users you will observe in advance.

Before going deeper into the issue, let’s briefly discuss what a statistical significance test actually is. In most A/B tests it amounts to an estimation of the probability of observing a result equal to or more extreme than the one we observed, due to the natural variance in the data that would happen even if there is no true positive lift.

Below is an illustration of the natural variance, where 10,000 random samples are generated from a Bernoulli distribution with a true conversion rate at 0.50%.

Natural Variance
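You can reproduce this kind of natural variance yourself with a short simulation; the sample counts below are my own illustrative choices:

```python
import random

random.seed(42)

TRUE_RATE = 0.005   # 0.50% true conversion rate
USERS = 10_000      # users in each simulated sample

def observed_rate():
    """Conversion rate observed in one sample of USERS Bernoulli trials."""
    return sum(random.random() < TRUE_RATE for _ in range(USERS)) / USERS

# Even with no intervention at all, observed rates scatter around 0.005
rates = [observed_rate() for _ in range(200)]
print(f"min {min(rates):.4f}  mean {sum(rates) / len(rates):.4f}  "
      f"max {max(rates):.4f}")
```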

In an A/B test we randomly split users into two or more arms of the experiment, thus eliminating confounding variables, which allows us to establish a causal relationship between the observed effect and the changes we introduced in the tested variants. If after observing a number of users we register a conversion rate of 0.62% for the tested variant versus 0.50% for the control, that means we either observed a rare event (one that would occur by chance only, say, 5% of the time), or there is in fact some positive difference (lift) between the variant and control.

In general, the less likely we are to observe a particular result, the more likely it is that what we are observing is due to a genuine effect, but applying this logic requires knowledge that is external to the statistical design so I won’t go into details about that.

The above statistical model comes with some assumptions, one of which is that you observe the data and act on it at a single point in time. For statistical significance to work as expected we must adhere to a strict application of the method where you declare you will test, say, 20,000 users per arm, or 40,000 in total, and then do a single evaluation of statistical significance. If you do it this way, there are no issues. Approaches like “wait till you have 100 conversions per arm” or “wait till you observe XX% confidence” are not statistically rigorous and will probably get you in trouble.

However, in practice, tests can take several weeks to complete, and multiple people look at the results weekly, if not daily. Naturally, when results look overly positive or overly negative they want to take quick action. If the tested variant is doing poorly, there is pressure to stop the test early to prevent losses and to redirect resources to more prospective variants. If the tested variant is doing great early on, there is pressure to suspend the test, call the winner and implement the change so the perceived lift can be converted to revenue quicker. I believe there is no A/B testing practitioner who will deny these realities.

These pressures lead to what is called data peeking or data-driven optional stopping. The classical significance test offers no error guarantees if it is misused in such a manner, resulting in illusory findings – both in terms of direction of result (false positives) and in the magnitude of the achieved lift. The reason is that peeking adds an additional dimension to the test sample space. Instead of estimating the probability of a single false detection of a winner at a single point in time, the test would actually need to estimate the probability of a false detection at multiple points in time.

If the conversion rates were constant that would not be an issue. But since they vary even without any interventions, the cumulative data varies as well, so adjustments to the classical test are required in order to calculate the error probability when multiple analyses are performed. Without those adjustments, the actual error rate will be inflated significantly compared to the nominal or reported error rate. To illustrate: peeking only 2 times results in more than twice the actual error vs the reported error. Peeking 5 times results in an actual error 3.2 times larger than the nominal one. Peeking 10 times results in an actual error probability 5 times larger than the nominal error probability. This has been known to statistical practitioners since at least 1969 and has been verified time and again.
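A quick simulation makes the inflation tangible. The sketch below runs A/A tests (no true lift at all) and counts how often at least one interim look crosses the nominal 95% significance threshold; the exact rates depend on the peeking schedule I chose here, but the inflation with more looks is unmistakable:

```python
import math
import random

random.seed(7)

RATE = 0.05        # identical true conversion rate in both arms (an A/A test)
N_PER_PEEK = 1000  # new users per arm between two looks at the data

def z_stat(conv_a, conv_b, n):
    """Two-sample z statistic for a difference in proportions."""
    p = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * p * (1 - p) / n)
    return (conv_b - conv_a) / n / se

def peeked_false_positive(peeks):
    """True if ANY interim look crosses |z| > 1.96 (nominal 5% level)."""
    conv_a = conv_b = 0
    for look in range(1, peeks + 1):
        conv_a += sum(random.random() < RATE for _ in range(N_PER_PEEK))
        conv_b += sum(random.random() < RATE for _ in range(N_PER_PEEK))
        if abs(z_stat(conv_a, conv_b, look * N_PER_PEEK)) > 1.96:
            return True  # we'd have stopped and declared a winner
    return False

results = {}
for peeks in (1, 5, 10):
    trials = 300
    results[peeks] = sum(peeked_false_positive(peeks)
                         for _ in range(trials)) / trials
    print(f"{peeks:>2} look(s) -> ~{results[peeks]:.0%} false positives")
```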

If one fails to fix the sample size in advance or if one is performing multiple statistical significance tests as the data accrues, then we have a case of GIGO, or Garbage In, Garbage Out.

2. Lack of Consideration for Statistical Power

In a review of 7 influential books on A/B testing published between 2008 and 2014 we found only 1 book mentioning statistical power in a proper context, but even there the coverage was superficial. The remaining 6 books didn’t even mention the notion. From my observations, the situation is similar when it comes to most articles and blog posts on the topic.

But what is statistical power and why is it important for A/B experiments? Statistical power is defined as the probability to detect a true lift equal to or larger than a given minimum, with a specified statistical significance threshold. Hence the more powerful a test, the larger the probability that it will detect a true lift. I often use “test sensitivity” and “chance to detect effect” as synonyms, as I believe these terms are more accessible for non-statisticians while reflecting the true meaning of statistical power.

Running a test with inadequately low power means you won’t be giving your variant a real chance at proving itself, if it is in fact better. Thus, running an under-powered test means that you spend days, weeks and sometimes months planning and implementing a test, but then failing to have an adequate appraisal of its true potential, in effect wasting all the invested resources.

What’s worse, a false negative can be erroneously interpreted as a true negative, meaning you will think that a certain intervention doesn’t work while in fact it does, effectively barring further tests in a direction that would have yielded gains in conversion rate.

Power and Sample Size

Power and sample size are intimately tied: the larger the sample size, the more powerful (or sensitive) the test, in general. Let's say you want to run a proper statistical significance test, acting on the results only once the test is completed. To determine the sample size, you need to specify four things: the historical baseline conversion rate (say 1%), the statistical significance threshold (say 95%), the power (say 90%), and the minimum effect size of interest.

Last time I checked, many of the free statistical calculators out there won't even allow you to set the power and in fact silently operate at 50% power – a coin toss – which is abysmally low for most applications. If you use a proper sample size calculator for the first time, you will quickly discover that the required sample sizes are more prohibitive than you previously thought, and hence you need to compromise either on the level of certainty, the minimum effect size of interest, or the power of the test. You will find suitable calculators in R packages, G*Power, and similar tools.

Making decisions about the three parameters you control – certainty, power, and minimum effect size of interest – is not always easy. What makes it even harder is that you remain bound to that one look at the end of the test, so the choice of parameters is crucial to the inferences you will be able to make at the end. What if you chose too high a minimum effect, resulting in a quick test that was, however, unlikely to pick up on small improvements? Or too low an effect size, resulting in a test that dragged on for a long time, when the actual effect was much larger and could have been detected much quicker? The correct choice of those parameters is crucial to the efficiency of the test.
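As a sanity check on such numbers, here is a minimal normal-approximation sample size calculation for a one-sided two-proportion z-test; real calculators may use slightly different approximations, so treat it as a sketch rather than a reference implementation:

```python
import math
from statistics import NormalDist

def users_per_arm(base_rate, rel_lift, confidence=0.95, power=0.90):
    """Users needed per arm for a one-sided two-proportion z-test
    (normal approximation)."""
    p2 = base_rate * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(confidence)  # one-sided threshold
    z_beta = NormalDist().inv_cdf(power)
    variance = base_rate * (1 - base_rate) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p2 - base_rate) ** 2)

# 2% baseline, 95% confidence, 90% power:
print(users_per_arm(0.02, 0.10))  # ~88,000 per arm to detect a 10% lift
print(users_per_arm(0.02, 0.15))  # ~40,000 per arm to detect a 15% lift
```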

3. Inefficiency of Classical Statistical Tests in A/B Testing Scenarios

Classical statistics inefficiency

Classical tests are good in some areas of science like physics and agriculture, but are replaced with a newer generation of testing methods in areas like medical science and bio-statistics. The reason is two-fold. On one hand, since the hypotheses in those areas are generally less well defined, the parameters are not so easily set and misconfigurations can easily lead to over or under-powered experiments. On the other hand – ethical and financial incentives push for interim monitoring of data and for early stopping of trials when results are significantly better or significantly worse than expected.

Sounds a lot like what we deal with in A/B testing, right? Imagine planning a test with a 95% confidence threshold and 90% power to detect a 10% relative lift from a baseline of 2%. That would require 88,000 users per test variant. If, however, the actual lift is 15%, you could have run the test with only 40,000 users per variant, or just 45% of the initially planned users. In this case, if you were monitoring the results, you'd want to stop early for efficacy. However, the classical statistical test is compromised if you do that.

On the other hand, if the true lift is in fact -10%, that is whatever we did in the tested variant actually lowers conversion rate, a person looking at the results would want to stop the test way before reaching the 88,000 users it was planned for, in order to cut the losses and to maybe start working on the next test iteration.

What if the test looked like it would convert at -20% initially, prompting the end of the test, but that was just a hiccup early on and the tested variant was actually going to deliver a 10% lift long-term?

The AGILE Statistical Method for A/B Testing

AGILE Statistical Method for A/B Testing

Questions and issues like these prompted me to seek better statistical practices and led me to the medical testing field where I identified a subset of approaches that seem very relevant for A/B testing. That combination of statistical practices is what I call the AGILE statistical approach to A/B testing.

I’ve written an extensive white-paper on it called “Efficient A/B Testing in Conversion Rate Optimization: The AGILE Statistical Method”. In it I outline current issues in conversion rate optimization, describe the statistical foundations for the AGILE method and describe the design and execution of a test under AGILE as an easy step-by-step process. Finally, the whole framework is validated through simulations.

The AGILE statistical method addresses misuses of statistical significance testing by providing a way to perform interim analysis of the data while maintaining false positive errors controlled. It happens through the application of so-called error-spending functions which results in a lot of flexibility to examine data and make decisions without having to wait for the pre-determined end of the test.
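To give a feel for how error-spending works, here is a small sketch of two classic spending functions from the clinical trials literature (my own illustration of the general idea, not the exact functions AGILE prescribes). The information fraction t is how much of the planned sample you have observed so far, and the function tells you how much of the total error budget you may have spent by that point:

```python
import math
from statistics import NormalDist

N = NormalDist()
ALPHA = 0.05  # total type I error budget for the whole test

def obrien_fleming(t):
    """O'Brien-Fleming-type spending: very conservative at early looks."""
    z = N.inv_cdf(1 - ALPHA / 2)
    return 2 * (1 - N.cdf(z / math.sqrt(t)))

def pocock(t):
    """Pocock-type spending: spends the error budget more evenly."""
    return ALPHA * math.log(1 + (math.e - 1) * t)

# Both functions spend the full ALPHA exactly at the end of the test (t=1)
for t in (0.25, 0.50, 0.75, 1.00):
    print(f"t={t:.2f}  OBF spent {obrien_fleming(t):.4f}  "
          f"Pocock spent {pocock(t):.4f}")
```

Notice how the O'Brien-Fleming-type function spends almost nothing at early looks, which is what makes early stopping possible without inflating the overall error rate.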

Statistical power is fundamental to the design of an AGILE A/B test and so there is no way around it and it must be taken into proper consideration.

AGILE also offers very significant efficiency gains, ranging from an average of 20% to 80%, depending on the magnitude of the true lift when compared to the minimum effect of interest for which the test is planned. This speed improvement is an effect of the ability to perform interim analysis. It comes at a cost, since some tests might end up requiring more users than the maximum that would be required in a classical fixed-sample test. Simulation results described in my white paper show that such cases are rare. The significant added flexibility in performing analyses on accruing data and the average efficiency gains are well worth it.

Another significant improvement is the addition of a futility stopping rule, as it allows one to fail fast while having a statistical guarantee for false negatives. A futility stopping rule means you can abandon tests that have little chance of being winners without the need to wait for the end of the study. It also means that claims about the lack of efficacy of a given treatment can be made to a level of certainty, permitted by the test parameters.

Ultimately, I believe that with this approach the statistical methods can finally be aligned with the A/B testing practice and reality. Adopting it should contribute to a significant decrease in illusory results for those who were misusing statistical tests for one reason or another. The rest of you will appreciate the significant efficiency gains and the flexibility you can now enjoy without sacrifices in terms of error control.



Online Behavior – Marketing Measurement & Optimization

Marcofolio.net vNext

Last May, Marcofolio.net turned 10 years old. Although I'm not blogging as frequently anymore as back when I started, I'm still dedicated to sharing my passion and inspiring my readers. That's why I'm bringing you Marcofolio.net vNext, a complete overhaul and redesign of my blog. I decided to go for a minimal & clean theme to keep the focus on the most important thing: the content.

I decided to focus this blog on development, split into different categories like Xamarin, Cognitive Services and Web Development. You'll find all the categories at the top of this page.

The logo

The new logo is inspired by lines of code

I’m especially happy with the new logo, proudly showing off at the top of this site. The new logo is inspired by lines of code and was created with my brother Auke from Rocket Media. My previous logo looked a little bit like a fidget spinner, so I’m totally excited to share my vNext logo with you.

The old

The last redesign of Marcofolio.net was from 2009 and could really use a 2017-update. My old blog was running on Joomla!, but moving forward I decided to switch to WordPress. I’ve moved over a couple of unique articles related to coding to clean up the content, but you’re still able to visit the old website. Simply head over to old.marcofolio.net to take a glimpse at the past. If you had any bookmarks, everything should still work!

Your thoughts

Anything you want to see different? Stuff that’s not working? Let me know what you think in the comments or on Twitter! I’m now even more motivated to deliver high quality development articles, so expect them soon. Feel free to subscribe to the feed to make sure you won’t miss out!

The post Marcofolio.net vNext appeared first on Marcofolio.net.

Marcofolio.net

How to Repurpose Blog Posts Into Instagram Albums

Are you looking for Instagram content ideas? Have you considered repurposing your blog content into Instagram albums? Grouping multiple images from a blog post into an Instagram album can bring engaging content to Instagram. In this article, you’ll discover how to combine blog posts into Instagram albums. Why Use Instagram Albums to Repurpose Blog Content? […]

This post How to Repurpose Blog Posts Into Instagram Albums first appeared on .
– Your Guide to the Social Media Jungle

Layering in Additional Insights

In my previous post, we discussed the importance of creating a data strategy and modeling your data. Interestingly enough, in recent research completed with Econsultancy, 46% of ANZ respondents cited integrating data as their key challenge with marketing automation. Once you have your base data model, you may also decide to layer in additional insights based on analysis done either by an in-house team or an agency. This could include data like customized personas based on buying type, or which types of customers are detractors.

Following on from the last post, we will continue with our automotive example: The main data object is the buyer and the secondary data object is accessories or factory options. Beyond the base layer, which is the data that you can collect directly from the buyer on a form or from a third-party provider such as Dun and Bradstreet, we can also start tracking their digital body language or external web analytics data to provide insight into how we can better personalize their experience. Digital body language can be tracked through marketing automation platforms that allow us to monitor and view how a client or prospect is engaging across digital channels.

For example, for buyers that purchased in the last quarter, were there any common engagement criteria that would signal an intent to buy?

Did they visit the site more frequently?

Did they complete a Sales enquiry form?

Did they schedule an appointment with a Sales Rep?

Can we look for similar activity within our prospect database and provide a more targeted communications strategy to drive conversions based on these additional insights?
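One way such a "similar activity" lookup could reduce to code is a simple scoring pass over prospect records; all field names and weights below are hypothetical, purely to illustrate scoring prospects on the engagement signals above:

```python
# Hypothetical engagement records; all field names and weights are
# illustrative only, not from a real marketing automation platform
prospects = [
    {"id": 1, "visits_last_30d": 9, "enquiry_form": True,  "appointment": True},
    {"id": 2, "visits_last_30d": 2, "enquiry_form": False, "appointment": False},
    {"id": 3, "visits_last_30d": 7, "enquiry_form": True,  "appointment": False},
]

def intent_score(p):
    """Naive intent score built from the engagement signals above."""
    return (min(p["visits_last_30d"], 10)  # frequent site visits
            + 5 * p["enquiry_form"]        # completed a sales enquiry form
            + 10 * p["appointment"])       # scheduled a sales rep appointment

# Prospects scoring like past buyers get the targeted communication
high_intent = [p["id"] for p in prospects if intent_score(p) >= 10]
print(high_intent)  # [1, 3]
```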

Another layer of insight to utilise is mobile usage. Do you know if your customers use mobile versus desktop? Is there app data you can leverage to decide on the frequency or method of communication? If you notice a particular group of buyers searching using their mobile device, perhaps they can choose to receive push notifications versus email. Can you layer any push notifications via SMS or an app especially for time sensitive communications?

From a web analytics standpoint, you can take a similar approach to mobile and analyze patterns in web browsing behavior and see if there are any trends that predominate in a particular buyer group. For example, you may find buyers within a certain age group go online during a particular time of day and search for certain search terms on the site.

In our next post, we’ll discuss how to represent these objects in your marketing automation platform.

As a B2B Marketer, your days are spent trying to reach your customers with the right message at the right time. For ideas on using Account Based Marketing to make this easier, check out this free download.

Account Based Marketing


Oracle Blogs | Oracle Marketing Cloud