Survive the descent: planning your content ops exit strategy


Whether you’re surviving a content operations project or a journey through treacherous caverns, it’s crucial to plan your way out before you begin. In episode 176 of the Content Strategy...

The post Survive the descent: planning your content ops exit strategy appeared first on Scriptorium.

Tempted to jump straight to a new tool to solve your content problems? In this episode, Alan Pringle and Bill Swallow share real-world stories that show how premature solutioning without proper analysis can lead to costly misalignment, poor adoption, and missed opportunities for company-wide operational improvement.

Bill Swallow: On paper, it looked like a perfect solution. But everyone, including the people who greenlit the project, hated it. Absolutely hated it. Why? It was difficult to use, very slow, and very buggy. Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. So everyone bypassed using it at every opportunity.

Alan Pringle: It sounds to me like there was a bit of a fixation. This product checked all the boxes without anyone actually doing any in-depth analysis of what was needed, much less actually thinking about what users needed and how that product could fill those needs.

Related links:
How humans drive content operations (recorded webinar & transcript)
Brewing a better content strategy through single sourcing (podcast)
The Scriptorium approach to content strategy
Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter

LinkedIn: Alan Pringle, Bill Swallow

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Bill Swallow: Hi, I’m Bill Swallow.

Alan Pringle: And I’m Alan Pringle.

BS: And in this episode, we’re going to talk about the pitfalls of putting solutioning before proper analysis. And Alan, I’m going to kick this right off to you. Why should you not put solutioning before doing proper analysis?

AP: Well, it’s very shortsighted, and oftentimes it means you’re not going to get the funding that you need to do the project and solve the problems that you have. And with that, we can wrap this podcast up, because there’s not a whole lot more to talk about here, really. But no, seriously, we do need to dive into this. It is very easy to fall into the trap of taking a tools-first point of view. You’ve got a problem, and it’s really weighing on you, so it’s not unusual for a mind to go, “This tool will fix this problem.” But that’s really not the way to go. You need to go back many steps, shut that part of your brain off, and start doing analysis. And Bill, you’ve got an example, I believe, of how taking a tools-first point of view didn’t help back at a previous job you had.

BS: I do, and I’m not going to bury the lede here: they didn’t do their homework upfront to see how people would use the system. I worked for a company many, many, many years ago that decided to roll out, and I will name the product, Lotus Notes.

AP: You’re killing me. That’s also very old, but we won’t discuss that angle.
BS: But they did so because it checked every single box on the needs list. It did email, it had calendar entries, it did messaging, notes, documents, linking, sharing, robust permissions, and you even had the ability to create mini portals for different departments and projects. So on paper, it looked like a perfect solution. And everyone, including the people who greenlit the implementation of Lotus Notes, hated it. Absolutely hated it. Why did they hate it? It was difficult to use. It was very slow. It was very buggy. Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. Back at that point, we had PDAs, personal digital assistants, and very soon after that we had the birth of the smartphone; there was no easy way to use it on these mobile devices except for maybe hooking up to email. It didn’t fit how we were working at all. And while it shouldn’t count, it really wasn’t very pretty to look at either. So everyone bypassed using it at every opportunity. They would set up a wiki instead of using the Lotus Notes document or notes portal that they had. They would use other messaging services; this is back during Yahoo Messenger and ICQ. But yes, we had that going on, and in the end it was discontinued after its initial three-year maintenance period ended, because nobody liked it.

AP: Yeah, so it sounds to me like there was a bit of a fixation. This product checked all the boxes without anyone actually doing any in-depth analysis of what was needed, much less actually thinking about what users needed and how that product could fill those needs. And I think it’s worth noting, too: think about this from an IT department point of view, because they’re often a partner on any kind of technology project, especially if new software is going to be involved. They’re going to be the ones, a lot of times, who say yay or nay: this tool is a duplicate of what we already have, or no, you have some special requirements and we do need to buy a new system. So if I, as the IT person who vets tools, hear from someone, and let’s get back into the content world here, “I need a way to do content management, I need a single source of truth, and I need to be able to take the content that is my single source of truth and publish it to a bunch of different formats,” which is a very common use case, I would be more interested in hearing that than hearing “I have to have a component content management system.” There’s a subtle difference there. And I think, and this is possibly unfair and grouchy of me, but that is me, grouchy and unfair: if I hear someone come to me with “I need this tool” instead of “I have these issues and I have these requirements,” it sounds selfish and half-baked.

BS: It does.

AP: And again, I am thinking about this from the receiving end of these requests, but I also want to step into the shoes of the person making the request. You can be so frustrated by your inefficiency and your problems that you latch onto a tool. So I completely understand why you want to do that, but you are basically punching yourself in the face when you make a request that is “I need this tool” instead of “I have these issues, these requirements, and I need to address these things.” It’s subtle, but it’s different.

BS: It’s very different. And also, if you do take that approach of looking at your needs, you find that there’s more to uncover than just fixing the technological problem itself.

AP: Yes.
BS: There might be a workflow problem in your company that you may acknowledge, or you may not know it’s there. Once you start looking at the requirements and looking at the flow of how you need to work, and how you need any type of new system to work, you start seeing where the holes are in your organization. Who does what? What does a handoff look like? Is it recorded? What does the review process look like? When does it go out for formal review? What does the translation workflow look like? And you start seeing that there may be a lot of ad hoc processes in place currently that could be fixed as well.

AP: True. And I also think when you’re talking about solving problems and developing your requirements from that problem solving, you are potentially opening up the solution to more than just your department, your group. It can possibly be a wider situation there, too. And by presenting it as a set of problems and requirements to address those problems, there may already be a tool in-house at your company that you don’t know about, or there may be part of a suite of tools where, if you add another component, it will address your problem, instead of buying something completely new outright. And we’ve seen this before, where it turned out there was an incumbent vendor with some related tools already at the company, and that vendor also had a tool that could solve the problems that our client or prospect had. We’ve had both prospects and clients run into this issue. It doesn’t make sense, in that situation, to go and say, “I need this tool,” which is essentially a competitor of what’s already in place. You’re going to have a very uphill battle trying to get that in place. It is also very easy, as someone who has already done a content ops improvement project, to think, “This tool is good; it saved me at this other company.” But you’ve got to be careful of assuming that just because it helped you over at company A, it will work now that you’re at company B. It may not be a fit for company B culturally, and there may already be something in-house. So you’ve got to let go of those preconceived notions. I am not saying that the tool you used before was bad. It may be the greatest thing ever, but there may be cultural issues, political issues, and even IT tech issues that mean you cannot pick that tool. So why are you pushing for it when you have all of these things against you? Again, it is easy to fall into these traps. Don’t do it.

BS: Yep. On the flip side of that, we had a situation where a customer of ours years ago was looking for a particular system, a CCMS, a component content management system, and they had what they perceived to be a very hard requirement of being able to connect to another very specific system.

AP: Yes, I remember this. It was about 10 or 11 years ago.

BS: And it was such a hard requirement that it basically threw out all of their options except for one. And we got the system working the way they needed it to. It needed quite a bit of customization, especially over the years as their requirements grew. But in the end, they never connected to that required system, the one that everyone said would be a showstopper. They never connected to it, because they decided it wasn’t a requirement after X many years. And that just kills me, because there could have been three or four other candidate systems that would easily have fit the bill for them as well and probably would have cost them a little less money. But there we are.
AP: In fairness, all parties involved, including us, were working on the information that we had at the time. And I think this is a case where a requirement that we thought was a hard requirement turned out not to be. However, just because that happened in this case, folks out there listening to us, that does not mean that when a particular requirement points at a particular system, you can decide it’s not a real requirement because you want another system really badly. You don’t get to ignore it. That’s not how it works, and it’s not how it should work. So I think there is a balance here that needs to be struck, and I think this is probably a good closing message. Don’t follow your knee-jerk instinct of “I need this tool.” Really look at the requirements; do an analysis. And because we’re humans, sometimes that analysis is not going to catch everything it should, or you may end up having, like you just mentioned, a requirement that’s not as real as you thought it was. But I think your chances of project success, and of getting a tool purchased, configured, and up and running, are much higher when you start with those requirements than when you start off with “I need tool Y.”

BS: Well said. Do the homework before the test.

AP: And don’t put the cart before the horse.

BS: Well, thank you, Alan.

AP: Thank you. This was shorter, but it’s an important thing, and I think, again, this points to any kind of operational change being a human problem, dealing with people’s emotions and their instincts as much as or more than an actual technological issue.

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. Need to talk about content solutioning? Contact us!

The post Tool or trap? Find the problem, then the platform appeared first on Scriptorium.
 
Struggling to get the right content to the right people, exactly when and where they need it? In this podcast, Scriptorium CEO Sarah O’Keefe and Fluid Topics CEO Fabrice Lacroix explore dynamic content delivery—pushing content beyond static PDFs into flexible platforms that power search, personalization, and multi-channel distribution.

“When we deliver the content, whether it’s through the APIs or the portal that you’ve built that is served by the platform, we render the content in a way that we can dynamically remove or hide parts of the content that would not apply to the context, the profile of the user. That’s the magic of a CDP. It’s delivering that content dynamically.” — Fabrice Lacroix

Related links:
Scriptorium: Personalized content: Steps to success (white paper)
Scriptorium: AI in the content lifecycle (white paper)
Fluid Topics, an AI-powered content delivery platform
Fluid Topics: What is Content Operations and Why is it Important?
Get monthly insights on structured content, futureproof content operations, and more with our Illuminations newsletter

LinkedIn: Sarah O’Keefe, Fabrice Lacroix

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe, and I’m here today with the CEO of Fluid Topics, Fabrice Lacroix. Fabrice, welcome.

Fabrice Lacroix: Hey. Hi Sarah. Nice being with you today. Thanks for welcoming me.

SO: It’s nice to see you. So as many of you probably know, Fluid Topics is a content delivery portal, or possibly a content delivery platform, and we’re going to talk about the difference between those two things as we get into this. So Fabrice, tell us a little bit about Fluid Topics and this content delivery portal, or maybe platform. Which one is it? What do you prefer?

FL: For us, it’s platform, definitely. But you’re right, it depends on where people are in this evolution process, on how they deliver content. And for many, many customers, the “P” stands for portal. You’re right, because that is the first need. That’s how they come to us, because they need a portal.

SO: Okay, so in your view, the portal is a front end, an access point for content. Then what makes it a platform rather than a portal?

FL: Probably because the goal that many companies have to achieve is delivering that content where it’s needed, and that’s many places most of the time. So it’s not just the portal itself. To solve the problem of being able to disseminate this content to many touch points, you need a platform. The portal is one touch point only, but when you start having multiple touch points, like doing in-product help, or you want to feed your helpdesk tool or field service application or whatever sort of chatbot somewhere else, whatever use case you have that is not just the portal itself, then that becomes a platform thing.
SO: So looking at this from our point of view, so many of our projects start with component content management systems, CCMSs, which are the back end. This is where you’re authoring and managing and taking care of all your information, and then you have to deliver it. And one of the ways that you could solve your delivery front end would be with a content delivery platform such as Fluid Topics. Okay. So then, what are the prerequisites when you start thinking about this? So our hypothetical customer has content, obviously, and they have, we’re going to say, probably a back-end content management system of some sort. Probably.

FL: Most of the time.

SO: Most of the time.

FL: It depends where you go, depends on the maturity and the industry. If you go to some manufacturer somewhere, they mostly still are maybe in Word and FrameMaker, or something like that, or InDesign, and then they generate PDFs.

SO: So maybe we have a back-end authoring, well, we have an authoring environment of some sort on the back end. Maybe it’s a CCMS, maybe it’s something not like that. And now we’re going to say, all right, we’re going to take all this content that we’ve created and we’re going to put it into the CDP, the content delivery platform. Now, what does success look like? What do you need from that content or from the project to make sure that your CDP can succeed in doing what it needs to do?

FL: The first answer to that question that comes to my mind is: no PDFs. I mean, don’t laugh at me. If you look at it from an evolutionary perspective, regardless of how people were writing before, it was not a CCMS; it was mostly unstructured. And at the end of the day, people were pressing a button and generating PDFs and putting the PDF somewhere: CRM, USB key, website for download. But managing the content unstructured was painful. That’s where you start working with the CCMS, because you have multiple versions, variants, you want to work in parallel, you want to avoid copy-paste, translation, the whole story around that. So then companies start moving their content into a CCMS. All of the content, part of the content, but they start investing in a modern way of managing and creating their content. But again, once they have made that move, most of those companies 10, 15 years ago probably were still pressing a button and still generating PDFs. And then they realized that they had solved one problem for themselves, which is streamlining the production capability and managing the content in a better way. But from a consumption perspective, regardless of whether you work in Word, FrameMaker, or in DITA with the most advanced CCMS on the market, if you still deliver PDF, you are not improving the life of your customers. And then people started realizing that: oh yeah, so we should do better. So let’s try to output that content in another way than PDFs. And then they say, “What else than PDF do we have? HTML.” Okay, so let’s output HTML. But it’s HTML that is pretty much the same as the PDF. You see what I mean? It’s a static document. Each document was a set of HTML pages. And then they started realizing that they need to reassemble the set of HTML pages into a website, which is even more painful than just putting PDFs on the website: reassembling zip files of HTML pages, and then it’s static HTML, and then you have to put a search on top, and you have to create consistency. And that’s why CDPs have emerged.
That’s solving this need, which is: how do we transition from PDF, to static HTML, to something that is easier? Something that ingests all this content, comes with search capabilities, comes with configuration capabilities, and at the same time has APIs so that, back to the platform thing, it’s not just a portal but can serve other touch points. And because we are in the DITA world, and DITA is the Darwin Information Typing Architecture, it’s a very Darwinian process that led to the creation of the CDP; the need for a CDP is the next step in the process. And many companies really follow that process: I have to move away from my old ways of writing, which are painful and not working, and move to a CCMS, but then they realize that they haven’t solved the real problem of the company, which is: how can I help my customer, my support agent, my field technicians better find the content, better use my content? And that’s where it clicks: oh, okay, that’s where we need a CDP.

SO: Yeah, and I think, I mean, we’ve talked for 20 years about PDFs and all the issues around them, but it’s probably worth remembering that PDF in the beginning was a replacement for a shelf of books, paper books that went out the door. And the improvement was that instead of shipping 10 pounds, or, I’m sorry, what, four kilos of books, you were shipping, as you said, a CD-ROM, or, this was before USB, a Zip drive. Remember those?

FL: Zip drive.

SO: A Zip drive. But you were shipping electronic copies of your books, and all you were really doing was shifting the process of printing from the creator, the software, hardware, the product company, to the consumer. So the consumer gets a PDF, they print it, and then that’s what they use. Then we evolved into: oh, we can use the PDF online, we can do full-text search. That’s kind of cool; that was a big step forward. But now, to your point, the way that we consume that information is, for the most part, not printed, and it’s not big PDFs, but rather small chunks of information, like a website. So how do we evolve our content into those websites? So then what does it look like to have a, and I think here we’re talking about the portal specifically, but what does it look like to have a portal for the end user that allows them to get a really good experience in accessing and using and consuming the content that they need to use the product, whatever it may be? What are some of the key things that you need to do or that you can do?

FL: Yeah. I would say the main thing that a CDP achieves, compared to static HTML, because now we have to compare not with PDFs, which are probably still needed if you want to print. I’m not saying that PDF is dead and we should get rid of all PDFs; it’s just that when you need to print, you can get the PDF version of a document. But if we compare static HTML with what a CDP brings: we’re trying to make content personalized and contextual. If you pre-generate static HTML pages, it’s one size fits all. It’s the same HTML pages for everyone. And if you have two versions of your product and one variant, and then you translate, the same zip file exists in 20 versions, so to say, and you have to assemble that and let people understand how to navigate it, and that becomes super complex.
What a CDP solves is: give me everything, and I will sort out this notion that the same document can exist in 20 variants, whether it’s product version, document version, capabilities of the product: version A, version B, Asian market, European market, American market. And then you have subtleties, and some paragraphs are here, some paragraphs are removed or added. And so we are adapting the content so that it fits the profile of the user. And if you ask me what’s needed to make a CDP work, it’s mostly metadata, metadata, metadata. And I can tell you a story that was fun. Some years ago, more than a few, we had customers, or prospective customers, reach out and say, “Oh, show me Fluid Topics.” And then we’re showing the capability, and they say, “Oh my God, it’s exactly what we need.” And then those guys disappeared for two years. And in fact, what they did during those two years was add metadata to the content. It was not about the product; but through this discussion we had with them, showing that you can put facets on the search, and variant the content, and let people switch between variants and versions of the content through metadata and all that, they realized: oh my God, that’s exactly what we need. And through their questions, they understood that they needed to have that metadata on the content, and that metadata did not exist, even though they were working with a CCMS. But if your output channel is PDF, you don’t care about putting this metadata on the content inside the CCMS. It’s a lot of work to maintain that metadata, and if at the end of the day you press a button and generate a PDF, that metadata is lost; it is not used, not leveraged by the PDF. It just becomes flat pages of content. So they had transitioned to a CCMS but never made this investment of tagging content. And when I say tagging content, it’s not just the map; it’s the section, the chapter: this is for installing, this is for removing, this is for configuring, this is for troubleshooting, this chapter is about this, this topic is about that, for this version of the product. You know what I mean? Fine-grained tagging at different levels of the documents. And because they were generating PDFs, they didn’t see the need to do that tagging at the right level. They realized that the real value, beyond PDF, comes when the content is tagged, because it’s by using those tags and those metadata schemes that the CDP can adapt the content to the context and profile of the user. So what’s needed to leverage the capabilities of a CDP? It’s mostly granularity of content, and tags, metadata. And you can design your metadata from a user perspective: as an end user, how would I like to filter the content? What are the tags I need for filtering the content? If I run a search and I have these facets on the left side of the search result page, what would I like to click on to refine my search and spot the content that fits my needs?

SO: And I think, going back to our flat-file PDF or static HTML: if you need context in a flat file, what you have to do is say something like, if you have product variant A, do this. And if you have product variant B, do this. Or if you are installing and the temperature, the ambient local temperature, is greater than X, then do these extra steps.
If you are baking and you are at high altitude, you have to adjust your recipe in these ways. So you end up with all these sorts of if statements that say, hey, if this is you, do these things. But it’s all in the text, because I have no other way. Maybe I can do two variants of the PDF, like variant A for regular altitude and variant B for high altitude, but I can’t do one per country. Right? I mean, I guess I could, but ultimately, what you’re describing is that instead of putting it into the text explicitly, “Hey Fabrice, if you meet these conditions, do these things or don’t do these things or do these extra things,” the delivery portal, platform, is going to say, “Okay, what do I know about this end user? What do I know about Fabrice? I know he is in a certain location with a certain preferred language and a certain product. I know which products he bought.” So you don’t get an if, if, if, if. You just get: here’s what you need to do in your context with your product.

FL: Exactly. When we deliver the content, whether it’s through the APIs or the portal that you’ve built and that is served by the platform, we render the content in a way that we can dynamically remove or hide the parts of the content that would not apply to the context, the profile of the user. And that’s the magic of a CDP: it’s delivering that content dynamically. It’s also called dynamic content delivery; you remember we had this concept. The dynamic part is: how can I dynamically leverage the metadata on the content side, or the conditions, read through metadata schemes, and make that applicable to the situation and the user profile? So that’s the magic part of it, and that’s a huge improvement compared to a static document that lists all the conditions and then puts the burden on the reader to figure out, sort out, inside the document, what should be skipped and what to do depending on the product configuration.
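To make this exchange concrete: in DITA, the structured content standard that keeps coming up in these episodes, those in-text “if statements” become profiling attributes on elements, and a filter decides what each reader sees. A minimal sketch, with hypothetical product and audience values:

```xml
<!-- install.dita: one source topic; the variant-specific steps carry
     profiling attributes instead of "if you have variant A..." prose -->
<task id="install-unit">
  <title>Install the unit</title>
  <taskbody>
    <steps>
      <step><cmd>Mount the unit on the wall bracket.</cmd></step>
      <step product="variant-a"><cmd>Connect the single-phase power supply.</cmd></step>
      <step product="variant-b"><cmd>Connect the three-phase power supply.</cmd></step>
      <step audience="high-altitude"><cmd>Adjust the pressure valve for altitudes above 2,000 m.</cmd></step>
    </steps>
  </taskbody>
</task>
```

```xml
<!-- variant-a.ditaval: the filter a static build applies at publish time;
     a CDP evaluates the same kind of condition at request time, per user -->
<val>
  <prop att="product" val="variant-a" action="include"/>
  <prop att="product" val="variant-b" action="exclude"/>
</val>
```

The difference Fabrice describes is when the filter runs: a static build bakes one variant into each output, while a CDP applies the condition dynamically against the user’s profile.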
SO: Which can of course get very complicated. Now, you mentioned product help, in-app help, context-sensitive help. So what does it look like to use a Fluid Topics or this class of tool to deliver context-sensitive help or in-app help?

FL: We are back again to this granularity and the metadata. So imagine you are a software vendor: you’ve designed a web application, and you want to do the inline help for your application, your web product. What would you do? You would say: in that page, when people click on that question mark or help button, we should open a pane and display that information. That information needs to be a topic. It needs to be written, and the granularity should be a topic, because that’s what you pull from the system. So we need granularity that matches what you want to display inside your app. Whether it’s a tooltip, maybe a small tooltip when you move something in the app, that becomes some fragment of content you need to get from the CDP dynamically. Or it can be one page of explanation that you display in a pane that opens in your app, but you need to pull that content. That’s how you would do it if you were embedding the content inside the application itself: you would write each part of the explanation, the help that you want to display, as fragments of information. But if you are doing it statically inside the application, the problem is that if you want to fix something or enhance the content, you have to edit the application, change the… So it’s part of the development.

Here, you want the app to pull the content dynamically, because the same content can be used not only to be displayed live in the screen, real time, but can also be the content that is used on the doc portal, or when you print a PDF on how to do this. It’s the same. You don’t want to maintain the same explanation in the application, in the portal, and in PDFs. So: one source. It’s exactly that. And then you’re pulling through metadata. The app will say, “Oh, give me what goes into that page.” So it’s metadata-driven as well.

SO: Right. So there’s an ID on the software or something like that, and it says, “Give me the content that belongs with this unique label.”

FL: Exactly. Behind each button, you give an ID to that button, which is the question mark on that page. When people click: pull content, inline content help, ID number 1, 2, 3, 4. And in your CCMS, you have a metadata field, which is called content ID for inline help, whatever. And then you tag that piece of content 1, 2, 3, 4, and that’s it. Magic is done. It’s that simple.

SO: So what I’m hearing, and this is, in fairness, exactly what you started with: you have to have metadata on the content, right?

FL: You have to have metadata.
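DITA has a prolog element for exactly the content-ID scheme Fabrice describes: resourceid, which maps a topic to an application help ID. A minimal sketch (the app name and ID values are hypothetical):

```xml
<!-- export-help.dita: the topic behind help button 1234 in the app -->
<topic id="export-settings-help">
  <title>Export settings</title>
  <prolog>
    <!-- the app asks the CDP for the topic tagged with its button ID -->
    <resourceid appname="acme-webapp" id="1234"/>
  </prolog>
  <body>
    <p>Choose an export format. PDF preserves pagination; HTML reflows.</p>
  </body>
</topic>
```

Because the lookup happens at request time, fixing the help text means republishing the topic, not redeploying the application.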
SO: And without the metadata, there is… well, let’s talk about magic. So if you have a front end that is some sort of a large language model, a chatbot or something, what does that mean in terms of this content delivery platform? I mean, can’t you just use ChatGPT and call it a day?

FL: Yes, that’s a good one. I think most of the AI projects we’ve seen in large companies started with: oh, let’s build a chatbot. That’s the magic dream of any company: building the chatbot that replies to any question. Okay, so how does the project start, usually? You have the IT team, or some people on the IT team, or the IT team is hiring external people specialized in AI, and they realize that they need content. So the first thing they do is come, usually, to the techdoc team and say, “Give me all the content that you have.” And the techdoc team says, “Okay, we have all this DITA content.” And they say, “No, I don’t want DITA, I want PDFs.” It’s huge to see that. Why? Because they use technology, like something from Microsoft, where you can build your chatbot in five minutes, but then the only content types you can feed this ready-to-use platform are PDFs and Word. So all the magic you’ve put in your content, all the tags, are lost, and you see people wanting PDFs made from your content, which is the exact opposite of the investment you’ve made: putting PDFs somewhere in a storage place and saying to the Microsoft chatbot, blah blah, this is the content, this is the knowledge of the company. And then, when you have 20 variants of the same product, there’s no metadata anymore. The chatbot is always mixing all the content. And when you start asking real questions about how to do this, how to do that with this version of the product, everything is lost. And then the chatbot starts hallucinating. Not because the LLM is hallucinating: the system, the chatbot, does not know which PDF to use, because it’s implicit knowledge that this PDF applies to that version of the product or the other. It’s even worse if you say, “If you have product A, do this; if you have product B, do that,” and start mixing conditions, because then the knowledge becomes barely readable even by humans, who make mistakes reading it.

So can you imagine how an LLM can make sure that it’s pulling the right information from that complex text structure?

SO: Okay, so: make PDFs out of DITA, dumb it down, send it to the chatbot. That’s bad.

FL: That’s guaranteed failure.

SO: So what’s the good version of this?

FL: But that’s how it works, I guess. I gather that you’ve seen these sorts of projects, where people were asking for the content, thinking that the more they have, the better it’s going to be. And suddenly they realize that the chatbot is not working and is making many mistakes. And they call that hallucination, as if the LLM were hallucinating, but it’s not; the system just isn’t able to feed the LLM dynamically, with the right retrieval-augmented generation scheme, to provide the information for replying to the question, because it’s difficult to pull from the PDF the right information that applies to the context. And we are back to: what is the context? What is the machine? What is the profile of the user? What is the variant, the version, whatever you have in front of you? So that’s the complex part. So what’s the relationship between CDP and AI? What makes a successful AI project? All AI projects I’ve seen start, regardless of us, regardless of Fluid Topics, with: we need to gather content. We need to take the content that we have, put it in one place, create this sort of unified repository of content. Usually, as I said, they do it using static documents, PDFs. If you look at what a CDP is, that’s exactly what it is. It’s already your repository of content, at least everything around the product, because we’ve been talking about CCMS published to CDP. What also makes a CDP very special is that not only can we ingest DITA content, but also legacy PDF and markdown content, API documentation, knowledge bases. The CDP is there to ingest all the knowledge that you have around your product, not just the formal techdoc, the proper techdoc that has been well written and validated. So the CDP is exactly that: building that unified repository is its purpose, and that’s where you should start from. And it’s fine-grained, and we have the metadata, and we have everything, so we know how to feed the LLM. So there are two things in an AI project. One is the LLM, but now people use a generic LLM; you don’t fine-tune or train an LLM anymore for this sort of use case, which is just a chatbot for replying to questions and solving cases automatically. You use a generic LLM and you feed it dynamically with the fragments of content, of knowledge, that you have in your repository. It’s just like when you, as a human, run a search: you look for content, you know which parts of the content, which fragments, topics, and chapters contain the knowledge for replying to that question. The tough part is extracting that from the repository. Am I extracting the 2, 3, 4 pages around the question that match the version, the situation that I’m in? So that I can then feed the LLM and say, “This is the 10 pages of knowledge that we have, or 20 or 50 pages of knowledge. This is the question; reply to the question using that knowledge.” That’s exactly what a chatbot does.
You’re given the question of the user, you give the 5, 10, 20, whatever number of pages of knowledge that you have in your repository, and you ask the LLM: “This is the question, this is the knowledge, please reply.” So the tough part is extracting the 5, 10, 20 pages that are really adapted to the situation, to the context.

SO: And the metadata helps you do that.

FL: And the metadata. Nothing else than metadata for doing that.

SO: Right. Okay. So we’ve talked a lot about metadata as, I guess, a precondition, right? A prerequisite. Yeah, it is. If you don’t have metadata, none of these other things are going to work. And I wanted to ask you about other, maybe, challenges or prerequisites. So other than people coming in and saying, oh, right, we need metadata, and then they go away for two years and then they come back and they have some metadata, what are the other issues that you run into when you’re trying to build out a CDP like this? What are some of the other… What are the top challenges that you run into, other than, clearly, metadata? So we’ll put that one at number one.

FL: Oh yeah, clearly number one. I would say the second one now is the UX, the UI people want to design. Because modern platforms have unlimited capabilities in designing the front end, the UI that you want. It’s like: what do you want? What makes sense, based on your product types, the users that you have, the content that you have? What is the UX you want to build? That’s interesting, because probably five, no, let’s say 10 years ago, we were providing default interfaces out of the box with the product, with Fluid Topics, to build your portal. And you could just brand that, put your colors, logo, tweak it a bit, and everybody was happy with that. And then we’ve seen a big evolution, because now, for many companies, marketing has a say on UX everywhere. You now have UX directors, VPs of user experience, roles that didn’t exist five years ago, 10 years ago. See what I mean? Everybody was working in their own swim lane. The techdoc department was in charge of writing the content and probably generating the PDFs and then setting up a doc portal. But many companies have realized that this techdoc portal is instrumental to the performance of the company. And now they say, “Oh, we need to have a look at that.” So it becomes a shared place. You’ve seen that, I guess, in your projects.

SO: Yeah. Yeah.

FL: Five years ago, 10 years ago, the only people you had to work with and educate and discuss with were probably the techdoc team. And now you’ve got marketing, and you’ve got customer support, and you’ve got customer experience people, because they’ve realized the value there is in this content, but also how important it is to design an experience that fits with the other touch points of the company, to create a seamless journey when you go from the corporate website to the documentation website to the help desk tool to the LMS. And you need some consistency around that, not only in terms of branding, colors, and logos; you go beyond that. And we see this as a new place where people struggle a bit, where our customers struggle: what do we want? In fact, they know that marketing says we need something that is more modern, more like this, more like that. But when we start opening the discussion, what is it really that you want? Some companies are very mature; they’ve got the Figma mockups and they come to us: “This is what we need to implement.
We’ve spent two years with UX designers crafting the UX of our portal.” And some come and say, “Oh my God, you’re right. We don’t know what we need. Give us a default, something to start with, and we’ll see.”

SO: Well, you’ll appreciate this. I had a call not too long ago with a very, very, very, very large company. Very large. And they said, “We need a front end for our content, this tech content that needs to go out into the world. We need a design for it.” And because it’s a very large company, I said, “Great, where’s your UX team? And do you have a design system?” Because, I mean, presumably they do. And the person I was talking to said, “I don’t know. I don’t think so.” And so I consulted the almighty search engine and discovered that not only did this particular company have a design system, they had one that is publicly available, where you can go get all the pieces and parts and all the logos and all the behaviors and everything. It is all out there in the world. And yet the people who work at this organization, and in their defense, there are many, many tens of thousands of them, did not know that this thing existed. And so all of their requirements in terms of what they had to do for their portal design were right out there in the world, accessible to me.

FL: They didn’t even know about it.

SO: And they had no idea that it existed. And so we had to be the ones to make that connection and say, okay, we have to talk to those people, or at least download all these assets, then figure out what to do with them, and then make sure that we’re following the rules and all the rest of it. So to your point, the enterprise issues: we also run into this with metadata and taxonomy. That is typically an enterprise problem, not a departmental problem. And actually making those connections across the departments for the first time is a task that very often falls to us as the consultants on the outside, who are asking, “Do you have a taxonomy project? Do you have design systems? Do you have these enterprise assets that we need to align with and be consistent with?” And they’re not ready for that question, because until recently it was: put a pile of PDFs somewhere.

FL: That’s just the unknown: you don’t know what you don’t know. And when they start moving up to more capable tools, they discover that it comes with more capabilities, but they have to make choices. They have to invest in metadata, UX design, and all that. And probably some of those companies are not ready yet. I mean, they didn’t foresee that coming. And that’s where projects lag a bit, in terms of complexity as well, because they realize that it’s not just buying the tool; it’s also making the investment in their content, their UX strategy, their design system, and all that. That may be missing in some cases.

SO: And I think that saying it’s not just about buying the tool is probably a really good summary of this whole situation. Because we started with: you’re really going to need metadata, and if you don’t have metadata, that’s a huge problem. And we’ve landed on: there are all these other connections and pieces and parts that you have to think about. So Fabrice, thank you very much. This was a great discussion, and I appreciate all your information, and we will wrap this up there. Are there any parting thoughts that you want to leave people with?

FL: It was an absolute pleasure having this discussion with you, Sarah.
I think it could have lasted another hour easily, so we need to stop somewhere. Maybe we’ll have other opportunities to keep on chatting about some of these subjects.

SO: Yep. Sounds good. And thank you again, and we will see you soon.

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Deliver content dynamically with a content delivery platform appeared first on Scriptorium.
 
Are you considering a structured approach to creating your learning content? We built LearningDITA.com as an example of what DITA and structured learning content can do! In this episode, Sarah O’Keefe and Allison Beatty unpack the architecture of LearningDITA to provide a pattern for other learning content initiatives.

“Because we used DITA XML for the content instead of actually authoring in Moodle, we saved a lot of pain for ourselves. With Moodle, the name of the game is low-code/no-code. They want you to manually build out these courses, but we wanted to automate that for obvious reasons. SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don’t have to do all the painful things of, you know, ‘Let’s put a heading two here with this little piece of content.’ And the key thing is that allowed us to reuse content.” — Allison Beatty

Related links:
Self-paced, online DITA training with LearningDITA.com
Structured authoring and XML (white paper), which is also included in our book, Content Transformation
Confronting the horror of modernizing content
The benefits of structured content for learning & development content
Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter

LinkedIn: Sarah O’Keefe, Allison Beatty

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe.

Allison Beatty: And I’m Allison Beatty.

SO: And in this episode, we’re focusing in on the LearningDITA architecture and how it might provide a pattern for other learning content initiatives, including maybe the one that you, the listener, are working on. We have a couple of major components in the learningDITA.com site architecture. We have learner records for the users. We have e-commerce, the way we actually sell the courses and monetize them. That is my personal favorite. And then we have the content itself and assorted relationships and connectors amongst all those pieces. So I’m here with Allison Beatty today, and her job is to explain all those things to us, because Allison did all the actual work. So Allison, talk us through these things. Let’s start with Moodle. What is Moodle, and what’s it doing in the site architecture?

AB: Okay. So Moodle is an open-source LMS that we…

SO: What’s an LMS?

AB: Learning management system, Sarah.

SO: Thank you.

AB: We installed our own instance of Moodle and customized it as we saw fit for our needs. And that is the component that acts as the layer between the content and the learning experience. So without the Moodle part, it’s just a big chunk of content that you can’t really interact with. Moodle gives that a place to live.

SO: And then Moodle has the learner records, right?

AB: Yes.

SO: And what about groups? What does that look like?
AB: In Moodle, there’s cohort functionality, which allows us to use groups so that a manager can buy multiple seats and assign them to individuals and keep track of their course progress through group registration rather than individual self-service signups.

SO: So if I were a manager of a group that needs to learn DITA, instead of having to send five or 10 or 50 people individually to our site, I could just sign up once, buy five or 10 or 50 seats in a given course, and then assign those via email addresses to all of my people, right?

AB: Exactly.

SO: Okay. So then, speaking of buying things, we had to build out this e-commerce layer, which, I was apparently traveling the entire time this was going on, but I heard a lot of discussion about it in our Slack. So what does it look like? What does the commerce piece look like?

AB: Yeah. So it is a site outside of the actual learningDITA.com Moodle site that has a connector into Moodle, so that you can buy a course or a group registration in the store, and then you get access to that content in Moodle.

SO: So we have this actually separate site, and if you’re in there, you can do things like buy a course or buy a collection of courses or a number of seats. And then what were some of the fun complications that we ran into there?

AB: Oh yeah. So the fun complications there were figuring out how to set up an e-commerce site that, A, connected to Moodle so that we could sell the courses, and B, was able to process taxes and payments and all of that fun stuff. Moodle has PayPal as a feature out of the box in the base Moodle source code, but we wanted to accept credit cards directly, and that meant some additional layers, which is how we ended up with the store.scriptorium.com site, which is built on WordPress and uses the aforementioned connector to make those two sites talk to each other. So the LMS and the e-commerce piece are actually totally separate websites, but they exist within the same system environment.

SO: And most of you listening to this probably don’t care, but one of the things we learned was that digital training, downloadable training content, is sometimes subject to sales tax and sometimes not, depending on the particular state or the particular jurisdiction. So it’s not just, what is sales tax in North Carolina versus what is sales tax in Washington state versus what is it in Oregon? Additionally, in each jurisdiction: is this type of training subject to sales tax or not? So we spent a more than optimal amount of time on figuring out all of those things and making sure we got it right, because I’m extremely interested in making sure that those taxes are done correctly and keep us out of trouble.

AB: And the basic PayPal and Moodle setup wasn’t going to give us that level of granular control and specification.

SO: And typically our customers are looking to pay via credit card. So we’ve got the LMS piece with the learner experience, the actual learning platform. We’ve got the e-commerce piece, the Let’s Take Money piece. And then finally we have the content piece. So what does it look like to actually create these courses and create and manage the content that then eventually goes into Moodle?

AB: Yeah. So the content does have a single source of truth. It is all authored in DITA XML and stored in a central repository. You can see that content in GitHub. It’s open source.
We took the DITA XML and developed a SCORM transform that we could use to hook the content up into Moodle and be able to use all of the grading and progress and prerequisite-type things that we needed to flesh out the actual learning platform. We learned a fun lesson along the way: Moodle does not support SCORM 2004. So that required a little bit of backtracking to make sure that we were getting the data into the correct SCORM version to get into Moodle. And because we used DITA XML for the content instead of actually authoring in Moodle, we saved a lot of pain for ourselves. With Moodle, the name of the game is low-code/no-code, and they want you to manually build out these courses. But we wanted to automate that for obvious reasons, and SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don’t have to do all the painful things of “let’s put a heading two here with this little piece of content.” And the key thing is that allowed us to reuse content as well. Then, if we need to update the content, all we have to do is replace the SCORM package in Moodle.
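For readers who haven’t opened one: a SCORM package is a zip of web content plus an imsmanifest.xml file that tells the LMS what the course contains and how to launch it. A minimal sketch of a SCORM 1.2 manifest, the version Moodle supports; the identifiers and file names here are hypothetical, and the real output of a DITA-to-SCORM transform carries more detail:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- imsmanifest.xml: the table of contents the LMS reads from the package -->
<manifest identifier="com.example.learningdita.lesson1" version="1"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Introduction to DITA</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Lesson 1: What is structured authoring?</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- scormtype="sco" marks launchable, trackable content -->
    <resource identifier="RES-1" type="webcontent" adlcp:scormtype="sco"
              href="lesson1/index.html">
      <file href="lesson1/index.html"/>
    </resource>
  </resources>
</manifest>
```

Swapping the course content means regenerating this package from the DITA source and re-uploading it, which is the maintenance win Allison describes.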
SO: So currently we have DITA 1.3 content out there. The DITA 2.0 content is under development, and I would say mostly done; we’re mainly waiting for the actual release of DITA 2.0 for those two chunks of content, although those courses are going to be in GitHub in the DITA training project, or I think it’s called the Learning DITA project now.

AB: Yep.

SO: Separately from that, we’re working on some new courses which are not going to be open sourced, but will be available on Moodle, or, sorry, on learningDITA.com. And so for those of you who are wondering, we’ve got a number of things on our roadmap. I’d love to hear more from people listening to this about what they need out of this. What more advanced courses are you looking for? One thing that we’ve heard a lot of requests for is a DITA Open Toolkit plugins 101: How do I build a plugin? How do I use best practices? How do I make this all happen? So we have this, I don’t know, DITA inception thing happening, because we’re training people on how to do DITA using DITA inside DITA, building out the stuff.

AB: It’s all very meta.

SO: It’s extremely meta. Hypothetically, what would it look like to localize this? So what we’ve delivered right now is in English, and in the past we have had people put together, let’s see, German, Chinese, and I think French versions of the Learning DITA content. But what does it look like in this new architecture to localize?

AB: Yeah. So much like the tool chain for this new architecture, there are a couple of different components. If you would like to localize the Learning DITA content, what you’ll want to look at first is the content itself, translating and localizing the source content, but you’ll also need to localize Moodle some. So what you would do is basically clone the Moodle site, and, not to go too far into the Moodle weeds, you’ll have to reconfigure the initializing PHP file a little bit. And then you would take your translated, localized content and prep that up into your new Moodle for whichever language you’re localizing into.

SO: So, you mentioned maintenance, and this idea that Moodle by design wants you to make updates inside Moodle, and we pulled the content out of there. We’re basically saying Moodle is for learners and learning management and course records and sequencing and those kinds of things, and grading, I suppose, but the DITA back end is for content. So we’re putting all the content in DITA, and then we push it over to SCORM, which then goes into learningDITA.com, into the Moodle site. It sounds like more work, right? We had to build a SCORM transform. We had to put all this stuff in… We didn’t just go into Moodle and start authoring, which would be a lot faster on day one. So what’s the rationale for that? What does it look like in the long term to maintain something in Moodle versus to maintain something in the system that we’re describing?

AB: Yeah. It may seem easier on day one to manually put the content in, but when you need to make an update or change something, particularly if you want to change something about a piece of content that is reused and repeated throughout the courses, you have to manually trawl through every single course page and make those updates. Whereas with the SCORM package, once you have the SCORM transform set up and running to your liking, you can run your DITA content through there and then replace the SCORM package in Moodle, instead of having to manually trawl through page by page. And maybe there is some content that is duplicated, but you mess it up because you were manually trawling through page by page. So having DITA as the single source of truth helps you with maintenance, even if it seems scary at first.

SO: And I expect one of the things we’re looking at is CCMS courses, and the concept of what a CCMS is, is going to be the same for all of them. The process of how do I check out files is going to be a little different for each of them. So if you think about that from a course material point of view, you would have that conceptual overview of: what is a component content management system and why do I care? And then there’s: how do I do the thing in a specific component content management system? That would probably be unique, but the conceptual overview would probably be the same. So we might have two or five or 15 different courses, one for each CCMS, but you could see where the conceptual stuff would overlap.

AB: Exactly.

SO: Okay. Everyone, I hope this glimpse into content operations for structured learning content was useful. Of course, the learningDITA.com site is much smaller than what we typically do with our customers at scale, but we are getting more and more requests for learning content and structured content options for learning content. If you’re interested in learning more about learningDITA.com, I’d suggest you go there and check it out. Check out the course bundle, which has eight or nine courses on DITA topics, from “what is structured authoring” all the way to “tell me about the learning and training specialization.” Allison, thank you so much for all your input.

AB: Thank you.

SO: And we’ll see you on the next one.

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post LearningDITA: DITA-based structured learning content in action appeared first on Scriptorium.
 
In this episode, Alan Pringle, Bill Swallow, and Christine Cuellar explore how structured learning content supports the learning experience. They also discuss the similarities and differences between structured content for learning content and technical (techcomm) content. Even if you are significantly reusing your learning content, you're not just putting the same text everywhere. You can add personalization layers to the content and tailor certain parts of the content that are specific to your audience's needs. If you were in a copy-and-paste scenario, you'd have to manually update it every single time you want to make a change. That scenario also makes it a lot more difficult to update content as you modify it for specific audiences over time, because you may not find everywhere a piece of information has been used and modified when you need to update it. — Bill Swallow Related links: Structured authoring and XML (white paper), which is also included in our book, Content Transformation Confronting the horror of modernizing content The challenges of structured learning content (podcast) Self-paced, online DITA training with LearningDITA.com Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter LinkedIn: Alan Pringle Bill Swallow Christine Cuellar Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren't relearning how to engage with your content in every context you produce it. Sarah O'Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you're going to be much better off. End of introduction Christine Cuellar: Hey, everybody, and welcome to today's show. I'm Christine Cuellar, and with me today I have Alan Pringle and Bill Swallow. Alan and Bill, thanks for being here. Alan Pringle: Sure. Hello, everybody. Bill Swallow: Hey, there. CC: Today, Alan, Bill, and I are going to be talking about structured content for learning content. Before we get too far in the weeds, let's kick it off with an intro question. Alan, what is structured content? AP: Structured content is a content workflow that lets you define and enforce consistent organization of your information. Let's give a quick example in the learning space. For example, you could say that all learning overviews contain information about the audience for that content, the duration, prerequisites, and the learning objectives for that lesson or learning module. And by the way, that structure that I just mentioned … It actually comes from a structured content standard called the Darwin Information Typing Architecture, DITA for short. That is an open-source standard that has a set of elements that are expressly for learning content, including lessons and assessments. And I think it's also worth noting, another big part of the whole idea of structured content is that you are creating content in a format agnostic way. You are not formatting your content specifically for, let's say, a study guide, a lesson that's in a learning management system, or even a slide deck.
Instead, what a content creator or instructional designer does … They are going to develop content that follows the predefined structure, and then an automated publishing process is going to apply the correct kind of formatting depending on how you're delivering the content. That way, as a content creator or instructional designer, you're not having to copy and paste your learning content into a bunch of different tools. And I know for a fact a lot of instructional designers are doing that right now. Instead of doing all that copying and pasting, you write it one time, and then you say, "I want to deliver it for these different delivery targets, whether it's for online purposes, whether it's for in-person training or maybe a combination of both." You set up publishing processes to apply the formatting for whatever your delivery targets are so you, as a human being, don't have to mess with that. CC: Which is awesome. Part of the reason that we're talking about this today is that structured content has been a part of the techcomm world for over 30 years, for a really long time, and now we're starting to see it make inroads in the learning and development space. We've been doing a lot of work with structured content in the learning space, but how is it different from the techcomm space? And Bill, I'm going to kick this over to you for that. BS: I think I'm going to take a higher-level view on this because there is a lot of overlap between techcomm and learning content. Where they really start to diverge is in delivery. Techcomm is pretty uniform in how it delivers content to people. There's personalization involved and so forth, but essentially everyone's getting the same thing. The experience is going to be the same. Everyone's going to get a manual. Everyone's going to get online help. Everyone's going to get a web resource, what have you. It might be tailored to their specific needs, but it's a pretty standard delivery experience. For training, the focus is on the learning experience itself, and it's usually tailored to a very specific need, whether it's a very specific type of audience that needs information, or it's very specific information that needs to be delivered in a very specific way for those people. Beyond that, we start looking at the content itself under the hood, and the information starts to, I would say, broaden with learning content because it can consume all the different types of information you have with technical content. And generally in a structured world, we think of that as conceptual information, how-to information, and reference information, for the most part. With learning content, now you have a completely new set of content in addition to that where you have learning objectives. You have assessments. You have overviews, reviews, all sorts of different content that essentially expands on the wealth of information you have from your technical resources. CC: That's great. Typically, the arguments for structured content, and the reason it's really valuable for organizations, are that it introduces consistency in your content, consistency for your brand across wherever you're delivering content. It also helps you build some scalable content processes, that kind of thing. What are some of the arguments for structured content for the learning environment specifically, if there are any new ones? AP: Some of the reasons that you want to do structured content for learning content are really similar to those for other types of content. We've already talked about one of them.
I touched on this earlier in regard to automated formatting. You are not having to do all of the work as a human being, applying formatting to however many delivery formats you have. That is a huge win. And especially in the training space, I have seen so many organizations copying content from one platform to another because the platforms don't play well together, so you've got multiple versions of what should be the same exact content to maintain. That is another huge reason to consider structure. You want a single source of truth for your content regardless of where that information is being delivered, because if you're looking at the overall quality and excellence of the learning experience, telling learners slightly different things in different places in your content means you are not providing an optimal learning experience. Therefore, having that single source of truth for a particular bit of information gives your learners a consistent piece of information regardless of what channel they consume it from. That's a really important win for a solid, dependable learning experience. CC: Gotcha. No, that definitely makes sense. It sounds like it would take some of the effort off of the subject-matter experts who are creating these trainings so that they can … They, I'm assuming, would rather focus on the work of helping train people. Getting some of the manual formatting and copy and pasting off of their workload sounds pretty nice. What are the complications that it might introduce or the change management issues that might need to be tackled when you're bringing structured content into a learning environment? AP: That's true anytime you bring in structure. When people are used to working in an environment where you are doing manual formatting, and you're seeing what things look like as you develop the content, the idea of developing content in a format agnostic way, where you're not thinking about what a slide looks like or how an assessment is going to work in the learning management system, can be a tough adjustment. It's very easy to get focused on the delivery angle because you want it to be good, and you want it to be done in a way that makes that learning experience useful for the people who are trying to learn whatever it is they're trying to learn. You don't want those impediments of bad formatting or a not-great way that your assessments behave in your learning management system, but you kind of get to offload all of those concerns, which are very valid. I'm not saying they're not valid. They are, but you want an automated process. Basically, you want computers to do that work for you. You want programming to apply that formatting so you can really focus on getting that information as solid as it can be, and you let technology handle the rest. You do set up the standards for how you deliver that content, whether it's in print, online, in person, whatever. However you're delivering your learning and training content, you set the standards. "This is how I need this to behave. This is how I need it to look.
This is how I need it to interact." Once you set those standards, then you turn around and have someone who has this programmatic skill set, like we do at Scriptorium, come in and develop the transformations that take your content and deliver it in the ways you need it delivered so you, as, like you were saying, the subject-matter expert, the instructional designer, or whatever content creator we're talking about here … You are not doing that for every single delivery type that you are putting out for your learners. BS: And it's not to say that the experience isn't tailored, because it still can be tailored. Even if you are significantly reusing your content, you're not just taking the same text everywhere. You can add personalization layers to that content and tailor certain parts of the content specific to what that specific audience needs rather than having to retype it all every single time you want to make a change if you're in a copy-paste scenario. And that also would make it a lot more difficult to update all that content as you modify it for specific audiences over time because you may not find everywhere a piece of information has been used and modified if you need to update it. It does take a little bit of … Well, it takes a lot of the work off of those developing the content because they don't have to worry about exactly what it looks like for every single target that they're producing. It does require a little bit of, I would say, faith in the system that it will work. It really comes down to how you're architecting this in the first place to make sure you understand who your varied audiences are, what the look and feel needs to be, what the delivery points are, and making sure that you are authoring within the scope of those things. And once you get that down, as Alan mentioned, it becomes a push-button operation to produce all of your various outputs. AP: I think, too, from a change management point of view, one thing that I have heard from lots of content creators in the learning space is the burden they have, for example, if a program or the company changes names, changes logos, changes branding. If you have that built into the formatting in a way where you're having to go into, say, a bunch of Microsoft Word or PowerPoint files and manually change those out, and I am sure I am talking to people out there in the ether who know exactly what I'm talking about, it is extremely painful. And when you have automated the application of formatting, what you can do is change those processes to update them to include the latest corporate colors, the latest taglines, the latest fonts, the latest logos, whatever has changed, so you, as a human being, again, do not have to go in there and touch all of those files yourself, because that is a burden you don't need when you're trying to, quote, do your real work, which is helping people learn, not applying formatting to a zillion Microsoft Word documents. Nobody wants to do that, at least nobody I know anyway. CC: No. That's a very good example of how the structure can just take that part of the workload off of you so you can get to focus on what you want to do. But I like, Bill, how you put it that you have to trust the process, because it is an adjustment to go from authoring your content in a specific PowerPoint or in a specific Word doc to authoring it in a way that it can be reused.
But ultimately what I'm hearing both of you say is that, even though it's a valid concern that you might worry about your ability to personalize and your ability to control the user experience, once structured content is implemented correctly, and everyone has adjusted to the system, it sounds to me like you're saying that your opportunities for personalizing at scale are actually going to be bigger than when everyone's doing it individually, and at least it introduces consistency across those personalized experiences. Do you think that's fair to say, either of you? Do you think that's a fair statement, or is that too optimistic- AP: That is an incredibly loaded question, the only answer to which is … No, you are correct. That is, structure does enable all the things that you just asked about in that very leading, but good, question. CC: It is very leading. BS: It removes the visual context of where the content is going, but it doesn't remove … In fact, it enhances the context of what the content is about. AP: Right. CC: That's a good way to say it. I like that. Looking at structured content within the learning space itself, how does it … I know, Bill, you had mentioned that, within the techcomm space, it's fairly uniform in how content is delivered and who it's delivered to. Not that it's always the same. How about in the learning space? How does that vary? And how does the structure approach vary? BS: Well, this might contradict what I said before, but it's a slightly different look on it in that, really, the learning clients that we've had … They kind of mirror a lot of the techcomm clients we had in that everyone is producing roughly … If you look at it from a high enough altitude, it all looks the same. They're all producing manuals. They're all producing e-learning. They're all producing whatever. When you get down into the nuts and bolts, that's when you start finding that every single implementation is going to look a little bit different. In techcomm, you might have completely different types of content that you need to be able to handle. The same is true in the learning space. Every single group is going to have different needs, and they're going to have very specialized needs based on the content that they're producing and who they're producing it for. Unlike techcomm, where they've basically been going down the structured path for 20, 30 years, the learning space has really been a sea of black boxes where every single system has its own way of doing things. It does about 90%, 95% of the same stuff that every other system out there does, but there is something special, something canned, something within the system that allows it to do the one thing that no other system does. And all of these technologies historically have really been locked down tight, where your content goes in, and it lives and thrives in that box that you're developing it in. But if you need to take that content out and change systems and put it somewhere else, there's a lot of rework that potentially needs to be done depending on how customized that system you were using was. And let's face it. You can structure content. You can centralize it. You can componentize it all you want. It's not going to change the fact that learning content is going to have these many varied endpoints for how it's being delivered.
Even though you are consolidating and structuring in a central repository to maximize your reuse, to not worry about the formatting, you may still have three or four different learning management systems that you are pushing that content into. Each one of those systems has different requirements: the type of content that gets consumed, what it does, how it reacts, what it expects, the order it needs that information in per lesson, per page. It gets a little more complicated in the delivery of the learning content because we need to be able to tailor not only to the needs of the particular client and the content that they're producing but also to the needs of the systems that need to ingest it. AP: One other thing I would mention here is the level of interactivity, I think, is higher with learning and training content than in the techcomm world. Now, I realize there are documentation portals and things like that that do provide some levels of interactivity. However, I think you are going to see much more of that kind of thing on the learning and training side, especially in regard to assessments, when you are trying to have people do little, basically, mini exercises to prove that they have learned what they need to learn, and then they are graded and those scores are recorded. That is the kind of thing you don't see in techcomm. That is a whole, very specific thing to the learning and training world. Therefore, the structure that you choose needs to accommodate that, and your delivery targets in particular need to accommodate that very high level of interactivity with, for example, like Bill was saying, a learning management system. BS: You have quite a variety of needs out there, from basic true/false, multiple choice, or matching questions all the way to simulations, interactive exercises, and so forth, all within a learning management system. And you need to be able to account for that. And as I mentioned, not all of those systems function in the exact same way, so it needs to be tailored. CC: For any listeners that are listening to this episode right now, and they are in the learning content space, and they're interested in getting started with structured content, Alan, where would you recommend they start? AP: Well, our website, scriptorium.com, has lots of resources. Very self-serving, I know, very self-serving. We will put them in the show notes so you can get to them. We also are the creator and maintainer of a site called learningdita.com that teaches people about one way to do structured content, which is DITA, which I mentioned earlier in the show. And there is a free Introduction to DITA course that you can take. Between some links that we'll include in the show notes in regard to what is structured content, how it applies to the learning and training space, and LearningDITA, those are all good starting points for people who are considering going on the structured content journey for their learning content. CC: That's great. And the only thing I'll add to that is that, if you're interested in learning more about learning content and structured content, this is something that we talk about a lot. I would recommend also subscribing to our Illuminations newsletter, which, like Alan said, is also going to be linked in the show notes. But every month, we send out a recap of the topics we talked about, and learning content is very often in there because we talk about it a lot. This final question is for both of you.
Is there anything else that you want to leave our listeners with about structured content in the learning content space before we wrap up today? BS: I'd say, if you're looking at structured content, it's not going to be a savior solution on its face. But with enough thought, it can really make a difference in your content development workflow, and it can save you a lot of time in producing content that is targeted to very specific people and delivery points. AP: For me, my final suggestion here is think about your pain points. What are the things that are keeping you up at night as you develop your learning and training content? What are the continual issues you are battling, especially your content creators? What are they battling? Is it that they're having to format for umpteen different platforms? Is it that they're needing to personalize things for different locations? For different levels of service that you're training people on? What are the things that are causing you problems? Basically, compile a list of those. And then from there, figure out, could structured content solve any of these problems? Don't put the cart before the horse, is the best way to put it, really. Think about your pain points in your processes and then see if structure might be the thing to solve them. CC: That's great. And on that, Alan, Bill, thank you very much for being here and recording this with me today. BS: Thank you. AP: Absolutely. We like to talk about this stuff probably too much. CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter.
The post The benefits of structured content for learning & development content appeared first on Scriptorium.
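Earlier in this episode, Alan describes a structure in which every learning overview carries the audience, duration, prerequisites, and learning objectives. Below is a rough sketch of what that can look like in DITA's learning and training specialization. The element names come from that specialization, but the course title and text are invented for this example, and exact content models vary by DITA version, so treat it as illustrative rather than authoritative.

    <!-- A minimal learningOverview topic (DITA learning and training specialization) -->
    <learningOverview id="maps-lesson-overview">
      <title>Working with DITA maps</title>
      <learningOverviewbody>
        <lcIntro>This lesson introduces DITA maps and how they organize topics.</lcIntro>
        <lcAudience>Content creators who are new to structured authoring.</lcAudience>
        <lcDuration>About 30 minutes.</lcDuration>
        <lcPrereqs>Complete the introductory authoring lesson first.</lcPrereqs>
        <lcObjectives>
          <lcObjectivesGroup>
            <lcObjective>Create a map that references a set of topics.</lcObjective>
            <lcObjective>Control topic order and hierarchy in the map.</lcObjective>
          </lcObjectivesGroup>
        </lcObjectives>
      </learningOverviewbody>
    </learningOverview>

Because the source is format agnostic, the same overview can feed a study guide, an LMS lesson, or a slide deck through different publishing transforms, which is the separation of content and formatting the episode describes.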
 
In this episode, Alan Pringle, Gretyl Kinsey, and Allison Beatty discuss LearningDITA, a hub for training on the Darwin Information Typing Architecture (DITA). They dive into the story behind LearningDITA, explore our course topics, and share an exclusive coupon code for our podcast listeners. Gretyl Kinsey: Over time that user base grew and grew. And now it boggles my mind that it got all the way up to 16,000 users. I never expected it to grow to that size. Alan Pringle: Well, we didn't really either, nor did our infrastructure. Because as of late 2024, things started to go a little sideways, and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn't performing well. So we made a decision that we needed to take the site offline, and we did, to basically redo it on a new platform. Related links: As a thank-you to our podcast listeners, you can get 25% off our nine-course bundle! Click here for our DITA 1.3 course bundle and use the coupon code LDPODCAST during checkout. Open-source DITA training project GitHub files LinkedIn: Alan Pringle Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren't relearning how to engage with your content in every context you produce it. Sarah O'Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you're going to be much better off. End of introduction Alan Pringle: Hey, everyone, I am Alan Pringle, and today I am here with Gretyl Kinsey and Allison Beatty. Say hello, you two. Gretyl Kinsey: Hello. Allison Beatty: Hello. AP: We are together here today because we want to talk about LearningDITA, our e-learning site for the DITA specification, because we have just moved it to a new platform. So we want to give you a little background on what went on with that decision. So first of all, Gretyl, you and I were at Scriptorium when we kicked off this site, and I just went back and looked at blog posts. We announced it via a blog post I wrote in July of 2015. So we have had this site up and running for 10 years, which absolutely blows my mind. GK: It blows my mind too. It's hard to believe that it's been that long because it does seem like it got launched pretty recently in my memory, but it has been through a lot of changes and so has the entire landscape of content creation as well. So yeah, it's really cool that now we can look back and say it has been 10 years of LearningDITA being on the web. AP: For those who may not be familiar with the site, give us a little summary of what it is. GK: Sure. So LearningDITA is a training resource on DITA XML, and it's developed by Scriptorium, and it covers a lot of the main fundamentals of DITA. So we have some courses on basic authoring and publishing. We also have a couple of courses on reuse and one course on the DITA learning and training specialization. So you get a good overview of a lot of different areas of DITA XML. And all of the courses are self-guided e-learning. So you can go through and take them at your own pace.
You can go back and take the courses again if you want a memory refresher. And they all come with a lot of examples and exercises. So you get a download of sample files that you can work your way through. There's some of that practice that's guided, and then there's others that you do on your own. And then there are also assessments throughout each course that help you test your knowledge. So you get a really nice hands-on approach to LearningDITA. So that's why we called the site that in the first place. And it really helps to get those basics, those fundamentals in place if you are coming at it as a beginner who is unfamiliar with DITA or maybe you have some familiarity, but you want to just reinforce what you know. AP: So we went along with this site and kept adding courses over the years. I think we got to nine, is that right? I think it's nine. GK: That's right. So we really started this out, like I was mentioning earlier, because we needed something that was beginner-friendly, something for people who were unfamiliar with DITA, because we saw a gap in the information that was available at the time 10 years ago. A lot of the DITA resources, documentation, guides and things like that out there were something that assumed some prior knowledge or prior expertise, and there wasn't really anything that filled that gap. So we came up with these courses. And the nine courses that we have, the first one is just an introduction to DITA. So that was the first one that launched back in July of 2015. And then shortly after that, we added a few courses on topic authoring. So that covers the main topic types: concept, task, reference, and glossary entry. And then we just added more courses over time. So we've got one that covers the use of maps and book maps. We've got one that covers publishing basics. We have, like I mentioned, the two courses on reuse. So there's a more introductory basic reuse course and then a more advanced reuse course, and then learning and training. So those are the nine courses that we have, and they've been up there pretty much the entire time. The earliest ones were the introduction and the authoring courses, and then we added the others as the demand increased over time. AP: And that demand, I'm glad you mentioned that, really did increase, because as of late 2024, we had over 16,000 students in the database for LearningDITA, which also completely blows my mind. GK: Yeah, it does for me too, because I think in the early days we saw a lot more individuals using it, and then over time we would see more large groups of users sign up. So an entire class whose professor might've recommended taking the LearningDITA courses or sometimes an organization, whether it was one of our clients or just another organization, would have a lot of employees sign up all at once. And so yeah, over time that user base grew and grew. And now it does boggle my mind as well that it got all the way up to 16,000 users. I never expected it to grow to that size. AP: Well, we didn't really either, nor did our infrastructure. Because as of late last year, things started to go a little sideways and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn't performing well. So we made a decision that we needed to take the site offline, and we did, to basically redo it on a new platform.
And Allison, this is where I want you to come in, because you are one of the, shall we say, victims on the Scriptorium side who got to dive into what our requirements were, what we needed to do. Essentially, I mean, we really became consultants for ourselves and turned our consultant eye on our own problem to figure out what it was. And Allison, if you don't mind, tell us a little bit about that process and where we landed. AB: Yeah, so the platform was the first big choice that we knew we had to make, and things started out pretty fuzzy because we didn't really know what we were doing and just had to figure out what was going to work to solve these pain points. And so as a starting place, we knew we needed a new LMS, learning management system. And so we did some research on what learning management systems were out there and thought about what we could use that would fit our needs. And we ended up choosing Moodle, which is an open source LMS that is very widely used within colleges and universities and higher education settings. And we knew it could be very powerful and probably suit our needs with some custom work. But the thing about Moodle is it's known for having a high barrier to entry in terms of the installation, and that made us a little nervous. But the more we kept looking at LMS options, both open source and commercial, we realized that Moodle is so popular, and almost industry standard, for a reason, and that it was worth taking on that challenge. AP: And I even asked someone in the learning space for advice: what LMS would you use? She pretty much said run away from Moodle, for a lot of the reasons that you just mentioned. But I think it's worth noting, it does have… There are a lot of people using it, especially in educational settings, schools, universities. Also, the open source angle was appealing because that way it didn't look like we were picking "favorites" by picking a particular proprietary LMS. AB: Yeah, definitely. And then the other piece of the puzzle there, as far as how we're going to display and host the learning content, was the DITA transform for the content itself and how we were going to get the LearningDITA content into our LMS. And so we knew that Moodle is compatible with both SCORM and xAPI, and we ended up deciding that we wanted to develop a DITA to SCORM transform, because SCORM is something that we have discussed and worked on with other clients as we've been seeing this trend in learning and training content pick up. I don't know if Gretyl wants to talk a little bit about how she's seen SCORM throughout various projects and why we decided it was something we wanted to pursue and learn more about ourselves. AP: And what is it while you're at it? That too. AB: That's a good question. I'll just go ahead and talk a little about what it is without getting too deep technically. Basically, it's a standard for e-learning content, and it provides communication that can do things like track grades within your LMS. In LearningDITA, on both the previous site and the current site, you have to pass assessments to get to the next lesson. And so SCORM can handle things like tracking assessment completion and scores. It's pretty flexible and widely used. It's more or less just a standard, but it requires a pretty specific data structure to function, because it's expecting certain data structures that are defined in the standard for it to work in different environments.
And Gretyl, would you like to talk a little bit about how we've seen the SCORM standard pop up through various client projects? GK: Sure. So we have seen, I think especially over these last 10 years since LearningDITA launched, an increase or a bit of an uptick in clients who come to us with e-learning content specifically. Some of them, that's the only content they have. For others, they are trying to get some sort of a process for developing both e-learning content and then other kinds, like technical documentation and marketing content. But a lot of them end up going down this path where they realize DITA XML is going to be helpful for content creation, especially if they do have that cross-department collaboration or reuse that needs to happen. And SCORM has been something that we've seen crop up with a lot of these projects. Because like you mentioned, Allison, it offers all that flexibility around things like scoring the assessments and keeping the student data that's needed. And we've also seen how it's really good when you've got an organization that has to deliver e-learning content to multiple different LMSs. So let's say they've got students in a lot of different geographical areas or different industries and they all use different LMSs. That SCORM package can be delivered into all of them and used. And so they get that flexibility. So we've seen this crop up in a lot of different client projects. And the more we saw it pop up in these different projects, the more we said this might be beneficial for us too. And we've seen all the different ways that these organizations have made use of SCORM packages, so why not give it a try for our LearningDITA content. Which, by the way, I just wanted to mention, I don't think we explicitly said this, but all of the LearningDITA courses themselves are authored in DITA XML. So kind of a meta layer there to think about. But because of that, we have to think about how we are going to publish this information, get these e-learning courses out onto the web. And so a DITA to SCORM transform, as Allison said, is the approach that we decided on. AP: And those source files, by the way, are part of this open source project that's out in GitHub. And we'll put some links in the show notes about it. But you can look at the source files that we used and download them for free. They're open source. You can look at them and even use them for your own purposes if you like. GK: And one question I had there, so you mentioned that all of those files are free and LearningDITA itself, the website, the platform, has always been free, but now we are introducing a new pricing model. And so Alan, I wanted to ask you about that, how that change came about, why we made that decision to go from an entirely free resource to something with a new pricing model? AP: Yeah, that's a hard one, and it was not a fun discussion. It wasn't. But basically, considering we've got 10 years of work invested in this, we had hundreds of hours invested in developing and maintaining the site and all the courses. We also have hosting costs involved. So it got to the point where, especially with those 16,000 students, things were just not sustainable. And the tech model, the tech stack, was not working anymore. So we knew we had to do something and invest more time into the platform or frankly abandon it.
And when you look at the choices, it was either completely shut down the site and get rid of that resource, or decide to charge very small amounts. Most courses are going to be $15; that's going to be the entry price point. The intro course will always be free. That was the decision that we made. And there will be coupon codes. There will be discounts for bundles and other things. So we realize we are changing from the free model. We wish we didn't have to do it. But looking at the reality of the time that we've invested in it and what it takes to keep it running in the future, that was a decision that we made to keep this running for the long haul. GK: And I think, like we've said, we've seen so many changes in the content space and the industry itself over these years, and evolving while keeping track of the value that we add by having this resource makes it sensible to go to that pricing model. AP: And I want to talk a little more about the Moodle part of this equation, because the way that it works is different than what we had before. And I think it's worth noting the user experience is a little different. Because when you open up a course, it essentially opens up in a SCORM package viewer. Allison, could you talk just a little bit about how that experience is different? AB: Yeah. So something that we noticed about Moodle is that it's a very low-code, no-code type of platform. And so part of that SCORM decision was we wanted to be able to single source the content that lives in that repo, or repository. We didn't want to manually insert all that content. And so the way that SCORM ends up interacting with the Moodle site is that instead of having the content baked into webpages, it launches the equivalent of an iframe: a second window where you take the course. And then when you close out that window, it ends your session. So don't freak out if a second window pops up when you go to take your course. That's the way that it is designed to work with the SCORM transform. AP: And then Moodle records your activity, how well you've done with the quizzes, and all of that kind of information. AB: And on the technical back end, all of that grade recording and assessment tracking is something that is handled because of the SCORM transform and how we built the Moodle site. AP: And I think it is time for us to mention the people who really helped build that Moodle transform. Let's call them out by name. Thank you to Jake Campbell, Simon Bate, and Melissa Kershes. Thanks to all of them for getting in there and helping us get that done. GK: And I can just say, after doing a lot of end user testing to make sure this works, I actually think it is easier to keep track of where you are than it was in our previous platform. I like that it pops things out into a new window. It really helps guide you along as you go through each part of the course. And it pops up with notifications about saving your progress if you need to stop and start a course at any point. And it does make it very clear where you are in the course and whether you have passed those assessments. And so the entire package does work really well. I think it's really intuitive as an end user. And hopefully for all of you who go and take the courses on the new platform, you will see the same thing. AP: I think it's worth mentioning too, moving to this new platform is going to give us opportunities to do more things in the future. We will be adding new content, especially as the DITA 2.0 standard comes out.
So when that is released by the committee that controls the standard, we will do some updates to our courses. And I think we're going to maybe do some microlearning, perhaps some live e-learning. We've got lots of choices here, so stay tuned for that. CC: Hey listeners, this is Christine from Scriptorium, and I wanted to jump in to say that we've added a coupon code for our LearningDITA nine-course bundle, which is LDPODCAST. All one word, LDPODCAST, and that gets you 25% off our nine-course bundle. Any time you find this podcast, you can head over to LearningDITA and check it out. There will be instructions there on how to purchase your course bundle. So if you're interested in getting started with LearningDITA, that's a way to get you started, and thanks for listening to the show! And with that, Allison and Gretyl, I want to thank you very much for your work on the site and for talking with us today. GK: Absolutely. Thank you. AB: Thank you. The post LearningDITA: What's new and how it enhances your learning experience appeared first on Scriptorium.
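As a small illustration of the single sourcing Allison describes, here is a hedged sketch of DITA content reuse by reference (conref). The file names and IDs are hypothetical rather than taken from the actual LearningDITA source, but the mechanism is standard DITA: a paragraph lives in one topic, every lesson that needs it pulls it in by reference, and the next SCORM build picks up any edit automatically.

    <!-- shared.dita: one topic that holds reusable course text -->
    <topic id="shared">
      <title>Shared course text</title>
      <body>
        <p id="cert-note">Completing this course does not grant a formal certification.</p>
      </body>
    </topic>

    <!-- In any lesson topic, reference the paragraph instead of pasting a copy -->
    <p conref="shared.dita#shared/cert-note"/>

An edit to the shared paragraph then flows into every lesson the next time the SCORM packages are generated and re-uploaded, instead of requiring page-by-page edits in Moodle.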
 
In our last episode, you learned how a taxonomy helps you simplify search, create consistency, and deliver personalized learning experiences at scale. In part two of this two-part series, Gretyl Kinsey and Allison Beatty discuss how to start developing your futureproof taxonomy, from assessing your content needs to lessons learned from past projects. Gretyl Kinsey: The ultimate end goal of a taxonomy is to make information easier to find, particularly for your user base because that's who you're creating this content for. With learning material, the learner is who you're creating your courses for. Make sure to keep that end goal in mind when you're building your taxonomy. Related links: Taxonomy: Simplify search, create consistency, and more (podcast, part 1) The challenges of structured learning content (podcast) DITA and learning content Metadata and taxonomy in your spice rack Transform L&D experiences at scale with structured learning content LinkedIn: Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren't relearning how to engage with your content in every context you produce it. Sarah O'Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you're going to be much better off. End of introduction Allison Beatty: I am Allison Beatty. Gretyl Kinsey: I'm Gretyl Kinsey. AB: And in this episode, Gretyl and I continue our discussion about taxonomy. GK: This is part two of a two-part podcast. AB: So if you don't have a taxonomy for your learning content, but you know you need one, what are some things to keep in mind about developing one? GK: Yeah, so there are all kinds of interesting lessons we've learned along the way from working with organizations who don't have a taxonomy and need one. And I want to talk about some of the high-level things to keep in mind, and then we can dive in and think about some examples there. One thing I also want to just say upfront is that it is very common for learning content in particular to be developed in unstructured environments and tools like Microsoft Word or Excel. It's also really common, if you are working within a learning management system or LMS, for there to be a lack of overall consistency, because the trade-off there is you want flexibility, right? You want to be able to design your courses in whatever way is best suited for that specific subject or that set of material. But that's where you do have that trade-off between how consistent the information and its organization are versus how flexible it is to give your instructional designers that maximum creativity. And so when you've got those kinds of considerations, that can make the information harder for your students, and even for your content creators, to find or to use.
So we've seen organizations where they've said, "We've got all of our learning materials stuck in hundreds of different Word files or spreadsheets, or sometimes different LMSs, or sometimes different areas in the same LMS." And when they have all of those contributors, like we talked about with multiple authors contributing, or sometimes lots and lots of subject matter experts contributing part-time, that really creates these siloed environments where you've got different little pieces of learning material all over the place and no one overarching organizational system. And so that's typically the driving point where that organization will say, "We don't have a taxonomy. We know that we need one." But I think that is the first consideration: if you don't have one and you know you need one, the first question to ask is why? Because so often it is those pain points that I mentioned, that lack of one cohesive system, one cohesive organization for your content, and sometimes also one cohesive repository or storage mechanism. So that's typically where you'll have an organization saying, "We don't have a good way to connect all of our content and have that interoperability that you were talking about earlier, and we need some kind of a taxonomy so that even if we do still have it created in a whole bunch of different ways by a bunch of different people, when it gets served to the students who are going to be taking these courses, it's consistent, it's well-organized, it's easy for people to find what they need." So I think that's the first consideration: if you've got that demand for taxonomy developing, think about where that's coming from and then use that as the starting point to actually create your taxonomy. And then I think one other thing that can help is to think about how your content is created. So if you do have those disparate environments or you've got a lot of unstructured material, then take that into account and think about building a taxonomy in a way that's going to benefit rather than hinder your creation process. And that is especially important the more people you have contributing to your learning material. It's really helpful to try to gather information and metrics from all of your authors and contributors, as well as from your learners. So any kind of feedback form, if you've got some kind of an e-learning or training website, where you can assess what your learners tell you about the experience, what was good or bad, what was difficult, or what would make their lives easier, that's really great information for you to have. But also from your contributors, your authors, your subject matter experts, your instructional designers: if they have a way to collect feedback or information on a regular basis that will help enhance the next round of course design, then all of that can contribute to taxonomy creation as well. When you start building a taxonomy from the ground up, you can look at all the metrics that you've been collecting and say, "Here's what people are searching for. We should make sure that we have some categories that reflect that. Here are difficulties that our authors are encountering with being able to find certain information and keep it up to date or with being able to associate things with learning objectives. So let's build out categories for that." So really make sure that you use those metrics. And if you're not collecting them already, it's never too late to start.
I think the biggest thing to keep in mind also is to plan ahead very carefully and to make sure that you're thinking about the future, that you're doing futureproofing, before you actually build and implement your taxonomy. And I know we both can probably speak to examples of how that's been done well versus not so well. AB: Yeah, maintenance is so important. GK: Yeah, and I think the more that you think about it upfront, before you ever build or put a taxonomy in place, the easier that maintenance is going to be, right? Because we've seen a lot of situations where an organization will just start with a taxonomy, but maybe it's not broad enough. So maybe it only starts in one department. Like they have it for just the technical docs, but they don't have it for the learning material. And then down the road it's a lot more difficult to go in and have to rework that taxonomy for new information that came out of the learning department. If they had had that upfront, it could have served both training and technical docs at the same time. So thinking about that and doing that planning is one of the best ways to avoid having to do rework on a taxonomy. AB: And I'm glad you brought up the gathering of feedback and insight from users before diving into building out a taxonomy. Because at the end of the day, you want it to be usable to the people who need that classification system. That is the most important part. GK: Yeah, that's absolutely the end goal. AB: Usability. GK: Yeah, and I think a big part of that, like I've mentioned, planning ahead carefully and futureproofing, is looking at metrics that you've gathered over time, because that can help you to see whether something in those metrics or in that feedback is a one-off fluke or whether it's an ongoing, persistent trend or something that you need to always take into consideration from your end users. If you've got a lot of people saying the same things, a lot of people using the same search terms over time, that can really help you with your planning. And yeah, like you said, I think the ultimate end goal of a taxonomy is to make information easier to find, in particular for your user base, because that's who you're creating this content for. And with learning material, that's who you're creating your courses for. So you want to make sure that when you're building that taxonomy, that end goal is something you always keep in mind. How can we make this content easier for people to find and to use? AB: Definitely. Something else that I am curious to get your take on is in this planning stage. So in my experience, I feel like there's never nothing to start with. Even if there aren't any formalized standards or anything around classification of content, there's like a colloquial system, right? GK: Yes, very much so. AB: Of how content creators or users think about and organize content, even if they're not necessarily using a taxonomy. GK: Yeah. A lot of times it's very similar to what we just talked about with content structure itself: if you're in something like Microsoft Word or unstructured FrameMaker, even if there's not an underlying structure, a set of tags under that content, there is still an implied structure. You can still look at something like a Word document and say, "Okay, it's got headings at these various levels. It's got paragraphs. It's got notes," and you can glean a structure from that even though that structure does not exist in a designated form, right? So taxonomy is the same way.
You’ve got people using information and categorizing information, even if they don’t have formal categories or a written-down or tagged taxonomy structure. There’s always still some way that people are organizing that material so that they can find it as authors, or so that their end users can find it as the audience. And so that’s also a really good place to draw from. If you don’t have that formal taxonomy in place, you do still have an implied taxonomy somewhere. And so that’s where, going back to what you said about gathering the metrics, that’s a lot of times how you can find it and start to root it out if you are looking for that starting point of “here’s how we need to build this formal taxonomy.” So I think that’s step one: after you’ve figured out why you need to have that formal taxonomy in place and what the driving factor is behind it, then start hunting down that information about your existing implied taxonomy and how people are currently finding and categorizing information, because that will help you to at least start drafting something. And then you can further plan and refine it as you take into account the various metrics from your user base, and then gather information across all the different content-producing departments in your organization until you finally settle on what that taxonomy structure should look like. AB: I know that the word taxonomy can sound complicated and scary and all that, but you’re never really starting with the fear of a blank page. Taxonomies are everywhere and in everything, even if they’re not formalized. Think about when you go to the grocery store and you know you need ketchup: you’re going to go to the condiment aisle to find it. There’s so much organization and hierarchy in our day-to-day lives that exists already. There’s never a fear of a blank page with taxonomies. It’s just thinking of the future and being mindful that things may change and maintenance will happen. GK: Exactly. And that point you made about the grocery store shows that humans think in taxonomies, right? Humans naturally categorize things. AB: And group things. Yeah. GK: And so I think the main goal of formalizing a taxonomy is to take that out of people’s heads and actually get it into a real form that multiple people can all use together, and then that serves that ultimate end goal we talked about of making things easier for your users to find. AB: Access. Definitely. I want to talk about some lessons learned from taxonomies that you and I have worked on with clients, and I’m thinking of how you’re never starting with a blank page. I’m thinking about one project in particular where we developed a learning content model and used Bloom’s Taxonomy as a jumping-off point for this learning model. That’s another way to go about it: use the implied structure in combination with a structure that already exists and integrate that into your content model. And then on the other hand, I know we’ve also done taxonomies for learning where we’ve specialized a lot. GK: And specialization is always interesting because we see that develop out of… If you are putting out information that is very specific, so for example, if you are putting out learning material or courses around… I’ll go back to the example from earlier. Here’s how to use this specific kind of software. Here’s a class that you can take to get certified for doing this kind of an activity in this kind of software.
Then that’s when it makes sense to think about any kind of specialized structures that you might want to have that are specific to that software. And it can be the same in whatever kind of material you’re presenting. If you’re saying, “Oh, we’re in the healthcare industry. We are in the finance industry. We’re in the technology industry,” whatever your industry is, there’s going to be information specific to that industry that you probably want to capture as part of your taxonomy. Those categories are going to be specific to that industry and to the product or material that you are producing, or to the learning material, the courses that you’re creating. So that’s a really good thing to think about when it comes to taxonomy development: if we are in a very specific industry where we need that industry-specific information in the taxonomy, then it’s going to be really important to specialize. And if you’re working in DITA XML, specialization is creating custom elements out of the existing, out-of-the-box, standard ones. So whenever you think about a taxonomy that is driven by metadata in DITA XML, that’s where you might start creating some custom metadata elements and attributes that can drive your taxonomy. And the custom names for those elements and attributes would be something that you specialize to match the requirements or the demands of your industry. AB: Yeah, that’s spot on with the example I was talking about a while ago, about how the Library of Congress uses Library of Congress subject headings, but the National Library of Medicine has its own classification system for cataloging. But under the hood, they’re both Dublin Core. They’re both specialized Dublin Core. You know what I mean? GK: Yes. AB: There’s different context and then… Yeah, totally. Oh, this was the question I was going to ask you. Is there a trade-off with heavy specialization in your taxonomy? GK: I think the biggest trade-off is maintenance. We were talking earlier about how, when you’re doing that initial planning, you want to think about futureproofing and how you can make it as easy to maintain as possible. Within reason, of course, because nothing is ever easy when it comes to content development. AB: That’s true. GK: But yeah, when it comes to heavy specialization, that’s the biggest thing to consider: for any kind of specialized tagging, you have to have specialized knowledge, so people who understand the categories, who know how to build that specialization and how to maintain it. So you have to have those resources available, and you also have to think about, when you inevitably need to add or change the material, how much more difficult that is going to be with specialized tags. Maybe it’s actually going to enhance things: instead of making things more difficult, it might be a little bit easier if you are specializing, because you’ve already created custom categories before, and if you need to add one down the road, you’ve got a roadmap for that. But it really depends on your organization and the resources that you have available. And thinking specifically about learning content as well, I think one of the biggest areas where heavy specialization can be challenging is that it is typical to have so many part-time contributors and subject matter experts who are not going to be experts in the tagging system. They’re just going to be experts in the actual material that they’re contributing.
And so if they have to learn how to use those tags to a certain extent, then sometimes the more customization or specialization you do, the more difficult that can be for those contributors, and it can sometimes make it difficult to get them on board with having that taxonomy in the first place. AB: Yeah, change management. GK: So I think that’s the big trade-off. Yes, change management, maintenance, and thinking about the best balance for making sure that things are useful for your organization. That you’ve got the taxonomy in place that you need, but it’s also not going to be so difficult to maintain that it essentially fails and your authors and contributors don’t want to keep it going. AB: This is a big question, but who’s responsible for maintaining a taxonomy within an organization that develops learning content? GK: So I think there’s a difference here between who is responsible and who should be responsible. AB: Oh, that’s so true. GK: If we think about best practice, it really should be, I would say, a small team who is designated for that role, who has an administrative role so that they can be in charge of governance over that taxonomy. Because if you don’t have that, if you don’t have the best practice or the optimal situation, then what can happen instead is that either no one’s managing the taxonomy, which is obviously bad, because then it can just continue to spiral out of control, or it’s almost a too-many-cooks-in-the-kitchen situation, where if you don’t have that designated leadership or governance role over the taxonomy, and anyone can update it or make changes to it, then it loses all of its meaning, all of its consistency. I do think it’s important that it’s a small team and not one single person. Because if that person is sick or something, then you’re left high and dry. So you want to make sure it’s a small enough team that you’re not going to have the too-many-cooks problem, but it’s also not just one person. AB: Another reason that it’s not ideal to have just one person is that diversity prevents bias in your taxonomy, right? GK: Absolutely. AB: If one person has a confirmation bias about a specific facet and they document it or build something that way, but no one in the organization… You know what I mean? GK: Yeah. So that’s where that small team can provide checks and balances too. AB: Totally. GK: You can have things set up where maybe every person on that team has to approve changes that are made to the taxonomy, or when they’re initially designing it, they all give the final review and final approval on it. That way you’re not running it all through one person and whatever biases that person might carry. AB: And bias doesn’t necessarily have a negative connotation here; it’s just that people see the world differently from person to person. And by world, I do mean learning content sometimes. Is there anything else that you wanted to cover? GK: I think I just want to wrap things up with the main points that we talked about when you’re developing a taxonomy, whether it is for learning content or more broadly: plan ahead, think ahead, and do all of the planning upfront that you can, rather than just building things, so that you can avoid rework. Use the metrics and the information that you’ve gathered from both inside your organization and from your user base.
And finally, keep that end goal in mind: this is all about making things easier for people to use and to find content, so develop your taxonomy with that goal in view. AB: Yeah, I agree with all of that. Well, thanks so much for talking with me, Gretyl. GK: Of course. Thank you, Allison, for talking with me. Outro with ambient background music Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.
The post Building your futureproof taxonomy for learning content (podcast, part 2) appeared first on Scriptorium.
 
Can your learners find critical content when they need it? How do you deliver personalized learning experiences at scale? A learning content taxonomy might be your solution! In part one of this two-part series, Gretyl Kinsey and Allison Beatty share what a taxonomy is, the nuances of taxonomies for learning content, and how a taxonomy supports improved learner experiences in self-paced e-learning environments, instructor-led training, and more. Allison Beatty: I know we’ve made taxonomies through all sorts of different frames, whether it’s structuring learning content, or we’ve made product taxonomies. It’s really a very flexible and useful thing to be able to implement in your organization. Gretyl Kinsey: And it not only helps with that user experience for things like learning objectives, but it can also help your learners find the right courses to take, if you have some information in your taxonomy that’s designed to narrow it down to a learner saying, “I need to learn about this specific subject.” And that could have several layers of hierarchy to it. It could also help your learners understand what to go back and review based on the learning objectives. It can help them make some decisions around how they need to take a course. Related links: The challenges of structured learning content (podcast) DITA and learning content Metadata and taxonomy in your spice rack Transform L&D experiences at scale with structured learning content Rise of the learning content ecosystem with Phylise Banner (podcast) LinkedIn: Gretyl Kinsey Allison Beatty Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Gretyl Kinsey: Hello and welcome. I’m Gretyl Kinsey. Allison Beatty: And I’m Allison Beatty. GK: And in this episode, we’re going to be talking about taxonomy, particularly for learning content. This is part one of a two-part podcast. AB: So first things first, Gretyl, what is a taxonomy? GK: Sure. A taxonomy is essentially just a system for putting things into categories, whether that’s something concrete like physical objects or just information. A taxonomy is going to help you collect all of that into specific categories that help people find what they’re looking for. And if you’ve ever been shopping before, you have encountered a taxonomy. So I like to think about online shopping in particular to explain this, because at a broad level you’ve got categories for the type of item that you’re buying: clothing, household goods, electronics, maybe food. And then within that you also have more specific categories. So if we start with clothing, you typically will have categories for things like the type of garment, whether you are looking for shirts, pants, skirts, coats, shoes, whatever. And then you also might have categories for the size, for the color, for the material.
There are typically categories for the intended audience, so whether it’s for adults or kids, and then within that, maybe for gender. So there are all these different ways that you can sort and filter through the massive number of clothing results that you would get if you just go to a store and look at clothing. You’ve got all of these different pieces of information, these categories that come from a taxonomy, where you can narrow it down. And that typically looks like things on a website: search boxes, checkboxes, drop-down menus, and those contain the assets or the pieces of information from that taxonomy that are used to categorize that clothing. So then you can go in and check off exactly what you’re looking for and narrow down those results to the specific garment that you were trying to find. So the ability to go on a website and do all of that is supported by an underlying taxonomy. AB: So that’s an example of online shopping. I’m sure a lot of people are familiar with taxonomies in the sense of biology, but how can taxonomies be applied to content? GK: Sure. So we talk about taxonomy in terms of content for how it can be used to find the information that you need. When you think about that online shopping example, instead of looking for a physical product like clothing, with content you’re just looking for specific information. So it’s kind of like the content itself is the product. So if you are an organization that produces any kind of content, you can put a taxonomy in place so that your users can search through that content. They can sort and filter the results that they get according to those categories in your taxonomy. And that way they can narrow it down to the exact piece of information that they’re looking for instead of having to skim through a long website with a lot of pages, or, especially if you’re dealing with any kind of manuals or books or other publications that you’re delivering, instead of being forced to read through all of that rather than searching and finding exactly what they’re looking for. So some of the ways that taxonomies can help you categorize your content would be things like what type of information it is: whether it is more of a piece of technical documentation, something like a user manual or a quick start guide or a data sheet, or whether it is marketing material or training material. You could put that as one of the categories in your taxonomy. You could also put a lot of information about your intended audience. That could be things like their experience level. It could be things like the regions they live in or the languages they speak. Anything about that audience that’s going to help you serve up the content that those particular people need. It can also be things like what platform your audience uses or what platform is relevant for the material that you’re producing. It can be things like the product or product line that your content is documenting. There are all kinds of different ways that you can categorize that information. And I know that both of us have a lot of experience with putting these kinds of things together. So I don’t know if you’ve got any examples that you can think of for how you’ve seen information get categorized. AB: A lot of the way I think about taxonomies is through library classification systems, or MARC records. It’s the same as when you want to find a particular information resource, so you go to your library’s online catalog and filter down to something that fits your needs.
You can think of treating your organization’s body of content like a corpus of information that you can further refine and assign metadata values to. Or in the case of a taxonomy hierarchy, in the clothing example, choosing that you want a shirt would be a step above choosing that you want a tank top or a long-sleeve shirt or a blouse. So a lot of my mindset around taxonomies for content is framed like libraries. The Library of Congress subject headings are generally a good starting point for a library. But sometimes a library has specific information needs: the National Library of Medicine has its own subject scheme that is further specialized than the broader categories that you get in the Library of Congress subject headings, because they know that everything in that corpus is going to be health- or medicine-related information. And in the same way, you and I have developed taxonomies for clients that are particular to their needs. You’re never going to start off knowing nothing when you build a taxonomy, right? GK: Exactly. And with the example that you were talking about of looking at information in a library catalog, we see that with a lot of documentation. So if you’re thinking about technical content and things like product documentation, user guides, user manuals, we see that similar kind of functionality. If you have that content available through a website or an app or some other kind of digital online experience, back to the online shopping example, your user base can, in all of those different cases, go to those facets and filters, those checkboxes, drop-down menus, search boxes, and start narrowing down the information to exactly what they’re looking for. So having that taxonomy in place underlying the information and making it easier to narrow down really helps to enhance the user experience. I’ve also seen it be really helpful on the authoring side. So if you have a large body of content, maybe you have it in something like a content management system. And the more content you have, the harder it becomes to find the specific information that you’re looking for. In particular, we deal with a lot of DITA XML, and so there will be a component content management system that that’s typically housed in. And when you’ve got it in there, those systems typically have some kind of underlying taxonomy in place as well that can capture all kinds of information about how and when the content was created. So that can help you find it. And then of course, you could have your own taxonomy for the kinds of things I named earlier, what type of information it is, what the intended audience is, in case that can help you as the author find and narrow down something in your system. And it can also help you as an author to put together collections of content for personalized delivery. So maybe you have a general version of your user guide, but then you’ve also got audience-specific versions that you can filter and narrow down to based on the metadata in your content. And that’s all going to be informed by those categories in your taxonomy. So really leveraging any of the information that you have about your audience, about how they use your content or how they need to use your content, is going to help you deliver it in a more flexible and more efficient way as well. AB: I know for me personally, sometimes the amount of information out in the world can get very overwhelming. GK: Absolutely.
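(Editor’s note: the personalized delivery Gretyl describes here is often handled in DITA with profiling attributes plus a DITAVAL filter file applied at publish time. A minimal sketch, with hypothetical attribute values:)

<!-- In a topic: one source carries content for two audiences -->
<p audience="novice">Start with the guided setup wizard.</p>
<p audience="expert">Edit the configuration file directly.</p>

<!-- novice.ditaval, applied at publish time to produce the novice version -->
<val>
  <prop att="audience" val="novice" action="include"/>
  <prop att="audience" val="expert" action="exclude"/>
</val>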
AB: So I’m thinking about our LearningDITA e-learning project, and how much content we’ve collected between different versions of it over the amount of time it’s been up, and it makes it so much easier to navigate knowing where pieces of content are when I’m looking for something as an author on that project. March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com. GK: And that actually brings up a really good point, because we were talking about the taxonomies used in content, primarily technical content, so things like product documentation, user guides, legal, regulatory, but taxonomy can also be used for other types of content. And learning content is a really big one, and we are seeing that more and more. AB: Absolutely. GK: There’s a lot of overlap at organizations between technical documentation and learning or training material, especially if you make a product where there are certifications. So we see a lot of times, for example, with people who make software, that organization will usually have the product documentation: here’s how you use this software. But then there’s also training material, so that if there are certifications around the use of that software, then there’s that material where their user base can go take a class and essentially be students or learners in that context rather than just consumers of the product. And so there’s a lot of need to share information across the technical documentation and the learning material. And we see more and more organizations, where the learning material is kind of their main product, looking for ways to better categorize that information and have a taxonomy underneath it. And so when you mentioned LearningDITA, that got me thinking about how not only is that useful for us as the creators of LearningDITA, but for all the other organizations that also produce learning material, how much a taxonomy helps that experience, not only for them as the authors, but also for their end users. AB: It’s a win-win for users and creators. Something I would like to discuss is self-guided e-learning, and how a taxonomy can make it easier to tie assessments to learning objectives in that sort of asynchronous setting as opposed to a more traditional classroom. GK: And e-learning is really interesting, because there’s a lot of flexibility out there in terms of how you can present that information and how you can gather information from the students or the learners taking your e-learning courses. And we’ve seen different categories or taxonomies around gathering information, or putting information on your learning material, about things like the intended reading level or grade level if you’re dealing with students who are still in school. You could also put information about things like the industry, if your learner base is professionals. You can put information about the subject that you’re covering, or the type of certification associated with that material. And then, like you mentioned, learning objectives. So typically with any kind of a course that’s put out there for students to take, whether it’s e-learning or whether it’s in a classroom, there are specific learning objectives that that material is intended to cover. So whenever you as a student get to the end, it’s basically: you should be able to understand this concept or perform this activity as a result of taking this course.
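(Editor’s note: for reference, learning objectives like these have a dedicated structure in the DITA Learning and Training specialization. A minimal sketch, with hypothetical ids and wording:)

<learningOverviewbody>
  <lcObjectives>
    <lcObjectivesStem>After completing this lesson, you should be able to:</lcObjectivesStem>
    <lcObjectivesGroup>
      <lcObjective id="obj-define-taxonomy">Define what a taxonomy is and why it matters.</lcObjective>
      <lcObjective id="obj-apply-metadata">Apply consistent metadata to a piece of learning content.</lcObjective>
    </lcObjectivesGroup>
  </lcObjectives>
</learningOverviewbody>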
And we have seen a lot of demand in various different industries for tying those learning objectives to the assessment questions. So if you’re in an e-learning course, you’ve got your kind of self-guided material where you’re walking through, you’re reading, maybe you’re doing some exercises, maybe you’re watching some videos or looking at some examples. And then at the end there’s some kind of a quiz or an assessment to test your knowledge. And with e-learning, that’s typically something where you’re entering answers: maybe you’re checking boxes for multiple-choice questions, or you’re typing a response in, or you’re picking true/false, things like that. So you take that quiz, and the questions in that quiz are tied back to those learning objectives from the beginning of the lesson. That way, if you get a question wrong, it can tell you this is the specific learning objective that you missed this question for, and that you should go back and review more material that’s associated with that learning objective. And having all of that tied together so that your e-learning environment can actually serve up that information is where it can really help to have a taxonomy underneath. When you think about it, learning objectives themselves naturally fall into categories. And there are even standards, when you think about things like Bloom’s taxonomy, which is a typical standard applied to learning material. And of course you could also come up with whatever categories you want for your learning information, but those objectives are often tied directly to the categories. And then having the structure in place to tie those objectives, and the taxonomy categories they’re associated with, to your assessment questions and to the rest of your material just makes the whole experience a lot more seamless and streamlined for your learners. AB: It’s so valuable, particularly learning objectives. I’m glad you brought up Bloom’s taxonomy, because I think that’s a pretty familiar entry point to taxonomies for a lot of people who work in the learning space. And whether it’s learning content or technical documentation, I’m also thinking about any implementation of a taxonomy for a body of digital content. It sort of turtles all the way down, whether it’s a learning objective that is the value or significance being assigned to a piece of content. If you think about information theory, the basis of what a node in a taxonomy is: it’s a discrete thing. And I know it drives people crazy; thing is more or less the technical term in that situation. It sounds so vague, but the thing is a discrete object that has a purpose for why it exists, whether it’s a learning objective that’s tied as an attribute in your DITA or as a piece of metadata somewhere else, or whether it’s technical documentation telling you which product a piece of content is assigned to. I know we’ve made taxonomies through all sorts of different frames, whether it’s structuring learning content or making product taxonomies. It’s really a very flexible and useful thing to be able to implement in your organization. GK: And it not only helps with that user experience for things like learning objectives, but it can also help your learners just find the right courses to take.
So if you have some information in your taxonomy that’s designed to narrow things down, a learner can say, “I need to learn about this specific subject.” And that could, of course, have several layers of hierarchy to it. It could also help your learners to understand what to go back and review based on the learning objectives. It can help them to maybe make some decisions around how they want to take a course. So when you think about e-learning, you can have it be self-guided and asynchronous, or sometimes it could be instructor-led. And so if you’ve got something like that baked into your taxonomy, something about the method of delivery, that could help your learners decide which mechanism is going to be better for them. So all of that can be really helpful. And I also want to talk about it again from the creator side, just like we did with technical content. Because if you are designing learning material, you’re an instructional designer, you’re putting together a course, then you might want some information about things like the learner’s progress and their understanding of the material. You’re obviously going to want to capture all the information around the scoring and grading from the assessments that they take. And having that tied back to a taxonomy, whether it’s to learning objectives or to any other information, can help you to understand how you might need to adjust the material. So if you notice, for example, that you’ve got one learning objective that everyone seems to struggle to understand, and a large percentage of your students are missing the assessment questions associated with that learning objective, then maybe that tells you we need to go back and rewrite this or rework how it’s presented. So the taxonomy can not only help your learners find the information, navigate the courses, and take the courses that they need, but it can also help you to adjust the design of those courses in a way that further enhances their learning experience. AB: Absolutely. Something else that you just made me think of: say you have an environment of creating learning content with multiple authors. Another advantage of the taxonomy is that it can standardize metadata values. So say you and I, Gretyl, are working within the same learning organization. When content that’s written by either one of us goes to publish, the metadata values will be standard if we use the same taxonomy. GK: And that’s also a really important point, because that standardization is good not only across just a subset of your content, like your learning material. We’ve seen some organizations go broader and say, “Our learning content and our technical docs and our marketing material,” and whatever other content they have, all need to have a consistent set of terminology. It needs to have a consistent set of categories that people use to search it. And so you can think about taxonomy at a broader level too, for all the information across the entire company or the entire organization, and make sure that it’s all going to fit into those categories consistently. Because it is, like you said, very typical to have lots of different people contributing to content creation. And in particular, with learning content, we see a lot of subject matter experts and part-time contributors who do something else, but then they might write some assessment questions or they might write a lesson here and there.
And having that consistent categorization of information, consistent terminology, and consistent application of metadata is really, really helpful when you’ve got so many different people contributing to the content, because that helps to make sure that they’re not going to be introducing inconsistencies that confuse your end users. AB: That’s really a strength of most classification systems, whether it’s a controlled vocabulary or something more sophisticated like a taxonomy. And I’m thinking about something that you and I see a lot when working with clients on DITA XML in particular: blending technical and marketing content once DITA is implemented. Having interoperability with your taxonomy is definitely a boon to that. GK: Absolutely. I think that’s a good place to wrap up for now. We’ll be continuing this discussion in the next podcast episode. So Allison, thank you. AB: Thank you. Outro with ambient background music Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.
The post Taxonomy: Simplify search, create consistency, and more (podcast, part 1) appeared first on Scriptorium.
 
Ready to deliver consistent and personalized learning content at scale for your learners? In this episode of the Content Operations podcast, Alan Pringle and Bill Swallow share how structured content can transform your L&D content processes. They also address challenges and opportunities for creating structured learning content. There are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning development space have it the worst in that regard — the worst. — Alan Pringle Related links: The challenges of structured learning content (podcast) DITA and learning content Rise of the learning content ecosystem with Phylise Banner (podcast) Flexible learning content with the DITA Learning and Training specialization Building an effective content strategy is no small task. The latest edition of our book, Content Transformation, is your guidebook for getting started. LinkedIn: Alan Pringle Bill Swallow Transcript: Disclaimer: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction AP: Hey, everybody, I’m Alan Pringle. BS: I’m Bill Swallow. AP: And today, Bill and I want to talk about structured content in the learning and development space. I would say, over the past two years or so, we have seen significantly increased demand from organizations that want to apply structured content to their learning and development processes, and we want to share some of the things those organizations have been through and what we’ve learned over the past few months, because I suspect there are other people out there who could benefit from this information. BS: Oh, absolutely. AP: So let’s talk about, really, the drivers: what’s driving content creators in the learning and development space to structured content? One of them off the bat is so much content, so, so very much content, on so many different delivery platforms. That’s one that I know of immediately; what are some of the other ones? BS: Oh, yeah, you have just the sheer amount of content, the number of deliverables, and the duplication of content across all of them. AP: That is really the huge one, and I know there are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning and development space have it the worst in that regard—the worst. BS: Didn’t they applaud you when you showed up at a conference with a banner that said “End copy/paste”? AP: Pretty much, it’s true.
That very succinct message raised a lot of eyebrows, because people in learning and development are unfortunately in the position of having to do a lot of copying and pasting. And part of the reason for that copying and pasting is, a lot of times, the different platforms that we’ve mentioned, and also different audiences: I need to create this version for this region, or for this particular type of student at this location. So they’re copying and pasting over and over again to create all these variants for different audiences, which becomes unmanageable very quickly. BS: Yeah, copying, pasting, and then reworking. And then, of course, when they update it, they have to copy, paste, and rework again in all the other places it belongs, and then they have to handle it in however many languages they’re delivering the training in. AP: So now, everything is just blown up. I mean, how many layers of crap, and I’m just going to say it, do these people have to put up with? And there are many, many, many. BS: Worst parfait ever. AP: Yeah, no, that is not a parfait I want to share, I agree with you on that. So let’s talk about the differences between, say, the techcomm world and the learning and development world and their expectations for content. Let’s talk about that, too, because it is a different focus, and we have to address that. BS: So techcomm really is about efficiency and production: being able to amass quite a large body of content and put it out there as quickly as possible, or as efficiently as possible. Learning content kind of flips that on its head; it wants to take quality content and build a quality experience around it, because it’s focused on enabling people to learn something directly. AP: And techcomm people, we’re not saying you’re putting out stuff that is wrong or half-assed. That is not what we mean, I want to be real clear here. What we mean is, there is a tendency to focus on efficiency gains, and getting that help set, getting that PDF, getting that wiki, whatever thing it is that you’re producing, getting that stood up as quickly as possible. Whereas on the learning side, speed is not usually the thing that you’re trying to use to sell the idea of structured content. I don’t think that’s going to win a lot of converts in the learning space. I do think, however, you can make the argument that if you create this single source of truth so you can reuse content for different audiences, different locations, different delivery platforms, and you’re using the same consistent information across all of that, you are going to provide better learning outcomes, because everybody’s getting the same information. Regardless of what audience they are in or what platform they’re learning on, whether it’s live instructor-led training, something online, whatever else, they’re still getting the same correct information. Whereas if you were copying and pasting all that, you might’ve forgotten to update it in one place as a content creator, and then someone, a student, a learner, ends up getting the wrong information, and that’s when you’re not in the optimal learning experience situation. BS: Right, and it’s not to say that every single deliverable gets the exact same content, but they get a slice from the same shared centralized repository of content so that they’re not rewriting things over and over and over again.
And they’re still able to do a lot of high-quality animations, build their interactives, put together their slide presentations, everything like that, but use the content that’s stored centrally rather than having to copy and paste it again and again and again. AP: Yeah, and let’s talk about, really, the primary goals for moving to structured content for learning and development folks. We’ve already talked about reuse quite a bit, that’s a big one. Write it one time, use it everywhere. And that also leads to profiling: creating content for different audiences. BS: Right, I mean, these goals really are no different than what you see in techcomm, and what techcomm has been doing for the past 15, 20, 25 years. It is that reuse, that smart reuse, so write it once, use it everywhere, no copy/paste. Having those profiling attributes and capabilities built in so that you can produce those variants for beginner learners versus expert learners, versus people in different regional areas where the procedure might be a little bit different, producing instructor guides as well as learner guides. All of these different ways of mixing and matching, but using the same content set to do that. AP: Yeah, it’s like one of our clients said, and I have to thank them forever for bringing this up. They were bogged down in a world of continuous copying and pasting over and over and over again, and maintaining multiple versions of what should’ve been the same content, and they said, quote, “We want to get off the hamster wheel.” And that is so true and so fitting, and we probably owe them royalties for saying this over and over again, because it’s such a good phrase. But it really did capture, I think, a big frustration that a lot of people in the learning and development space have creating content, because they do have to maintain so many versions of content. BS: And those versions likely are stored in a decentralized manner, so they could be on multiple different servers, they could be on multiple different laptops or PCs, they could be on thumb drives in some random drawer that are updated maybe once every two, three years. So being able to pull everything together into a central repository and structure it so that it can be intelligently reused and remixed, there are so many benefits to that. AP: Yeah, and in regard to the remixing, the bottom line is, you want the ability to publish to all your different platforms. I believe the term people like to use is omnichannel publishing, so you can do push-button publishing to basically any delivery need that you have, whether it’s an instructor versus student guide for training you’re holding live, e-learning, even scripts for video. Even when you’re dealing with a lot of multimedia content, there are still text underpinnings of that content. With audio and video, there are still probably bits and pieces that can come from your single source of content, because at the core of it, it’s text-based, even if the delivery of it is video or audio. BS: Now, we’ve had structured content for a good couple decades, at least- AP: At least, yeah. BS: … but there really is a reason why the learning world hasn’t latched onto it completely, and it really comes down to the different types of content that they need to produce versus what a techcomm group would traditionally do. So right off the bat, there are all the different tests, quizzes, and so forth, all the assessments that are built into a learning curriculum.
There was never really anything built to handle those in traditional structured authoring platforms and schemas. AP: And there are solutions now that will let you handle assessments and different types of questions, and things like that. BS: But the whole approach to producing learning content, it’s quite similar to techcomm and to other classic content development, but it’s also quite unique in its own right, and we do have to make sure that all of those different needs, whether it be the assessments, any interactives that need to be developed, making sure that you tie in a complete learning plan, and perhaps even business logic in your content, making sure all that can be baked in intelligently so that we’re able to produce the things that we need to produce for trainers. AP: Yeah, and now, especially, you have to be able to create content that integrates easily with the learning management system, which has its own workflows. It’s got tracking, it tracks progress, it scores quizzes, it keeps track of what classes you’ve taken, prerequisites, all of that stuff. That is a whole delivery ecosystem, and structured content can help you communicate with an LMS and create content that is LMS-friendly by baking in a lot of the things that you just talked about. BS: And the content really does boil down to a more granular and targeted presentation to the audience, rather than techcomm, which is more of a soup-to-nuts, everything-and-the-kitchen-sink approach. AP: Yeah, and then there’s also the whole live delivery aspect; that is not something that’s really part of techcomm at all. BS: I wouldn’t want someone there reading a manual to me. AP: No, nor would I. Well, it might be a good way to treat insomnia, but that’s not what we’re here for. But you do have to consider, the assessments are a big difference from a lot of other content that is a good fit for the structured world, and then the possibility of live instruction, that’s also another big difference. Still, there are structured content solutions that can help you with both of those very distinct learning and development content situations. So I think it’s fair to say, based on talking to a lot of people at conferences focused on learning, and a lot of our clients, that the traditional way of creating learning and development content is not scalable. The copy-and-paste angle in particular is just not sustainable in any way, shape, or form. BS: No, you have only so many hours in a day, so if you need to start producing more, you really need to start adding more people. And if you add more people, then you have the likelihood that more things could go wrong with the content, or the content could get- AP: Will go wrong. BS: … could get out of sync with itself. AP: Yeah. Well, let’s talk also a little more about some of the challenges. We’ve talked about the interactivity, how that and the assessments are something particular that you have to solve for in the learning space. Let’s talk about the P word, PowerPoint. BS: PowerPoint. Yeah, being able to pull focused slides together, which really would likely have a very small subset of a course’s content built within them, unless you’re producing a wall of text per PowerPoint. Those are quite unique to the space, so you don’t see much in techcomm where things are delivered via PowerPoint, or you hopefully don’t.
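(Editor’s note: as an illustration of the assessment support Alan mentions above, here is a minimal sketch of a true/false question in the DITA Learning and Training specialization, which comes up again later in the episode; the question wording and id are hypothetical.)

<lcInteraction>
  <lcTrueFalse id="q1">
    <lcQuestion>Structured content lets you reuse one source across many deliverables.</lcQuestion>
    <lcAnswerOptionGroup>
      <lcAnswerOption>
        <lcAnswerContent>True</lcAnswerContent>
        <lcCorrectResponse/>
      </lcAnswerOption>
      <lcAnswerOption>
        <lcAnswerContent>False</lcAnswerContent>
      </lcAnswerOption>
    </lcAnswerOptionGroup>
    <lcFeedbackCorrect>Right: write once, publish everywhere.</lcFeedbackCorrect>
  </lcTrueFalse>
</lcInteraction>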
AP: No, PowerPoint is great because it’s wide open and you can do a lot of things with it; PowerPoint is bad because it’s wide open and you can do a lot of things with it. That’s the problem with PowerPoint. BS: And a template’s only as useful as those who follow it. AP: Exactly. And now, you mentioned templates. Structured content is a way to templatize and standardize your content, and I’m sure that can rub people the wrong way: my slides need to be special, this, that, and the other. There’s a continuum here of, I want to do whatever I want to the point of sloppy, or I can do things within this particular set of confines so there is consistency. And again, I think it’s fair to say, providing consistency for different learners with slide decks is going to lead to some better outcomes instead of a free-for-all, I-can-do-whatever-I-want scenario. And I’m sure there are people out there who are going to kick and scream and disagree with me, but that’s a fight we’re just going to have to have, folks. BS: Well, no, it provides a consistent experience throughout, rather than having some jarring differences from lesson to lesson or course to course. AP: Yeah, yeah, and I think there’s one thing, too, in addition to the PowerPoint angle: in the learning and development space, there is this focus on, we need to create this one-off, that one-off, and this other one-off. There’s still standardization you can do among your different delivery targets that will streamline things, create consistency, and therefore a better learning experience. I do believe that’s true, even though some people, at first in particular, can find it very confining. BS: Oh, right. I mean, it just takes the development of the end-user experience, I don’t want to say completely out of the learning content developer’s hands, but it kind of frees them up to better frame the content for the lesson rather than worrying about the fit and finish of the product. AP: Yeah, and let’s focus now on some of the options out there in the structured content world for learning and development content. There are several out there; let’s talk about what’s on the table. BS: It comes down to two different types of systems. One would be a learning component management system, so a system that’s built more for learning content specifically. AP: Yeah, I would say it’s purpose-built, I agree, yeah. BS: Yeah, and it functions the same way as a lot of, I guess what we would call the traditional techcomm component content management systems do, where you’re able to develop in fragmented pieces, in a structured way, in a centralized manner, and intelligently reuse and remix all of these different components to produce some type of deliverable. AP: Right, so you can therefore, within this system, set up things for different locations, different audiences, whatever else. And if you move into an LCMS or one of the other solutions we’re talking about, you are also going to make localization and translation much more efficient, and you’ll get stuff turned around in other languages for other locales much more quickly. So we’ve got the LCMSs, which are more proprietary, and then, on the flip side of that, let’s talk about DITA.
BS: So DITA does provide you with a decent starting point for developing your content, and we’ve helped several clients do this already. But on the flip side, where the LCMS is targeted at developing learning content, a lot of the tools for DITA aren’t, so it requires a lot of customization in the tool chain, as well as in the content model, to get things up and running. However, DITA does give you an easier point of integration with any work that is being produced by your techcomm peers. AP: Yeah, I do think it’s fair to say it’s a little more extensible, but the mere fact that it is an open standard and is extensible means that it may take some configuring to make it exactly what you need it to be. And like Bill was saying, DITA has some specialized structures that are a very good fit, built specifically for learning and training, and you can further customize those specializations to match what you need. I will say, I think some of the assessment structures are not as robust as they should be, and we’ve had to customize those for some clients. So that’s another thing that you would have to kind of think about when you’re trying to make this decision: do I need to go with an LCMS, or do I want to go with DITA and a component content management system, and understand that I’m going to have to make some adjustments to make it more learning and development friendly? BS: No matter which way you slice it, though, moving to any kind of a structured repository in a structured system really starts to open things up from a back-end production point of view, while not necessarily forgoing a lot of the experience-driven design that goes into producing those different learning deliverables. It is a way to kind of become more efficient, and as Alan mentioned, avoid the copy and paste, which can be a nightmare to maintain over time. AP: And at the same time, you do not have to throw out your standards for the quality of the content and the quality of the learning experience. You want structure to support, bolster, and maintain those things, and don’t look at it as something that is going to degrade them, because when used correctly, it can really help you maintain that level of quality and consistency that you really need for an outstanding learning experience. And with that, Bill, I think we can wrap up. Thank you very much. BS: Thank you. Outro with ambient background music Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.
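To make the assessment discussion concrete, here is a minimal sketch of a single-select question using element names from the DITA Learning and Training specialization. This fragment is illustrative only and is not from the episode: the IDs, question wording, and feedback text are invented, and a real topic would also carry a DOCTYPE or schema reference and would typically sit alongside learning plan and learning content topics.

<learningAssessment id="structured-learning-quiz">
  <title>Structured learning content: quick check</title>
  <learningAssessmentbody>
    <lcInteraction>
      <!-- Single-select question: exactly one answer option is marked correct -->
      <lcSingleSelect id="q1">
        <lcQuestion>Which type of system is purpose-built for managing learning content?</lcQuestion>
        <lcAnswerOptionGroup>
          <lcAnswerOption>
            <lcAnswerContent>A learning content management system (LCMS)</lcAnswerContent>
            <!-- The empty lcCorrectResponse element flags the correct answer in markup -->
            <lcCorrectResponse/>
            <lcFeedback>Correct. An LCMS is built for learning content specifically.</lcFeedback>
          </lcAnswerOption>
          <lcAnswerOption>
            <lcAnswerContent>A folder of copied-and-pasted slide decks</lcAnswerContent>
            <lcFeedback>Not quite. Copy and paste does not scale.</lcFeedback>
          </lcAnswerOption>
        </lcAnswerOptionGroup>
      </lcSingleSelect>
    </lcInteraction>
  </learningAssessmentbody>
</learningAssessment>

Because the correct answer is tagged in the markup rather than implied by formatting, the same source question can be rendered as an LMS quiz, a printed workbook exercise, or an instructor guide answer key, which is the single-sourcing benefit discussed above.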
The post Transform L&D experiences at scale with structured learning content appeared first on Scriptorium.
 
In episode 179 of the Content Strategy Experts podcast, Sarah O’Keefe and Alan Pringle share the inside scoop on how to write an effective request for proposal (RFP) for content operations. They’ll discuss how RFPs are constructed and evaluated, strategies for aligning your proposal with organizational goals, how to get buy-in from procurement and legal teams, and more. When it comes time to write the RFP, rely on your procurement team, your legal team, and so on. They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with their processes to get the better result. — Alan Pringle Related links: Survive the descent: planning your content ops exit strategy (podcast) The business case for content operations (white paper) Content accounting: Calculating value of content in the enterprise (white paper) Building the business case for content operations (webinar) LinkedIn: Sarah O’Keefe Alan Pringle Transcript: Disclaimer: This is a machine-generated transcript with edits. Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about writing effective RFPs. A request for proposal (RFP) approach is common for enterprise software purchases, such as a component content management system, which can be expensive and perhaps risky. Hey everybody, I am Alan Pringle. Sarah O’Keefe: And I’m Sarah O’Keefe, hi. AP: So Sarah, we don’t sell software at Scriptorium, so why are we talking about buying software? SO: Well, we’re talking about you, the client, buying software, which is not always, but in many cases, the prerequisite before we get involved on the services side to configure and integrate and stand up the system that you have just purchased to get you up and running. And so, because many of our customers, many, most, nearly all of our customers are very, very large, many of those organizations do have processes in place for enterprise software purchases that typically either strongly recommend or require an RFP, a request for proposal. AP: Which, let’s be very candid here, nobody likes them. Nobody. SO: No, they’re horrible. AP: Vendors don’t like them. People who have to put them together don’t like them, but they’re a necessary evil. But there are things you can do to make that necessary evil work for you. And that’s what we want to talk about today. AP: So the first thing you need to do is do some homework. And part of that homework, I think, is talking with a bunch of stakeholders for this project or this purchase and teasing out requirements. So let’s start with that. And this is even before you get to the RFP itself. There’s some stuff you need to do in the background. And let’s talk about that a little bit right now. SO: Right, so I think, you know, what you’re looking to get to before you go to RFP is a short list of viable candidates, probably in the two to three range. I would prefer two, your procurement people probably prefer three to four. So, okay, two to three. And in order to get to that list of, these look like viable candidates, as Alan’s saying, you have to do some homework. Step one: what are the hard requirements that IT or your sort of IT structure is going to impose? Does the software have to be on premises or does it have to be software as a service?
Nearly always these days, organizations are hell-bent on one or the other, and it is not negotiable. Maybe you have a particular type of single sign-on and you have some requirements around that. Maybe you have a particular regulatory environment that requires a particular kind of software support. You can use those kinds of constraints to easily, relatively easily, rule out some of the systems that simply are not a fit for what your operating environment needs to look like. AP: And by doing that, you are going to reduce the amount of work in the RFP itself by doing this now. So you’re going to streamline things because you’ve already figured out, this candidate is not a good fit. So why bother them, and why make work for yourselves, having to correspond with a vendor that ends up not being a good fit? SO: Right, and if we’re involved in a process like this, which we typically are on the client side, so we engage with our customers to help them figure out how to organize an RFP process, right, we’re going to be strongly encouraging you to narrow down the candidate list to something manageable, because the process of evaluating the candidates is actually quite time consuming on the client side. And additionally, it’s quite time consuming for the candidates, the candidate software companies, to write RFP responses. So if you know for a fact that they’re not a viable candidate, you know, just do everybody a favor and leave them out. It’s not fair to make them do the work. AP: No, it’s not. And we’ve seen this happen before, where an organization will keep a vendor in the process kind of as a straw man to strike down fairly quickly. It would be kinder and maybe more efficient to do that before you even get to the RFP response process, perhaps. SO: Yeah, and of course, again, the level of control that you have over this process may vary depending on where you work and what the procurement RFP process looks like. There are also some differences between public and private sector and some other things like that. But broadly, before you go to RFP, you want to get down to a couple of viable candidates, and that’s who should get your request for proposal. AP: Yeah, and when it does come time to write that RFP, do rely on your procurement team, your legal team. They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with that process to get the better result. And I think one of the key parts of this communication between you and your procurement team is about use case scenarios. So let’s talk about those a little bit because they’re fundamental here. SO: Yeah, so your legal team, your procurement team is going to write a document that gives you all the guardrails around what the requirements are, and you have to be this kind of company, and our contract needs to look a certain way, and various things like that. We’re going to set all of that aside because, A, we don’t have that expertise and, B, you almost certainly as a content person don’t have any control over that. You’re just going to go along with what they are going to give you as the rules of the road in doing RFPs. However, somewhere inside that RFP it says, these are the criteria upon which we will evaluate the software that we are talking about here.
And I think a lot of our examples here are focused on component content management systems, but this could apply to other systems, whether it’s translation management, terminology, metadata, you know, all these things, all these content-related systems that we’re focused on. So, somewhere inside the RFP, it says, we need this translation management system to manage all of these languages, or we need this component content management system to work in these certain ways. And your goal as the content professional is to write scenarios that reflect your real-world requirements that are unique to your organization. So if you are in heavy industry, then almost certainly you have some concerns around parts, about referencing parts and part IDs, and maybe there’s a parts database somewhere, and maybe there are 3D images, and you have some concerns around how to put all of that into your content. That is a use case that is unique to you, versus a software vendor who is going to have some sort of, we have 80 different variants of this one piece of software depending on which pieces and parts you license, and then that’s gonna change the screenshots and all sorts of things. So what you wanna do is write a small number of use cases. We’re talking about maybe a dozen. And those dozen use cases should explain, you know, as a user inside the system, I need to do these kinds of things. You might give them some sample content and say, here is a typical procedure, and we have some weird requirements in our procedures, and this is what they are. Show us how that will work in your system. Show us how authoring works. Show us how I would inject a part number and link it over to the parts database. Show us, you know, those kinds of things. So, the use case scenarios typically should not be, “I need the ability to author in XML,” right? AP: Or, “I need the ability to have file versioning,” things that every CCMS on the planet does, basically. SO: Right, somewhere there’s a really annoying and really long spreadsheet that has all those things in it, fine. But ultimately, that’s table stakes, right? They should not get to the short list unless you’ve already had this conversation about file versioning and the right class of system. The question now becomes, how do you provide a template for authors, and what does it look like for authors to start from a template and do the authoring that they need to do? Is that a good match for how your authors need to or want to or like to work? So the key here from my point of view is, don’t worry too much about the legalese and the process around the RFP, but worry a whole bunch about these use case scenarios and how you are going to evaluate all the different tools that you’re assessing against the use case scenarios. AP: Be sure you communicate those use case scenarios to your procurement team in a way they understand so they have a better handle on what you need, because the more everybody is on the same page as far as those use cases go, the clearer it’s going to be to communicate those things to the candidate vendors when they do get their hands on the RFP. SO: And I think as we’re going in or talking about going into a piece of software, there probably should already be some consideration around exit strategy, which, Alan, you’ve talked about a whole bunch. What does it mean to have an exit strategy and to evaluate that in the inbound RFP process?
AP: It is profoundly disturbing to have to think about leaving a tool before you’ve even bought it, but it does behoove you to do that, because you need a clear understanding of how you are going to transition out of a tool before you buy it. So when that happens, when you come to a point where you have to do it, you have an understanding about how you can technically exit that tool. For example, how can you export your source files for your content? What happens when you do that? In what formats? These are part of the use cases that you’re talking about, perhaps, here too. So it really is so weird to have to think about something that’s probably years down the road, but it is to your advantage to do that at this point in the game. SO: Yeah, I mean, what’s the risk if something goes sideways or if your requirements change? This doesn’t have to be sideways. So you are in company A and you buy tool A, which is a perfect fit for what you’re doing. Company A merges with company B. Company B has a different tool, and B is bigger than A. So B wins, and you exit tool A as company A, and you need to move all your content into tool B. Well, that’s a case where you made all the right decisions in terms of buying the software. You just didn’t account for a change in circumstances, as in B swooped in and bought you. So what does it look like to exit out of tool A? AP: Yeah, it doesn’t necessarily have to be, the tool no longer works for us. It could be what you describe. There can be external factors that drive the need to exit that have nothing to do with bad fit or failure on anybody’s part. SO: So we have these use case scenarios, and we’ve thought about exit, though this is entrance. AP: Or even before entrance, you haven’t even entered yet. SO: And so now you’re going to have a demo, right? The software vendor is going to come in and they’re going to show you all your use case scenarios. Well, we hope they’re going to show you your use case scenarios. Sometimes they wander in and they show you a canned demo and they don’t address your use cases. That tells you that they are not paying attention. And that is something you should probably take into account as you do your evaluation. AP: Yeah, and on a similar note, don’t get sucked in by flashy things, because that flash may blind you and very nicely disguise the fact that they can’t quite match one of your use cases. Look at this sparkly thing over here! Don’t fall for that. Don’t do it. Yeah. SO: Sparkles. So, okay, so we have our use cases, and they, the software vendor, is going to bring some sort of a demo person, and they are going to demo your use cases, and hopefully they’re going to do it well. So you sort of check those boxes and you say, okay, great, it works. I think the next step after that is not to buy the tool. The next step after that is to ask for a sandbox so that your users can trial it themselves. There is a big, big difference between a sales engineer or a sales support person who has done hundreds, if not thousands, of demos going click, click, click, click, click, look at how awesome this is, and your brand new user, who has never used a system like this, maybe, trying to do it themselves. So user acceptance testing: get them into a not-for-production sandbox, let them try out some stuff, let them try out all of your use cases that you’ve specified, right? AP: It’s try before you buy is what we’re talking about here. Yep. SO: Mm-hmm.
Yeah, I’ve just made a whole bunch of not-friends among the software vendors, because of course setting up sandboxes is kind of a pain. AP: It’s not trivial. SO: Yeah, but you’re talking to just one or two candidates, right? So it is not unreasonable. It is completely unreasonable if you just did a, you know, spray-this-thing-far-and-wide and asked a dozen software vendors for input. That is not okay from my perspective. And when we’re involved in these things, we try very, very hard to get the candidate list down to, again, two or three at most, because almost certainly you have requirements in there somewhere that will make one or another of the software tools a better fit for you. So we should be able to get it down to the reasonable prospect list. AP: And I think, too, this goes back to efficiency. Having fewer people or fewer companies in this means you’re gonna have to spend less time per candidate system, because you’ve already narrowed it down to organizations that are gonna be a better fit for you. So it’s gonna be more efficient for them, because they’re probably not having to do as much show and tell, because you’ve narrowed things down very specifically in your use cases. Also, for you as the tool buyer and your procurement team, you’re going to have less to do because you’re not having to talk to four, six candidates, which you should not be doing for an RFP, in my opinion. I know some people in procurement will probably disagree with that, though. SO: Well, we’re just going to make everybody mad today. And while I’m on the topic of not making friends and not influencing people, I wanted to mention something that probably many of you as listeners are familiar with, which is something called the Enterprise Architecture Board. If you work in a company of a certain size, you probably have an EAB. And the EAB is kind of like the homeowners association of your company, right? They are responsible for standards and making sure that you occasionally mow the lawn and whatever else, whatever other ridiculous rules the homeowners association has set. But EABs, Enterprise Architecture Boards, in a company context are responsible for software purchases, software architecture, and looking at, what kinds of systems are we bringing into this organization, and usually, how can we minimize that? How can we maintain a reasonable level of consistency instead of bringing in specialty solutions all over the place? Now, a CCMS, a component content management system, is pretty much the definition of a specialty system. AP: It’s niche. Yeah. SO: Yep, and EABs in general will take one look at it and say something very much like, “CCMS, no, we have a CMS. We have a content management system. We have SharePoint, just use that. We have Sitecore, just use that. We have fill-in-the-blank, just use that.” And your job, if you have the misfortune to have to address an EAB, is that you need to explain why it is that the approved existing solutions within the company architecture do not meet the requirements of the thing that you are trying to do, and that it is worth the effort and the risk and the complexity of bringing in another solution beyond the baseline CMS that they’ve already approved to solve the issues that you’ve identified for your content. That first part is not hard; the “and” part is the hard part, and that’s where they’re going to talk about TCO, total cost of ownership. This is difficult. I’ve spent a lot of quality time with EABs, and literally their job is to say no.
I mean, that is just flat out their job. Their job is to streamline and minimize and have as few solutions as possible. So if you have to deal with this kind of situation, you’re going to have some real challenges internally getting this thing sold. AP: Yeah, and while we’re making friends and influencing people with our various comments on this process today, one final thing I want to say before we wrap up is that common courtesy goes a really long way in this process. When you have wrapped things up, you have made your selection. Be sure you also communicate that to the vendors you did not choose. SO: Yeah. AP: Too many times in RFP processes, there’s not that level of communication with the people who did not win. And it’s just common courtesy, let them know, no, we chose someone else. And if you’re feeling super polite, you might even tell them why: this use case you didn’t quite hit, this is why we went with this other organization, if you choose to. So be nice and be courteous, because I realize this is more of a professional business situation, but it still doesn’t hurt to tell someone exactly why you did what you did. SO: Yeah, and I know those of you more on the government side of things, nonprofit, typically do have a requirement to notify on RFPs and even give reasons and all the rest of it. But on the corporate side, there’s typically not any sort of requirement to let people know, as Alan said. You know, people put a lot of work into these RFPs, and a lot of pain. AP: Yeah. SO: And one last, last thing, beyond notifying people: I want to talk about RFP timing. So we’re rolling into the end of 2024 here. I fully expect that there will be RFPs that will come out on roughly December 15th, which will be due on something like January 1st. So in other words, “Hi vendors, please feel free to spend your holiday time filling out our RFP so that we can, you know, go into the new year with shiny RFP submissions.” AP: RUDE! SO: That is not polite. Don’t do that. It is extremely rude. And it signals a level of disrespect that, from the vendor side of the process, makes them perhaps less inclined to bend on some other things. So allow a reasonable amount of time for the scope of work that you’re asking for. And holidays don’t count. AP: Yeah, exactly. To go back, I think we can kind of wrap this up and go back to what we were talking about. All of that legwork that you do upfront for this RFP process, your vendors, believe it or not, would generally appreciate it, because it shows you’ve done the homework, you have thought about this, and you’re not just wildly flinging out asks with no money, no stakes behind those asks. And they will probably be much more willing to work with you and go that extra mile when you have done that homework. Is there anybody else that we need to tick off before we wrap up? SO: I think we covered our list. So I’ll be interested to see what people think of this one. So let us know, maybe politely, but let us know. AP: And I’ll wrap up before any violence occurs. So thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post Creating content ops RFPs: Strategies for success appeared first on Scriptorium.
 
In episode 178 of the Content Strategy Experts podcast, Sarah O’Keefe and Christine Cuellar perform a pulse check on the state of AI as of December 2024. They discuss unresolved complex content problems and share key considerations for entering 2025 and beyond. The truth that we’re finding our way towards appears to be that you can use AI as a tool and it is very, very good at patterns and synthesis and condensing content. And it is very, very bad at creating useful, accurate, net new content. That appears to be the bottom line as we exit 2024. — Sarah O’Keefe Related links: Pulse check on AI: May, 2024 (podcast) AI in the content lifecycle (white paper) The future of AI: structured content is key (webinar) Savor the season with Scriptorium: Our favorite holiday recipes LinkedIn: Sarah O’Keefe Christine Cuellar Transcript: Disclaimer: This is a machine-generated transcript with edits. Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, it’s time for another pulse check on AI. Our last check-in was in May, which in AI terms is ancient history, so today, Sarah O’Keefe and I are gonna be talking about what’s changed and how it can affect your content operations. Sarah, welcome to the show. Sarah O’Keefe: Hey Christine, thanks. CC: Yeah. So, as we’re currently recording this, 2024 is winding down. People are preparing for 2025. Throughout this year, we went to a lot of different conferences and events. Of course, everybody’s talking about AI. So Sarah, based on the events that you just recently got back from, you finally get to be in your own house, what are your thoughts about what’s going on with AI in the industry right now? SO: It’s still a huge topic of conversation. Lots of people are talking about AI; a huge percentage of presentations, you know, had AI in the title or referenced it or talked about it. With that said, it seems like we’re seeing a little more sort of real world, hey, here’s some things we tried, here’s what’s working, here’s what’s not working. CC: Mm-hmm. SO: And I’ll also say that we’re starting to see a really big split between AI in regulatory environments, which would include the entire EU plus certain kinds of industries, and the sort of wild, wild west of we can do anything. CC: Yeah. So do you feel like, you know, when AI first came onto the scene, there was mostly, let’s just all adopt this right now, let’s go for it full steam ahead, especially marketers, and as a marketer I can say that because we’re definitely gung-ho about stuff like that. It sounds like the perspective has shifted to being more balanced overall. Is that what you would say? SO: Yeah, I mean, that’s the typical technology adoption curve, right? You know, you have your peak of inflated expectations, and then you have the, I think it’s the valley. It’s not the valley of despair, but it’s something like that. But you know, you sort of go from, this can do anything, this thing is so cool, go, go, go, go, go, to a more realistic, okay, what can it actually do? And, you know, this is true for AI or anything else: What can it do? What can’t it do? What does it do well? CC: Mm. SO: Where do we need to put some guardrails around it? What are some surprises in terms of things that are and are not working? CC: Yeah.
And at some of the conferences we were at this year, our team had some things to say about AI as well. So we will link some of the recap blog posts we have in the show notes. Sarah, what are some of the things AI can’t do right now? What are some of the big concerns about AI that are still unanswered, unresolved? SO: So in the big picture, as we’re starting to see people roll out AI-based things in the real world, whether it’s tool sets or content ops or anything else, we’re starting to see some really interesting developments and some really interesting assessments. Number one is that when you look at those little AI snippets that you get now when you do a search and it returns a bunch of search results, well, actually it returns a page of ads. CC: Yes. SO: And then some real results under the ads. And then above that, it returns an AI overview snippet. So those are surprisingly bad. You do a search on something that you know a little bit of something about and see what you get. And you will see content in there that is just flat wrong. I’m not saying it’s not the best summary. I’m saying it is factually incorrect, right? CC: Yeah, I hate them right now. SO: So those are surprisingly bad. And talking about search for a minute, which ties into your question about marketing, there are some real problems now with SEO, with search engine optimization, because I’m optimizing my content to be included in an AI overview that is, A, wrong, and B, doesn’t actually give me credit. Pre-AI, those snippets that showed up would say, I sourced it from over here. CC: Mm-hmm. SO: And in many cases now, the AI overview is just like the sort of summary paragraph with no particular, there’s no citation. It doesn’t say where it came from. So what’s in it for me as a content creator? Why am I creating content that’s going to get taken over by the AI overview and then not lead to people going to my webpage, right? How’s that helping me? CC: Yeah. Yeah. SO: So there are some real issues there. There’s a move in the direction of thinking about levels of information. So thinking about very superficial information: How much does a cup of flour weigh? That type of thing. That’s just a fact and you can get it pretty much anywhere, we hope. And then there’s deeper information. Why is it better to weigh flour than to measure it by volume, if you’re a baker? CC: Yeah. SO: And what does it look like to use weights? And are there differences among different kinds of flours? And what are some of the things I should consider when I’m going in that direction? So one of those, you know, flour facts, a cup of flour weighs 120, sorry, a cup of all-purpose flour weighs 120 grams, is a useful fact. And I don’t know if I really care if people pursue that further or come to my website for more about flour. The deeper information, the more detailed discussion of, you know, whole wheat versus all-purpose versus European flours versus American flours and all these other kinds of things, that requires more in-depth information, and that is not so subject to being condensed into an AI summary. So there’s that distinction between, you know, quick and dirty information versus deeper information, information that goes into a topic. CC: Mm-hmm. SO: We have a huge problem with disinformation and misinformation, with information that is just flat out not correct, because with the way AI tools work, it is trivially easy to generate content at scale. Tons and tons and tons and tons and tons of content.
And because it’s trivially easy, CC: Mm-hmm. SO: That means it’s also trivially easy for me to generate, for example, a couple thousand fake reviews for my new product or a couple thousand websites for my fake products. We can fractionalize down the generation of content. CC: Yeah. SO: And the, you know, interesting part of this is that it implies that you could potentially, you know, we talk about doing A/B testing in marketing. You could do A/B/C/D/E/F/G testing pretty easily, because you can generate lots and lots of variants and kind of throw a bunch of stuff against the wall and see what works. But the bad side of this is that you can generate fake news, fake information, fake content that is going to be highly, highly problematic from a content consumer trust point of view. And so that, I think, is the third piece that we’re looking at now that is going to be critical going forward. And that is information trust, content reputation or the reputation of content creators, and credibility. CC: Mm-hmm. SO: So for those of you listening to this podcast, how do you know it’s really us? Do you know these are live humans actually recording this podcast? Versus, you know, there’s now the ability to generate synthetic audio, and you can create a perfectly plausible podcast, which is really hard to say, unless probably you’re AI, and then it can probably say it perfectly. But with our perfectly plausible podcasts, you know, how do you know that what you’re receiving in terms of content, digital content in particular, is actually trustworthy? And so I think ultimately there’s going to be some, need to be some tooling around verification, around authenticity, around, you know, this was not edited. You know, in the same way that you want to be able to verify that a photo, for example, is an accurate record of what happened when that photo was taken. CC: Yeah. SO: And if I went in and photoshopped it and cleaned it up, then that’s something that should be acknowledged. By the way, for the record, we do record these things and we do edit them. We try to stay on the right side of just editing out dumb mistakes and not editing it in a misleading way. CC: Yeah, ums and ahs and yeah. SO: So it’s not like we record the whole thing from soup to nuts and never, you know, never break in and never edit things out, because believe me, I’ve said some stuff that needed to be taken away. If you ever get the raw files, they are full of, I didn’t mean to say that, you might want to take that out. CC: Me too, so many times. Let me start over, that’s me a lot all the time. SO: Yeah, sorry. Starting over. OK, but the point is that when we put out a podcast, we are saying, this is our opinion, this is our content, and we’re gonna stand behind it. Whereas if it’s synthetic or AI-generated, AI generated by these non-humans, you can do these weird, let’s make a podcast out of a blog post, well, okay, but what’s the value of that and why would I trust that content? CC: Yeah. SO: So that, I think, is going to be the big question for the next couple of years: what does it look like to be a content creator in an AI universe, and to have the ability, or sorry, as the content consumer, to have the ability to validate what you’re listening to or reading or seeing. CC: Yeah. And a point that you had brought up in, I believe it was the white paper that you authored back in 2023.
One of the points in there was that, because of this trust and credibility issue, people are going to have to start relying on companies and brands that they’re already familiar with for the information that they’re looking for, rather than a search from scratch, because, you know, search is so messed up right now. And that is something I’ve seen personally, like myself, I do it a lot more. I’ve seen that with friends and other contacts and stuff like that. That’s really what people are doing: they’re going to, you know, the source, even for recipes. Recently, I was looking for a recipe, and instead of just Googling it like I used to, because I’m so sick of the summarized AI search, I went to Allrecipes, you know, a place where I knew that I liked the recipes, or I think Sally’s Baking Addiction or something like that. There are a lot of different places like that where now I’ll just go there instead of, you know, a search from scratch. That’s… I don’t know how we’re gonna fix that problem. Yeah, trust and credibility, that’s gonna be a huge one. SO: It’s a really good example, though, because if you searched for a particular recipe, even say two years ago, you would get a certain set of results and then you would say, I’ve heard of that website and I’ll go there. Now you search on a recipe, I’m getting 20, 30, or 40 websites that I’ve never heard of that all seem to have posted exactly the same recipe. CC: Mm-hmm. SO: I, you know, do I trust them? Do I trust them not to be AI-generated? Do I trust them to remember to not, you know, recommend that I put gravel in my recipes? You know, maybe not. And so I’m doing the same thing you are, which is, you know, reverting to trusted sources, trusted brands that I know that have a reputation for producing good recipes. Now, the flip side of this is that content is disappearing. CC: Hmm. SO: So, I have an infamous triple chocolate cookie recipe, which is really, if you’re looking for a chocolate bar in the form factor of a cookie, that is what it is. It’s just stupid amounts of chocolate. CC: Mm-hmm. yes, that sounds amazing. SO: They’re delicious. And I think we’re putting them in our holiday post, which may or may not have gone live already. So keep an eye out for that. But here’s the thing. I have the recipe because I got it out of Food & Wine about 20 years ago, and I have a paper cutout of it that I hand-wrote Food & Wine 12/01 on. So it was December of 2001, and so I went to Food & Wine. I went searching for this recipe, knowing that it was originally published by them. I can’t find it. It is not there. CC: Hmm. wow. SO: It is not in their database, or at least it didn’t come up in their database when I searched on the exact name of the recipe. I then searched that exact recipe name, you know, just generally on the Internet, and I found three or four or five different places that had it, but none of them credited where I got it from 20 years ago, which I’m pretty sure is the original, right? Because these are all much more recent sites. So there are digital copies out there floating around, but they are not the original recipe, and they didn’t credit the original publisher. Now, I don’t know exactly where Food & Wine got it, because all I did was cut out the recipe. I didn’t cut out the article, which probably had the context around it. But what I’m now reduced to is that I have a paper copy stashed in my paper recipe book, right? And I took a photo of the paper copy and put it on my phone.
So I have a sort of digital version, but it is literally a photograph of a printout. It is 2024 and we are doing photographs of printouts, but I can’t find the original online. CC: Yes. Yeah. That’s interesting. Why do you think that content has disappeared? Do you think it’s because of the breakdown of the content model, where the AI engine is just eating what it’s already regurgitated a bunch of times? Do you think it’s that? Did the org pull it for some reason, or what do you think is the cause? SO: Well, I mean, my best guess is that their recipe database only goes back so far and they just said, anything more than X years old doesn’t need to be in here. They had some similar recipes. So maybe, well, this one’s been updated, it’s a little more modern, whatever. But it was just, it was really troubling that, even knowing what the source was, I couldn’t find it. CC: Yeah, that is troubling. So how can companies prepare, knowing that this is our context, this is our landscape? What should we do to prepare for 2025 and beyond? Because it’s not just like next year. SO: Beyond, yeah, okay. So first of all, you have to understand your regulatory environment, because that is very different by country or by region: the issues that the people in the EU are looking at, or American companies that sell in the EU, right? CC: Mm-hmm. Yeah. SO: There’s an EU AI Act, and there’s a whole bunch of guidance that goes along with that. So there’s some concerns there. Whereas here in the US specifically, we don’t have a lot of regulation around AI, if any. Mostly we lean on, well, if you put out something that’s incorrect, there’s potentially product liability. If you put out instructions that are wrong and people follow them and they get hurt or worse, then the product owner is probably liable for putting out wrong instructions. That’s kind of where our stuff lands. But as a content consumer, I think you have to do what you’re describing, Christine, and become very, very skeptical about your sources and methods, right? Where’d you get this stuff? And do you trust the source that it came from? CC: Yes. SO: If you are a content creator, then looking at questions around AI, the questions become, how can I employ AI inside my content workflows in a responsible way that achieves the goals that I have and doesn’t get me in big trouble in whatever way? And there’s also the question of, if I’m a content creator and I know that my consumers, my customers, are going to be using AI to consume my content, then how do I optimize for that? How do I prepare for that? So it looks very different if you’re a person writing, creating new content, versus you’re the person deploying a chatbot on your corporate website that’s going to go read through your content corpus, versus the person actually using the chatbot, versus you name it. So. CC: Yeah. SO: And then, you know, we’re talking about AI generally, but of course we have AI tooling and we also have generative AI and we have all sorts of different things going on. So it’s a very, very broad topic, but overall, you know, what’s the problem I’m trying to solve? Can I apply this tool in a useful way? And what are some of the guardrails that I need to employ to keep myself out of trouble? CC: Yeah, in one of our webinars from this year, from 2024, depending on when you’re listening to this podcast, Carrie Hane mentioned something along the lines of, you know, when you’re dealing with AI, it’s such a huge topic.
You need to break it down by what’s the purpose of what you’re trying to do, and then tackle the problem that way. Okay. So to wrap up, Sarah, what are your final thoughts, wishes, and/or recommendations for the world as we enter this new era? Or I guess we’re in it, but as we try to recover. SO: So, the very short version, we’ll try and keep it short. I think when all this AI stuff hit us a year or two ago, business leaders generally were hoping that they could just use AI as a general-purpose solution. Fire all the people, use AI for all the things, cool. CC: Mm-hmm. SO: The truth that we’re grasping towards or finding our way towards appears to be that you can use AI as a tool, and it is very, very good at patterns and synthesis and condensing content. And it is very, very bad at creating useful, accurate, net new content. That appears to be the bottom line as we exit 2024. CC: Yeah. Well, thank you very much for unpacking this with us, because I know that, you know, things are changing so fast. It’s helpful to have people like you who have been in the industry, the content industry specifically, for a really long time, who can help, you know, figure out a way through all this and give some practical ideas. SO: Well, you know, in six months, we’ll just feed this podcast into the AI and tell it to fix it so that it remains accurate. And off we go. CC: Yeah, there we go. And then we’re done. SO: And we’re done. CC: Yeah. Thanks so much for being here today and for talking about this. SO: Yeah, anytime. CC: And thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. The post Pulse check on AI: December, 2024 (podcast) appeared first on Scriptorium.
 
Is it really possible to configure enterprise content—technical, support, learning & training, marketing, and more—to create a seamless experience for your end users? In episode 177 of the Content Strategy ... Read more » The post Do enterprise content operations exist? appeared first on Scriptorium.
 
Are you looking for real-world examples of enterprise content operations in action? Join Sarah O’Keefe and special guest Adam Newton, Senior Director of Globalization, Product Documentation, & Business Process Automation ... Read more » The post Enterprise content operations in action at NetApp (podcast) appeared first on Scriptorium.
 