Lately I’ve been thinking a lot about the problem of fragmented digital learning experiences. The pandemic really highlighted how digital learning journeys are fragmented across multiple tools—an LMS, maybe some tools that plug into the LMS and take the students somewhere else, maybe some digital courseware, maybe a web conferencing tool, and so on.
This fragmentation creates a few serious problems. First, every hop a student (or educator) must make from one platform to another represents a chance to get lost. Where am I supposed to go? Do I need a separate login? What am I supposed to do here? How does this thing work again? And so on. While we live in a world of phone apps where people are used to switching from one to another, those mobile app-switching experiences differ from our digital learning hops in two respects. First, they are relatively well orchestrated. For example, when I launch a web conference from my calendar app, not only does it take me smoothly into the new app with one click, it also maintains a back-link to the app that got me there. Second, the workflows that feel natural are simple and atomic. When I’m hopping once from one app to another and back, I’m fine. But if I’m performing a complex work task that requires me to switch between Zoom, Slack, Google Docs, and maybe a browser, I lose my way easily and often. I spend a non-trivial amount of my day hunting for the correct browser tab in the correct browser window, to the point where, by the time I’ve found what I’m looking for, I sometimes forget why I was looking for it. If this feeling of getting lost among open windows and tabs is familiar to you, too, then you have a visceral sense of the problem in our digital learning experiences. This is the cognitive load we are placing on students who may also be struggling with unfamiliar material, learning new study skills, working in an environment with distractions (like kids and pets), and so on. Again, the pandemic has brought this problem home to all of us.
The impact on individual learners is bad enough. But the fragmentation also prevents us from having a view into what the learners are doing. In the virtual workplace, it’s easy to lose a sense of what your coworkers are doing in a project—and what they need from you—because you keep losing track of where the Google Doc is that they’re working in. Or maybe you’ve even forgotten that doc exists because it’s completely out of sight for you. In an educator’s context, this means that you often struggle to keep track of how students are doing, and they often struggle to keep track of what they should be doing.
When we try to understand why such a problem exists—not generically, but for a particular common workflow that feels like it should have a better solution by now—we often find ourselves going down a rabbit hole. There might be a missing technical specification. And maybe that technical specification is missing because there aren’t business drivers for software companies to coalesce around that specification. And maybe the business drivers aren’t there because the customers make purchasing decisions in a certain way. And maybe they make their purchasing decisions the way they do because of their internal organization and culture. Or laws and regulations. Or both. And so we go deeper and deeper into the rabbit hole. How many conference panels have you attended that end with you shrugging your shoulders at the seemingly intractable complexity that the panelists have brought to light?
One of the jobs I take on at e-Literate (and more generally in my professional life) is to selectively go down the rabbit holes that may actually lead somewhere other than further down. I don’t care much about technical specifications, business models, university procurement processes, faculty professional indoctrination, and so on for their own sake. I care about the educational problems that those factors influence. It’s a two-step process: (1) find the subset of hard problems that are not impossible to solve at the moment, and (2) look for an answer that is as simple as possible but no simpler. I often don’t get past step one, even with multiple lengthy posts. These are the sorts of problems that require multiple stakeholders to solve in concert. Sometimes the first step toward solving them is getting everyone to see them in roughly the same way.
We are at a moment now where the rabbit hole of digital learning experience fragmentation may be more tractable than it was before. Not easy, but perhaps possible. While there are a variety of reasons why this is true, the most obvious one is that our shared experience of pandemic life has given more of us direct experience of digital frustration, which has engendered increased empathy and urgency. More of us get the problem because more of us have lived something like it. The roots of this problem run deep, though. So deep, in fact, that I’m going to draw on blog posts that are 15 and 16 years old. I confess this is uncomfortable for me. It’s a little like taking out your journal from middle school and reading it at a party.
But 2005 – 2008 was the period when the LMS began a very important step toward solving the fragmentation problem. It’s worth taking a look back to see what’s changed, what hasn’t, and why some problems turned out to be harder than others.
Adding windows to the walled garden
Going back to one of the earliest posts on e-Literate, I wrote this in 2004:
The analogy I often make with Blackboard is to a classroom where all the seats are bolted to the floor. How the room is arranged matters. If students are going to be having a class discussion, maybe you put the chairs in a circle. If they will be doing groupwork, maybe you put them in groups. If they are doing lab work, you put them around lab tables. A good room set-up can’t make a class succeed by itself, but a bad room set-up can make it fail. If there’s a loud fan drowning out conversation or if the room is so hot that it’s hard to concentrate, you will lose students.
A good [LMS] allows for flexibility in classroom set-up. For example, when I used [a now-defunct open-source LMS]…I was able to meet with stakeholders and try different arrangements of the virtual chairs until we found one that they were comfortable with. The default set-up, which was optimised for on-campus students who took four or five courses at once and needed an aggregation portal, was completely unsuited for my audience, which was a group of full-time bond traders and other financial services people who were taking time out of their twelve-hour work days for one course. It wasn’t worth their time to learn the complex (cluttered?) interface that made sense for a full-time college student. So we streamlined. We turned off functionality that we didn’t need. We renamed pages to fit the students’ expectations. We re-arranged portlet windows on pages and page order on the interface. We did all of this in 15 minutes without any programming.
In Blackboard, you can’t do that. Sure, you can change the way the buttons look. And you can hide a button. But that’s it. It’s essentially only trivially configurable by the instructor. I strongly believe that this has a significant impact on distance learning drop-out rates.
That was the state of the LMS in 2004. You could turn menu items on and off. That was the extent of customization. Over time, these products became more flexible, to the point where at least some of them—I’ll call out Brightspace here, because I’ve seen this flexibility in action fairly recently in that product—are incredibly configurable. Of course, there’s always a trade-off between configurability and ease-of-use, particularly when you’re adding configurability to an existing product with an installed base that expects it to work a certain way. But we’ve seen progress on this first problem over the past 17 years.
This was also the period when LMS usage was growing and we began seeing the growth of web 2.0 tools. (“Web 2.0” was a term first used in public at the O’Reilly conference in late 2004, according to Wikipedia.) So we went through a moment when LMS companies started adding half-baked blogs and terrible wikis to their products. It became instantly clear that almost nobody wanted to use these LMS-internal tools.
But it quickly became apparent that educators wanted to integrate an increasing number of specialized tools into their pedagogical workflows. Here’s a story from a 2012 post about events that actually happened in the 2005-2006 time frame:
I often tell the story of when Beth Harris and Steven Zucker (formerly of FIT but now of Khan Academy) took me to see an image annotation tool developed by Columbia University that they were excited about. They were looking for a tool for teaching art history online. Columbia’s tool was really cool, but it was developed for a histology professor. It turns out that the way histology professors want to use and annotate images in the classroom is completely different than the way art history professors do. Some of these may not be sustainable as commercial applications and may work better as non-commercial open source. But, for example, teaching good writing is a pretty large niche application spanning multiple disciplines and should support significant commercial efforts.
This passage hints at the rabbit hole problem I referenced at the top of the post, but the main point is that a long tail of learning applications was beginning to develop to accommodate subject-specific learning journeys.
The trouble was that instructors had no way to integrate these into the LMS, which was the main—and in many cases, only—platform available to them for creating digital learning experiences.
I was getting increasingly involved with the Sakai LMS community around this time, in part because it seemed like a place to make progress on this problem. Here’s something I wrote in 2007, the year before IMS Global released LTI 1.0:
Just as Sakai could foster inter-institutional resource-sharing at the level of hosting an LMS, it could also host resource sharing at the level of individual tools. As Chuck Severance pointed out in a relatively recent blog post, there are now at least three methods for integrating tools into Sakai using web services. In addition to lowering the barrier of specialized Java skills, and indeed removing the requirement that your tool be written in Java at all, these methods open up the possibility of remotely hosted tools. Add WSRP and SAML support into the mix, and developers truly have a wide array of options for developing remotely hosted tools.
I would like to see these techniques refined to make Sakai the absolute best [LMS] for developing remotely hosted tools. I would like to see new support models that enable system administrators to say “yes” more often when faculty members ask for specialized tools. I would like to see new economic models that make it feasible for resource-rich universities developing specialized teaching applications to share them with even the poorest of their peers in the Sakai community, rather than keeping the use of those tools confined to the few professors within the developing institution that had the money to do the work (as happens much of the time today). I would like to see the Sakai Board seek funding for developing this newer, richer kind of open educational resource. The particular strengths of the Sakai community uniquely position it to accomplish these goals.
If you’re technical enough to know what WSRP, SAML, and Java portlets are, you’ll know that these were pretty labor-intensive and clunky solutions to the long tail integration problem. LTI was much easier to implement and was a direct driver of the explosion of learning tools that are available today for integration into LMSs. This list shows 463 apps that can be integrated with an LMS via LTI today.
But integration via LTI itself mostly means single sign-on and grade return. Depending on how the integration is implemented and how many integrations are used, we still can easily have the lost-in-too-many-windows-and-tabs navigation problem and the I-can’t-see-what-you’re-doing problem. We have integration but not true interoperability.
Here’s a bit I wrote in 2005 from a speculative piece about an idea that some colleagues and I were (mis)labeling a “Learning Management Operating System (LMOS) Service Broker”:
Now, it turns out that RSS feeds carry quite a bit of information that could be useful in a learning context. Here’s a list of just a small subset of the information available from my RSS feed, for example:
- The URL of my blog’s home page
- The ID of the blog’s author (in my case, it gives my email address as my ID)
- The software application that generated the posts
- The URL, title, description, contents, time stamp and category labels for each post
Notice two things about this data. First, it’s very generic and could be useful information about just about any content online. Second, the post-specific information (in the last bullet point) is pretty much exactly what you need to know about any assignment that a student submits for a class.
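As a concrete illustration, here is a minimal sketch (in Python, using only the standard library; the feed contents and function name are invented for this example) of pulling exactly that subset of metadata out of an RSS 2.0 feed:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed with one post, carrying the fields listed above.
# All URLs and names here are made up for illustration.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.edu/blog</link>
    <generator>ExampleBlogEngine</generator>
    <item>
      <title>Thoughts on Psych 101</title>
      <link>https://example.edu/blog/psych-101-thoughts</link>
      <description>A reflection on this week's reading.</description>
      <author>student@example.edu</author>
      <pubDate>Mon, 05 Apr 2021 13:00:00 GMT</pubDate>
      <category>Psych 101</category>
      <category>reflections</category>
    </item>
  </channel>
</rss>"""

def read_items(feed_xml):
    """Return the generic per-post metadata a broker could consume."""
    channel = ET.fromstring(feed_xml).find("channel")
    items = []
    for item in channel.findall("item"):
        items.append({
            "title": item.findtext("title"),
            "url": item.findtext("link"),
            "author": item.findtext("author"),
            "published": item.findtext("pubDate"),
            "categories": [c.text for c in item.findall("category")],
        })
    return items

posts = read_items(RSS)
print(posts[0]["categories"])  # ['Psych 101', 'reflections']
```

Notice that nothing in this code knows anything about blogs specifically; the same handful of fields would describe an essay draft, a journal entry, or any other submitted document.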
In order to start making use of this data in the context of an LMOS, the blog developer need only make a few relatively minor technical enhancements in order to plug into the service broker:
- Write an adapter that enables the RSS feed to talk to the broker. (Since RSS is a very common format, chances are good that such an adapter would already exist.)
- Tie the blog into the single sign-on mechanism, so that the LMOS knows that the person that owns a particular blog is also, say, a student in the Psychology 101 class.
- Extend the blog to be able to subscribe to category labels that are related to the groups to which the student belongs (e.g., the Psych 101 student should see a “Psych 101” category tag show up in her post category list).
Now we’re ready for some service broker automagic. Let’s say the student decides to write a blog post on a topic related to her Psych 101 class. As she writes her post, she looks over to the category list. Because the system knows that she is a registered student in Psych 101, it automatically adds “Psych 101” to her category list. She selects the appropriate category heading(s), writes her post, and publishes. The service broker, seeing that the content is labeled “Psych 101”, announces to all the applications within the Psych 101 course environment that it has some student-created content. “Can any of you applications do anything with this student-created content?” it asks.
It gets the following responses:
- The class RSS aggregator responds, “Yeah, I can do something with it.” It takes the student’s post and publishes it along with those of the other students in the class.
- The course activity tracker says, “Me too. Gimme some of that.” It notes the student ID, the time stamp, and the title and URL of the post. Using this information it adds an entry for the student’s class activity on that particular date.
- The grade book says, “I can also use that.” Noting that the content is generated by the weblog application, it pulls the post text and URL into the student’s row in the gradebook under the “weblog entries” heading. The instructor can now assign a grade and comment to it.
Notice that the weblog developer didn’t have to write separate integration code for the course activity tracker and grade book. The service broker was able to integrate the new application on-the-fly because the blog publishes the basic required knowledge in a standard format. All the blog developer had to do was write a connector that picks up the categories from the system and works with its single sign-on mechanism. The broker does the rest. It would be the same for any other application, too. You could, for example, use more or less the same mechanism to integrate your discussion board with the grade book and the course activity tracker.
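The broker pattern in the story above can be sketched in a few lines. Everything here is hypothetical (the class name, handler names, and data are invented for illustration, not drawn from any real LMOS implementation), but it shows the key property: each app registers once, new content fans out to every app that can use it, and no pairwise integration code is needed.

```python
# Hypothetical sketch of the service-broker pattern described above.
class ServiceBroker:
    def __init__(self):
        self.handlers = []

    def register(self, app_name, handler):
        """An app registers a single handler for new-content announcements."""
        self.handlers.append((app_name, handler))

    def announce(self, content):
        """Offer new content to every registered app; collect responses."""
        responses = {}
        for app_name, handler in self.handlers:
            result = handler(content)
            if result is not None:  # None means "I can't use this"
                responses[app_name] = result
        return responses

broker = ServiceBroker()
gradebook = {}

def aggregator(content):
    return f"republished {content['url']}"

def activity_tracker(content):
    return f"logged {content['author']} at {content['published']}"

def gradebook_app(content):
    # File the post under "weblog entries" for later grading.
    gradebook.setdefault(content["author"], []).append(content["url"])
    return "filed for grading"

broker.register("aggregator", aggregator)
broker.register("activity_tracker", activity_tracker)
broker.register("gradebook", gradebook_app)

post = {"author": "student@example.edu",
        "url": "https://example.edu/blog/psych-101-thoughts",
        "published": "2021-04-05",
        "categories": ["Psych 101"]}
responses = broker.announce(post)
```

Adding a fourth app means one more `register` call; neither the blog nor the broker changes at all.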
But wait. There’s more.
Suppose we make one more minor enhancement. Suppose that individual applications within the course environment could publish categories to share with each other. Suppose, for example, that the grade book could publish a category corresponding to a particular assignment. Our student could select that particular assignment category for her post and the instructor would automagically have it show up in the appropriate grade book column, with the appropriate point scale and weighting, and so on. Let’s imagine, too, that you could set your discussion board to generate a forum topic for a particular category (such as the assignment heading in the grade book) and generate a new thread for each post that comes in labeled with that category. Students could continue to post to their personal blogs that travel with them beyond the class, but the instructor could also create class-internal discussions based on those posts. This is all done using fairly generic mechanisms, so developers creating new applications won’t need to do anything special to integrate their new wiki, or simulation, or flux capacitor, or whatever with individual applications already in the course environment.
But wait. There’s still more.
Suppose that, in addition to having students publish information into the course, the service broker also let the course publish information out to the student’s personal data store (read “portfolio”). Imagine that for every content item that the student creates and owns in her personal area (blog posts, assignment drafts in her online file storage, etc.), there is also a data store to which courses could publish metadata. For example, the grade book, having recorded a grade and a comment about the student’s blog post, could push that information (along with the post’s URL as an identifier) back out to the student’s data store. Now the student has her professor’s grade and comment (in read-only format, of course), traveling with her long after the system administrator closed and archived the Psych 101 course. She can publish that information to her public e-portfolio, or not, as she pleases.
It’s the meaning in the middle
The unsung hero in the example above is RSS, which stands for “Really Simple Syndication.” It carries a lot of important information about a document, including but not limited to who wrote it, when they wrote it, what topic(s) it relates to, and what its title is. That information is valuable regardless of whether the document being shared is a blog post, a journal entry, an essay, or a collaboratively authored Google Doc. (There’s nothing in the format that limits us to one author.) Atom, which is a slightly newer cousin to RSS, also supports threading. With that additional information, we could share discussion posts and pull in contextual information about the discussion threads that the posts are in.
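Atom’s threading support comes from the Atom Threading Extension (RFC 4685), which expresses reply relationships with a `thr:in-reply-to` element. Here is a small sketch, using an invented two-post discussion feed, of recovering that thread structure with Python’s standard library:

```python
import xml.etree.ElementTree as ET

NS = {"atom": "http://www.w3.org/2005/Atom",
      "thr": "http://purl.org/syndication/thread/1.0"}

# A tiny Atom feed (contents invented): one prompt and one reply,
# linked via the RFC 4685 thr:in-reply-to element.
ATOM = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:thr="http://purl.org/syndication/thread/1.0">
  <title>Psych 101 Discussion</title>
  <entry>
    <id>tag:example.edu,2021:post-1</id>
    <title>Week 3 prompt</title>
  </entry>
  <entry>
    <id>tag:example.edu,2021:post-2</id>
    <title>Re: Week 3 prompt</title>
    <thr:in-reply-to ref="tag:example.edu,2021:post-1"/>
  </entry>
</feed>"""

def threads(feed_xml):
    """Map each entry id to the id it replies to (None for top-level posts)."""
    feed = ET.fromstring(feed_xml)
    structure = {}
    for entry in feed.findall("atom:entry", NS):
        entry_id = entry.findtext("atom:id", namespaces=NS)
        reply = entry.find("thr:in-reply-to", NS)
        structure[entry_id] = reply.get("ref") if reply is not None else None
    return structure
```

With this one generic extension, a discussion board’s posts carry enough context that any consuming platform can rebuild who replied to whom.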
If we think a little bit about how we want to use the metadata, we can think about what we want to read from and write to the app creating the document. We might want to supply an assignment name, a learning objective, or a group of collaborators from a class. The options become quite rich without us having to change anything about the metadata structure. It turns out that we share a lot of different types of narrative documents that share common traits. By making those traits comprehensible to our learning platform, we can build a fundamentally more legible, navigable, and sensible learner journey with accompanying data that enables the students’ educators to act as guides in ways that are difficult in today’s digital learning environments.
I honestly think that if we took the RSS/Atom structure and added in a representation of pedagogical intent in our curricular materials design, as I wrote about in January, we would cover quite a bit of ground in terms of creating a more seamless and supported learning journey.
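To make that concrete, here is an entirely hypothetical sketch of what such an extension might look like. The `edu` namespace and its elements are invented for illustration; nothing like them is standardized today. The point is how little the base Atom format would need to change:

```python
import xml.etree.ElementTree as ET

# Entirely hypothetical: an Atom entry extended with an invented "edu"
# namespace carrying pedagogical intent. The namespace URI and element
# names are illustrative only, not part of any real specification.
ENTRY = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:edu="https://example.edu/ns/pedagogy">
  <title>Draft: Operant Conditioning Essay</title>
  <edu:assignment>Essay 2</edu:assignment>
  <edu:objective>Explain reinforcement schedules</edu:objective>
</entry>"""

NS = {"atom": "http://www.w3.org/2005/Atom",
      "edu": "https://example.edu/ns/pedagogy"}

# A consuming platform ignores elements it doesn't understand and reads
# the pedagogical metadata when it's present.
entry = ET.fromstring(ENTRY)
assignment = entry.findtext("edu:assignment", namespaces=NS)
objective = entry.findtext("edu:objective", namespaces=NS)
```

Because XML namespaces let consumers safely ignore extensions they don’t recognize, a gradebook could read `edu:assignment` while an ordinary feed reader still displays the entry normally.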
Furthermore, this kind of interoperability creates more entry points for views into that learner journey. If the data across the platforms can be tied to a coherent pedagogical pathway, and if we have structures for getting permission to see those data, then we can view the relevant parts of that journey from anywhere that makes sense to view it from. If, for example, you’re in class using a clicker to check students’ understanding of a concept, you might want to be able to pull up relevant data from how they performed on related activities in the LMS, courseware, and maybe even other platforms. In this particular scenario, you’d likely be looking at class aggregate performance rather than the individual learner journey. In another scenario, one could imagine a department sitting down and discussing how to fine-tune the course design for a 101 class that is used by a number of different instructors. So they might want to aggregate information across the whole department to see, for example, that students struggle with one particular unit or activity across the various course sections and instructors. The group might then drill down to investigate whether the assessment questions need to be adjusted, the content needs to be enriched, whether those in-class clicker activities help improve later performance, and so on.
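As a toy illustration of that department-level scenario (all section labels, unit names, and scores here are made up), the aggregation itself is a small computation once the data can actually flow across platform boundaries:

```python
from collections import defaultdict

# Invented activity records pulled from several platforms:
# (section, unit, score out of 100).
records = [
    ("001", "Memory", 82), ("001", "Conditioning", 58),
    ("002", "Memory", 79), ("002", "Conditioning", 61),
    ("003", "Memory", 85), ("003", "Conditioning", 55),
]

def unit_averages(rows):
    """Average score per unit across all sections and instructors."""
    totals = defaultdict(list)
    for _section, unit, score in rows:
        totals[unit].append(score)
    return {unit: sum(scores) / len(scores) for unit, scores in totals.items()}

averages = unit_averages(records)
weakest = min(averages, key=averages.get)  # the unit students struggle with most
```

The hard part is not this arithmetic; it is getting the records out of their separate platforms with consistent meaning and appropriate permissions.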
I’m describing a semantic mesh of pedagogical information about a course’s design and the learner’s journey through it across application boundaries. There are no technological barriers to building such a mesh, although it requires hard work and hard thinking about preserving appropriate privacy in this new world.
I contend that we do not have such a mesh and are not making much progress toward it because the people who think about the learning journey coherence problem don’t tend to be the same people who think about data interoperability. Two types of situations have historically driven the development of EdTech interoperability specifications:
- You sell a thing. I sell another thing. Our customers want our things to talk to each other, typically to make their own lives easier. For example, they want the students from your thing to get into my thing without a lot of manual effort that tends to lose them along the way. Or they want the grade from my thing to be reported back to your thing. It’s typically simple administrative stuff.
- Speaking of grades, we need some. Our customers use our things to give students grades. Whatever else they are doing, they must also give students grades. Or progress. Or other fundamental stuff related to grading. Not learning. Grading.
Most technical interoperability conversations about tracking learning tend to involve a mix of people who know too little about pedagogy and people who know too much about it within too narrow a context. Both types tend to get lost in the weeds. We either talk about magical reusable learning objects, which could be anything from a picture to an hour-long module with many parts, or we debate taxonomies of assessment types or pedagogical philosophies round and round in circles.
We can try a number of different strategies to solve this problem once we put our minds to it. Some are social, some are technological, and the best are a mindful mix of the two. But rather than making this post even longer, I’m going to stop on this point: Before we can settle on an effective method for developing a semantic mesh for pedagogy, we need to collectively decide that it is an urgent problem to solve. I think people feel it viscerally from their pandemic experiences. But I don’t think we’re collectively articulating the problem yet.
This post is my stake in the ground.
The post Barriers to Coherent Digital Learning Experiences: A Full-Stack View appeared first on e-Literate.