Posts with the tag OmniUpdate
OU Campus components can build powerful functionality behind the scenes while providing a simple interface for users to interact with. One of the more exciting features is that your components can export XSLT, which the translator interprets when publishing.
The new version of the University of Northern Colorado’s Map has launched! It includes many improvements over the previous version that I developed in 2008. We commissioned Map Formation to design an illustrated drawing of the campus that could be overlaid on top of Google Maps or OpenStreetMap. Using the Adobe Illustrator file they provided, I coded out the new map.
In 2008 I built out the previous Map for the University of Northern Colorado which, at the time, was nice and functional. However, over the years there were major issues with maintaining the content and usability. The content was maintained in a SQL database and was managed by a web interface.
The biggest issue with the old map was wayfinding. North was not at the top of the map but rather in the right corner. The illustration was drawn this way so it could be placed on a single-page spread in a printed viewbook. That was fine for print materials when they were marketing the campus, but it hurt wayfinding.
The New Map’s User Interface
I borrowed elements from Google Maps for the user interface. Google has done a large amount of research in user experience so the “Card” layout was borrowed from them. Each clickable feature on the map has the option of showing a “card”, photo, or 360. Most features display a card which has information about the item including a photo or video, photo gallery, a text description and up to four links.
The example shown here is the Bank of Colorado feature card. The heading, image and gallery design is based on Google Maps.
Here are a few other features with options:
One of the requirements that I had when building out the map was that any user with permissions should be able to add and update features. Using our existing CMS, OmniUpdate, I created an interface with text-boxes for users to enter information for items on the map.
The CMS publishes out JSON files that the map uses to read the feature collections. The CMS makes it easy and quick to maintain the content behind each feature. If OmniUpdate wasn’t used, the features could be edited by a text editor or ported into a different CMS if needed.
As discussed above, each feature on the map can have multiple properties and media associated with it. The example here is about half the options that each feature can have. Other options include the feature’s icon, what happens when it is clicked, if it should be added to the search, and the links the card should have.
Each layer has a folder that keeps its features organized. If a new building or feature needs to be added, any user with permissions can click the new button and add the details.
The top file, _all.json.pcf, loops through the files in the folder and creates a GeoJSON file that Google Maps can parse using the loadGeoJson() function. When a feature is clicked, the information for that feature is fetched from its individual JSON file and loaded into the user interface.
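As a rough illustration, the generated file is a standard GeoJSON FeatureCollection along these lines (the coordinates, property names, and paths here are made up for the sketch; the real feature set is whatever the CMS publishes):

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [-104.6976, 40.4047] },
      "properties": {
        "name": "Bank of Colorado",
        "display": "card",
        "detail": "/maps/features/bank-of-colorado.json"
      }
    }
  ]
}
```

Google Maps can then load a file like this with map.data.loadGeoJson(), passing the published URL.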
Another feature I wanted to provide was temporary maps that could be used short term for specific events. The first one that was launched was for freshman move-in day. For the event, some parking lots have 20-minute time restrictions and popup information tents are located around the campus.
The map includes many other features and enhancements not mentioned here. Please play around with it and let me know what you think and if you have any questions.
About two years ago I wrote a gadget that would check in files across all sites in OU Campus. We have been using it since then and it has worked great! Recently, however, we started using workflow and noticed that the gadget would check back in files that are under workflow, scheduled, or set to expire. Obviously, this was a bug in the code. While reviewing the code to fix this bug, I realized it wasn’t going to be a quick, easy fix. So, instead of hacking it together, I rewrote the gadget and have released it as Version 1!
At the 2018 OU Conference in Anaheim, California, I mentioned that I am using GIT to push XSL changes to both testing and development XSL locations. Some folks expressed interest in this setup and wanted to know more about it. This post will hopefully answer some questions and help you determine what setup will work best for you.
All of our production XSL sits in its own site called “XSL”; our development XSL sits in two sites called “XSL-dev” and “XSL-Jesse”. If you are going to use this setup, start with one development instance. In the write-up below I will talk about “XSL-dev”.
The PCFs that are in the sites “arts”, “extended”, “mcb”, and “www” reference the “XSL” site for their XSL. The reason these are broken up into their own sites is that they are subdomains. The fewer sites in OU Campus the better as it makes managing the system a whole lot easier. But that’s a different post.
Connecting a PCF to the XSL
The second line of every PCF file references the path to the XSL location, including an optional attribute for the site:
<?pcf-stylesheet site="XSL" path="/xsl/interior-rows.xsl" title="Interior Page" extension="aspx" ?>
If you don’t include the site attribute, the XSL parser assumes the site the PCF exists on. Many installations don’t include this attribute so many folks don’t know about it. By single sourcing your XSL, it doesn’t need to be duplicated across multiple sites and is managed in one place.
The “XSL-dev” site contains the same files as the “XSL” site with development modifications. With my personal XSL development workflow, I may modify multiple XSL templates at a time and don’t want to copy files to make changes (e.g. common-1.xsl). I also make changes via trial and error so I expect to throw errors and can’t make these changes on production XSL. That’s where the “XSL-dev” site comes in. Let’s say I’m making a new snippet and I want to build and test it out. I will create a PCF page and change the site to “XSL-dev”:
<?pcf-stylesheet site="XSL-dev" path="/xsl/interior-rows.xsl" title="Interior Page" extension="aspx" ?>
The path stays the same since the files are identical other than the modifications I am about to make, or am in the process of making. I can now modify all the XSL files in “XSL-dev” without risk of breaking something in production.
Writing XSL outside of OU Campus using WebDAV
XSL is written outside of OU Campus using Dreamweaver (put the pitchfork down and keep reading) and published to OU Campus. You can use whatever application or IDE (Integrated Development Environment) you would like as long as it allows for a WebDAV connection. Ideally, it also has a keyboard shortcut for publishing.
In the site settings for your XSL site, navigate down to “Optional Features” and make sure “WebDAV” is checked.
Next, go to “Setup”/”Users” and find your username. Scroll down to “Restrictions” and make sure “Allow WebDAV Access” is checked. A textbox will appear below with the “WebDAV URL”; copy that.
In your IDE set up a site/folder where your XSL will be saved locally. Use the WebDAV URL as the remote/publish location. You will also need your OU Campus username, password, and the site name. Below is an example of this setup using Dreamweaver.
Append the site name to the end of the URL and fill in your username and password. Start with the production version of your XSL. Now you should be able to download all of your XSL to your local machine using your IDE. Initialize a new GIT repository locally and push it to your favorite GIT server.
Version Control using GIT
Each site has its own GIT repo associated with it. We are using Bitbucket to host them but you can use GitLab, GitHub or any other GIT server. We have added the prefix of OMNI to the start of our repositories so we can identify them.
The “OMNI XSL-Dev” and “OMNI XSL-Jesse” repositories are forks of the “OMNI XSL” repository, which means “OMNI XSL” needs to be created first and then forked once for each development instance you want to have. After the “OMNI XSL” repository is set up, create a fork of it to become your development repo; in this example, that is the “OMNI XSL-Dev” repository. Clone that repository onto your local machine. You now have two repositories on your local machine: one for development and one for production. The screenshot below is from GitKraken, but again, use whatever GIT interface (or terminal) you are comfortable with:
In your IDE, set up a second site/folder for your development repository with WebDAV pointing to your OU Campus development site, in this example “XSL-dev”.
Now you have both production and development XSL on your local machine, source controlled with GIT and connected to OU Campus via WebDAV. One of the reasons I separate development changes into their own repository is that I currently work on many branches at one time, some of which will never see production. By keeping my own set of XSL templates, I can keep them as messy or clean as I want. Below is every unmerged branch that exists in “OMNI XSL-Jesse”:
That might drive my coworkers crazy, it might drive you crazy! Compare that to the “OMNI XSL” unmerged branches:
It is also harder to accidentally open a pull request against a different repository than it is to accidentally merge a feature branch into master.
Here is an example of a full workflow from development to production:
1. Create a new PCF file and change the site to “XSL-dev”.
2. Check out a new branch in “OMNI XSL-Dev”.
3. In your IDE open “XSL-dev” and develop something really cool.
   - You will probably throw all sorts of XSL errors, which is why we are in development.
   - Make a change, publish to OU Campus, preview the page, repeat. This is where the keyboard shortcut comes in handy. Publishing to see changes is usually really quick, even though it may sound clunky.
4. Commit your finished changes to your “XSL-dev” repository and push the branch you created in step 2 to Bitbucket.
5. In Bitbucket, open a pull request from your feature branch into the “OMNI XSL” repository.
   - If you have a team, do a code review!
   - Reject changes if needed, go back to step 3, and continue developing.
   - If everything looks good, continue to step 6.
6. Accept the pull request in the production repository.
7. Pull the merged changes from the “OMNI XSL” repository onto your local machine.
8. Publish the production XSL changes to the production XSL OU Campus site.
9. Go back to Bitbucket and sync your development fork back with master.
10. Switch to your master branch in “XSL-dev”, pull it locally, and delete the feature branch that has been merged.
11. Change the site back to “XSL” in the PCF from step 1, or delete the file if you no longer need it. (Or forget about it, as I often do!)
I usually forget to do steps 9 and 10, which isn’t a problem as long as they happen every once in a while.
You know what would be super cool? Use a GIT hook to publish the changes from OMNI XSL to OmniUpdate!
90% of the commits to our XSL happen this way, but every once in a while an emergency hotfix gets committed straight to master, bypassing the workflow. That change is always committed and then distributed down to the forked repositories. So it’s not as strict as it looks!
That is the workflow we use at the University of Northern Colorado, and it might not be appropriate for all users. The write-up makes it look more intense than it really is. Feel free to pick and choose the parts that will work with your organization’s business processes. It might be a good idea to test this out in your sandbox instance from OUTC before going into production.
The versioning for dev and production doesn’t have to be in their own sites. Within your “XSL” site (or even default site) you could have a folder for production and testing and change the path attribute rather than the site attribute. This is just the way I set it up in the beginning and it works for me.
As always, use caution and test everything.
At the 2017 OmniUpdate Training Conference Hackathon a group created an application that would connect Amazon’s Alexa with OU Campus. The tweet to the right is a demonstration of asking Alexa to check in files for OU Campus. The application will also return the number of files checked out.
The University of Northern Colorado has recently embraced all things analytics, and as part of that we had a need to add event-tracking code to buttons. The best use case is when a PDF or document needs to be tracked: since that content can’t carry the analytics code itself, the link to it can fire an event that is tracked in Google Analytics.
The Google Event Tracking Gadget will help web contributors create event tracking codes on links.
This gadget works with the most recent version of Google Analytics, or analytics.js. Full documentation about events can be found within Google’s Event Tracking Documentation.
The final code that is produced by the gadget is this:
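The gadget’s exact output isn’t reproduced here, but based on the analytics.js event syntax the gadget targets, the generated link looks roughly like this (the href, category, and action values are placeholders; the label “Jesse” matches the event shown below):

```html
<!-- Illustrative reconstruction: the ga('send', 'event', ...) call takes
     event category, action, and label in that order. -->
<a href="/docs/report.pdf"
   onclick="ga('send', 'event', 'Documents', 'Download', 'Jesse');">
  Download the report
</a>
```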
Test to see if it worked
Publish the file with OU Campus and click the link. Next, log in to Google Analytics and open a view that includes that page. Select “Real-Time” / “Events”. The current events happening on the site will be displayed; it usually takes a few seconds for an event to show up.
Item #3 below is the event that was fired when the link from the code above was clicked on.
Clicking into the row will show the Event Label, which in this case was “Jesse”
Using a server rewrite, a URL such as http://example.org/jesse could be served from http://example.org/jesse.php. However, in the content management system the link would point to the .php version. How can the URL be rewritten from:
Our web authors love icons, and for the longest time they inserted ones they found from Google searches. Our designer would create custom icons for sites, which were converted to SVGs and placed in the page. Both of these solutions worked but were clunky. Luckily, icon fonts and icon sets have been popping up that give end users an array of icons to use on their pages.
Non-content tags in WYSIWYG editors
The biggest challenge with inserting icons into a WYSIWYG interface is that they are usually only code and don’t render until after you preview the page. Most icon code looks similar to this:
<i class="material-icons xl UNC-blue">assessment</i>
In this example, the word assessment would be written on the page without any context. Within the WYSIWYG editor, a user’s cursor could get stuck inside the tag, which can be difficult to get out of, and any content typed inside the tag would be hidden by CSS.
OU Campus users are familiar with table transformations as it allows users to create tab navigation, accordions and other content structures where some of the content may be hidden when the page loads. This solution works great in edit mode because the web author can edit all of the content on the page. If your installation was like ours, you may have the ‘Place cursor at the end of this sentence to add more content’ content which helps with the cursor issue in WYSIWYG editors with tables.
I’d like to introduce the concept of an image transformation for icons. The benefit of using an image instead of a table is that the tag is inline, so icons can be placed within content. When the user inserts an image with a specific class into the page, it gets converted into an icon.
In the picture above the source of the image is a placeholder; it doesn’t matter which image it is because it will be replaced using XSL on preview and publish. (The source could be a dependency tag if the image lived within OU, but this image will never be served to the public.)
The description is one of hundreds of icon names that could be inserted. This demo uses Google’s Material Icon set, but any icon set would work.
The class field holds the additional classes the tag needs to render the icon. material-icons is the parent class used by Google and is what the XSL match below searches for. xl is the size (extra-large) and gold is the color of the icon. The dimensions are also added, but they only affect the editor.
After hitting [OK], the WYSIWYG editor puts an image in the editor that we can edit just like an image:
<img class="material-icons xl gold" src="https://www.unco.edu/omni/gadgets/icons/img/placeholder.svg" alt="account_balance_wallet" width="200" height="200" />
On preview and publish the image needs to be replaced with the icon code. A simple template match to find the class .material-icons will work:
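A sketch of such a match, assuming the class and alt conventions from the example above (namespace declarations omitted):

```xml
<!-- Replace placeholder images whose class contains 'material-icons'
     with an inline icon element on preview/publish. -->
<xsl:template match="img[contains(@class, 'material-icons')]">
  <i class="{@class}">
    <!-- The alt text holds the icon's short code, e.g. account_balance_wallet. -->
    <xsl:value-of select="@alt"/>
  </i>
</xsl:template>
```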
So you expect web authors to remember all these steps?
This solution works great if you follow these steps and know the short code for each icon, but expecting web authors to know this and go through the process each time is not reasonable. This is where the Material Icons Gadget comes in! With a sidebar gadget, the user can select an icon, size, and color from dropdowns and then insert the code into the editor. The only downside is that in edit mode they see the placeholder image.
Take a look at the Readme file at the root of the GitHub repository for detailed instructions. CSS for the icon set, colors and sizes needs to be added. The XSL transformation needs to be applied and the gadget needs to be installed.
Google’s icon set was used here but any icon set could work. In fact the gadget could be expanded to provide multiple icon sets for users. If you feel like contributing, please do on Github!
In the previous tutorial, data was pulled into a page using OU Campus’ Tags. In this tutorial, data will be pulled in based on a folder location, sorted and then displayed.
Get All the Journal Data Files and Organize them by File Name
In case you haven’t read the previous tutorial, Data files are standalone .pcf files that are similar to a database entry. Each file contains the information for a single journal entry: Journal Name, Description, URL and a dropdown for Active, which will tell the XSL if the data file should be pulled in or not.
All the data files exist in one folder and XSL can grab those files and parse the data. The XSL needs to get those file names and sort them alphabetically:
Two variables are set up. The first tells the XSL where to look for the data files; in this demo, the value is hard-coded. The second is a variation of the first that, for OU Campus, includes the root and the site. These are separated into individual variables because $data-folder is used later when tags are pulled in.

Next, a variable $sortedcopy is defined by looping through all of the files in the directory. This is stored as a variable because preceding-sibling is used in the next code snippet, and it and <xsl:sort /> cannot be used in the same loop.
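A sketch of what those variables might look like. The $ou:root and $ou:site parameters and the use of fn:collection() to enumerate the folder are assumptions here; your installation may list files differently:

```xml
<!-- Where the journal data files live; hard-coded for this demo. -->
<xsl:variable name="data-folder" select="'/library/databases/data'"/>

<!-- Same folder, prefixed with the root and site so doc() can resolve it. -->
<xsl:variable name="data-location" select="concat($ou:root, $ou:site, $data-folder)"/>

<!-- A copy of the file list, sorted alphabetically, so preceding-sibling
     walks the sorted order in the next snippet. -->
<xsl:variable name="sortedcopy">
  <xsl:for-each select="collection(concat($data-location, '?select=*.pcf'))">
    <xsl:sort select="document-uri(.)"/>
    <file><xsl:value-of select="tokenize(document-uri(.), '/')[last()]"/></file>
  </xsl:for-each>
</xsl:variable>
```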
Loop Through the Sorted Data Files
The following for-each statement is a bit longer, and could be separated out into multiple functions, but then the parameters would need to be passed to each function. This decision would depend on how much of the code is reused. For this tutorial, it is a single loop and is split up into three separate code snippets below:
Each data file in the $sortedcopy variable is looped on line 2. Variables are set up for the file name (line 5), and then the content of that data file is stored in a second variable (line 6). Next, the page type is checked to make sure it is a library-database file; in many OU instances, a _props.pcf file may exist that should not be pulled in.
Does the Journal Need an A-Z Heading?
On the A-Z Listing page, the journals should be listed in alphabetical order under letter headings. If a journal’s title starts with a new letter, then a heading for that letter should be displayed. The XSL functions below have been separated onto individual lines for better readability, but in practice they could be on one line.
Line 3 gets the first character of the current title using the XSL function substring(string, start, length). Then line 2 converts it to uppercase with the XSL function upper-case(string). If the character were not converted to uppercase, the lowercase version would be treated as a different letter.
This is my favorite part of the code. To get the previous starting letter, the preceding-sibling axis can be used. Line 10 should be true unless this is the first node in the data set; if this is not checked, the first item will fail and an XSL error will be thrown. The first part of line 13, doc(concat($data-location,'/', preceding-sibling::file)), starts by selecting the previous data file, then gets the database name: /document/ouc:properties/parameter[@name='database-name']. Similar to the current letter’s code, it selects just the first letter and then capitalizes it. Now the current letter and previous letter are both stored as variables.
Need a Heading?
On line 20 the current letter is compared against the previous letter and if it isn’t the same, or if it is the first letter, a heading with that letter is displayed.
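Putting the pieces above together, the heading logic might be sketched like this (line numbering differs from the original snippet; the ouc namespace declaration is omitted):

```xml
<xsl:for-each select="$sortedcopy/file">
  <xsl:variable name="content" select="doc(concat($data-location, '/', .))"/>

  <!-- First character of the current title, uppercased. -->
  <xsl:variable name="current-letter"
    select="upper-case(substring($content/document/ouc:properties/parameter[@name='database-name'], 1, 1))"/>

  <!-- First character of the previous file's title, if there is one. -->
  <xsl:variable name="previous-letter">
    <xsl:if test="preceding-sibling::file">
      <xsl:value-of
        select="upper-case(substring(doc(concat($data-location, '/', preceding-sibling::file[1]))/document/ouc:properties/parameter[@name='database-name'], 1, 1))"/>
    </xsl:if>
  </xsl:variable>

  <!-- A new letter (or the very first entry): output a heading. -->
  <xsl:if test="$current-letter != $previous-letter">
    <h2><xsl:value-of select="$current-letter"/></h2>
  </xsl:if>

  <!-- Journal output continues here. -->
</xsl:for-each>
```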
Display the Journal’s Information
The journals now need to be displayed. If there are a lot of journals, the description could be hidden; in the final UNC Libraries version, accordions are used.
Lines 2 – 4 display the name, description and create a link for the database.
Lines 7-9 call a template that will get the tags for the page. Here the $page-path parameter, that was defined in the first code example, is used again. This template is described next.
Finally, lines 14 and 17 close open tags from the code snippet above.
List the Subjects Associated with the Journal
Below the link to the journal database, there will be a list of the subject tags that this journal is associated with. This will help with usability for the end users who may not be aware that the categories exist. These subject tags will link to the subject page so the user can explore more databases.
$page-path is passed to the template and, on line 5, the tags associated with the data file are returned as the variable $page-tags. On line 6 the tags are looped, and on lines 8-10 each tag is passed to an additional template to display it.
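A sketch of that template. The shape of the node set that GetTags() returns, and the template names, are assumptions here:

```xml
<xsl:template name="ListTags">
  <!-- Path of the data file whose tags we want. -->
  <xsl:param name="page-path"/>

  <!-- Tags attached to the data file; node shape assumed. -->
  <xsl:variable name="page-tags" select="GetTags($page-path)"/>

  <xsl:for-each select="$page-tags/tag">
    <xsl:call-template name="DisplayTag">
      <xsl:with-param name="tag" select="@name"/>
    </xsl:call-template>
  </xsl:for-each>
</xsl:template>
```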
Display the Tag and Link to the Subject Page
Bootstrap and Foundation have a class for tags called .label. This will be wrapped with an anchor to link to that subject page.
The variable on line 6 removes the prefix ‘library-database’, which was the naming convention used in the previous blog post. On line 9 an anchor links to that subject page; this only works if the tag name and the subject file name are exactly the same. If the files are in a subfolder, that should be defined here.
Line 12 replaces the underscore in a tag with a space. This naming convention allows for a tag with multiple words. As an example: library-database-hispanic_studies is displayed as hispanic studies but keeps the URL of hispanic_studies.aspx.
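A sketch of the tag-display template; the template name, the subject-folder path, and the exact prefix string are illustrative:

```xml
<xsl:template name="DisplayTag">
  <xsl:param name="tag"/>

  <!-- Strip the 'library-database-' prefix from the tag name. -->
  <xsl:variable name="subject" select="substring-after($tag, 'library-database-')"/>

  <!-- Link to the subject page; tag name and file name must match exactly. -->
  <a href="{concat('/library/subjects/', $subject, '.aspx')}" class="label">
    <!-- Show underscores as spaces: hispanic_studies -> hispanic studies -->
    <xsl:value-of select="translate($subject, '_', ' ')"/>
  </a>
</xsl:template>
```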
Thanks for Reading
The Library website at the University of Northern Colorado has a list of academic journals that they subscribe to. This list is constantly changing and needs to be updated by Library staff. The academic journals are organized in an A-Z listing as well as organized by subject. Each journal can belong to a single subject or multiple subjects.
If this was a SQL database there would be three tables. One table for the journals, a table for subjects and a table linking between the subjects and the journals.
With OU Campus ‘tags’ can be added to the page to create the link between journals and subjects. On each subject page, the journals with that individual tag get pulled in.
Data files are standalone .pcf files that are similar to a database entry. Each file contains the information for a single journal entry: Journal Name, Description, URL, and a dropdown for Active, which tells the XSL whether the data file should be pulled in. Line 9 defines the page type as library-database, which will be used in case other OU users assign tags to their pages.
Data File XSL
In the XSL for the Data File, the tags associated with the page are looped over and displayed. GetTags() returns a node set of the page’s tags; looping over it displays each tag’s name.
The next step (it will take some time but only has to be done once) is to create a data file for every Library Academic Journal.
The data files will be pulled in based on the tags associated with them. Each data file can have an unlimited number of tags, allowing it to belong to an unlimited number of subject areas. A tag needs to be created for each subject area. This may also take some time and could be done while the data file is being created. A naming convention would be a good idea; for this demo, library-database-name is used.
Tags are stored on OU Campus and not with the individual .pcf file, so if you have cloned the repo you will need to add tags to each data file. It would also be a good idea to duplicate the data file to have more than one page to pull in.
Now that the data files have been created and assigned tags, they need to be pulled into the subject pages. The tag that will be pulled in is the first tag associated with the subject page. For the demo, the subject page will pull in Accounting journals: attach the library-database-accounting tag to this subject page.

Below is the XSL for the subject pages. On line 4, the first tag associated with the page is assigned to the variable $page-tag. Then all pages with that tag are assigned to the variable $tag-select. Finally, the data files are looped and the path of each data file is passed to a template to display the content:
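A sketch of that selection logic. GetPagesByTag is a stand-in name for however your installation resolves a tag to its pages, and the node shapes are assumptions:

```xml
<!-- The first tag attached to this subject page,
     e.g. library-database-accounting. -->
<xsl:variable name="page-tag" select="GetTags($ou:path)/tag[1]/@name"/>

<!-- All pages carrying that tag; GetPagesByTag is a placeholder for
     your installation's tag-query helper. -->
<xsl:variable name="tag-select" select="GetPagesByTag($page-tag)"/>

<!-- Loop the matches and hand each path to the display template. -->
<xsl:for-each select="$tag-select/page">
  <xsl:call-template name="GetContentFromSingleDataFile">
    <xsl:with-param name="path" select="@path"/>
  </xsl:call-template>
</xsl:for-each>
```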
On line 16 of the previous code example, the data file’s path is passed to the template GetContentFromSingleDataFile. More code could be nested here, but breaking it up into multiple templates is easier to read and gives better organization and reusability of code.
This code snippet takes the path as a parameter (line 3) and creates a variable of the full path to the file (line 6). Next, the content of the page is stored as a new variable (line 9). Lines 6 and 9 could be combined into a single variable.
Line 12 checks that the page type is a library-database file. This is important since anyone can assign tags to a page in OU Campus; by selecting only pages with the page type of library-database, no renegade pages will be pulled in.
Checking that the Active dropdown is set to True (line 15) allows end users to turn databases on and off as access changes.
Finally, the database name, description, and link are output.
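A sketch of the GetContentFromSingleDataFile template described above. Apart from database-name, the parameter names (pagetype, active, database-url, database-description) are assumptions, and the ouc namespace declaration is omitted:

```xml
<xsl:template name="GetContentFromSingleDataFile">
  <!-- Path of the data file, passed in from the tag loop. -->
  <xsl:param name="path"/>

  <!-- Full path to the file on staging. -->
  <xsl:variable name="full-path" select="concat($ou:root, $ou:site, $path)"/>

  <!-- Contents of the data file. -->
  <xsl:variable name="content" select="doc($full-path)"/>

  <!-- Only real data files: anyone can tag a page in OU Campus. -->
  <xsl:if test="$content/document/ouc:properties/parameter[@name='pagetype'] = 'library-database'">
    <!-- Only databases currently marked Active. -->
    <xsl:if test="$content/document/ouc:properties/parameter[@name='active'] = 'true'">
      <h3>
        <a href="{$content/document/ouc:properties/parameter[@name='database-url']}">
          <xsl:value-of select="$content/document/ouc:properties/parameter[@name='database-name']"/>
        </a>
      </h3>
      <p>
        <xsl:value-of select="$content/document/ouc:properties/parameter[@name='database-description']"/>
      </p>
    </xsl:if>
  </xsl:if>
</xsl:template>
```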
The final step would be to create subject pages for each subject. UNC has about 50 academic journal subjects and a page will be created for each one as well as a master A-Z listing. If there is interest, I can write up an article about the listing page which won’t use the tags since every journal will get pulled in.
This is my first technical post so feedback is welcomed. The full source code is on GitHub.
Our Content Management System, OmniUpdate, allows users to create reusable pieces of content called assets. As of this post, there are five different types of assets, and two of them allow the user to add source code: Web Content and Source Code.
At the 2016 OmniUpdate Hackathon I joined a group of six people who built a gadget that unpublishes files from the web server. The idea started out as a simple sidebar gadget that, when clicked, would go to the production server and remove the file while keeping it on the staging server. After about two hours of coding the gadget was “completed”. At that point, if you unpublished a file and navigated to a different page, the button was still disabled. This was a known bug and I reported it the next day. Yesterday I had some time to fix this error, and I believe the gadget is ready to be used.