Posts with the tag XSLT

  • OU Campus XSL Development for Production and Testing using Git

    At the 2018 OU Conference in Anaheim, California, I mentioned that I am using Git to push XSL changes to both testing and development XSL locations. Some folks expressed interest in this setup and wanted to know more about it. This post will hopefully answer some questions and help you determine what setup will work best for you.

    XSL Location

    Site Drop Down List

    All of our production XSL sits on its own site called “XSL”; our development XSL sits in two sites called “XSL-dev” and “XSL-Jesse”. If you are going to use this setup, start with one development instance; for the write-up below I will talk about “XSL-dev”.

    The PCFs that are in the sites “arts”, “extended”, “mcb”, and “www” reference the “XSL” site for their XSL. The reason these are broken up into their own sites is that they are subdomains. The fewer sites in OU Campus the better as it makes managing the system a whole lot easier. But that’s a different post.

    Connecting a PCF to the XSL

    The second line of every PCF file references the path to the XSL location, including an optional attribute for the site:

    <?pcf-stylesheet site="XSL" path="/xsl/interior-rows.xsl" title="Interior Page" extension="aspx" ?>

    If you don’t include the site attribute, the XSL parser assumes the XSL lives on the same site as the PCF. Many installations don’t include this attribute, so a lot of folks don’t know about it. By single-sourcing your XSL, it doesn’t need to be duplicated across multiple sites and can be managed in one place.

    The “XSL-dev” site contains the same files as the “XSL” site with development modifications. With my personal XSL development workflow, I may modify multiple XSL templates at a time and don’t want to copy files to make changes (e.g. common-1.xsl). I also make changes via trial and error so I expect to throw errors and can’t make these changes on production XSL. That’s where the “XSL-dev” site comes in. Let’s say I’m making a new snippet and I want to build and test it out. I will create a PCF page and change the site to “XSL-dev”:

    <?pcf-stylesheet site="XSL-dev" path="/xsl/interior-rows.xsl" title="Interior Page" extension="aspx" ?>

    The path stays the same since the files are identical other than the modifications I am about to make, or am in the process of making. I can now modify all the XSL files in “XSL-dev” without risk of breaking something in production.

    Writing XSL outside of OU Campus using WebDAV

    XSL is written outside of OU Campus using Dreamweaver (put the pitchfork down and keep reading) and published to OU Campus. You can use whatever application or IDE (Integrated Development Environment) you would like as long as it allows for a WebDAV connection. Ideally, it also has a keyboard shortcut for publishing.

    In the site settings for your XSL site, navigate down to “Optional Features” and make sure “WebDAV” is checked.

    Optional Features - WebDAV

    Next, go to “Setup”/”Users” and find your username. Scroll down to “Restrictions” and make sure the “Allow WebDAV Access” box is checked. A textbox will appear below with the “WebDAV URL”; copy that.

    User Restrictions

    In your IDE set up a site/folder where your XSL will be saved locally. Use the WebDAV URL as the remote/publish location. You will also need your OU Campus username, password, and the site name. Below is an example of this setup using Dreamweaver.

    Site setup in Dreamweaver

    Append the site to the end of the URL and fill in your username and password. Start with the production version of your XSL. You should now be able to download all of your XSL to your local machine using your IDE. Initialize a new Git repository locally and push it to your favorite Git server, where you have created a corresponding remote repository.

    Version Control using Git

    Each site has its own Git repo associated with it. We are using Bitbucket to host them, but you can use GitLab, GitHub, or any other Git server. We have added the prefix OMNI to the start of our repository names so we can identify them.

    Bitbucket Repo List

    The “OMNI XSL-Dev” and “OMNI XSL-Jesse” repositories are forks of the “OMNI XSL” repository, which means that “OMNI XSL” needs to be created first and then forked once for each development instance that you want to have. After the “OMNI XSL” repository is set up, create a fork of it, which will become your development repo; in this example, it is the “OMNI XSL-Dev” repository. Clone that repository onto your local machine. You now have two repositories on your local machine, one for development and one for production. The screenshot below is from GitKraken, but again, use whatever Git interface (or terminal) you are comfortable with:

    Git Kraken Repos

    In your IDE, set up a second site/folder for your development repository with WebDAV that points to your OU Campus development site, in this example “XSL-dev”.

    Now you have both production and development XSL on your local machine, source controlled with Git and connected to OU Campus via WebDAV. One of the reasons I separate development changes into their own repository is that I currently work on many branches at one time, some of which will never see production. By having my own set of XSL templates I can keep them as messy or clean as I want. Below is every unmerged branch that exists in “OMNI JESSE XSL”:

    Git Branch Branches

    That might drive my coworkers crazy, it might drive you crazy! Compare that to the “OMNI XSL” unmerged branches:

    Master Unmerged Changes

    It is also harder to accidentally open a pull request against a different repository than it is to accidentally merge a feature branch into master.

    Full workflow

    Here is an example of a full workflow from development to production:

    1. Create a new PCF file and change the site to “XSL-dev”.
    2. Check out a new branch in “OMNI XSL-Dev”.
    3. In your IDE open “XSL-dev” and develop something really cool.
      • You will probably throw all sorts of XSL errors, which is why we are in development.
      • Make a change, publish to OU Campus, preview page, repeat. This is where the keyboard shortcut comes in handy. Publishing to see changes is usually really quick, even though it may sound clunky.
    4. Commit your finished changes to your “XSL-dev” repository. Push the branch you created in step 2 to Bitbucket.
    5. In Bitbucket do a pull request into the “OMNI XSL” repository from your feature branch.
      • If you have a team, do a code review!
      • Reject changes if needed and go back to step 3 and continue developing.
      • If everything looks good, continue to step 6.
    6. Accept Pull Request in production repository.
    7. Pull the changes from the pull request in the “OMNI XSL” repository onto your local machine.
    8. Publish the production XSL changes to the production XSL OU Campus site.
    9. Go back to Bitbucket and sync your development copy back with master.
    10. Switch to the master branch in “XSL-dev” and pull to update your local copy. Delete the feature branch that has been merged.
    11. Change the site back to “XSL” in the PCF from step 1. Or delete the file if you no longer need it. Or forget about it, as I often do!

    I usually forget to do steps 9 & 10, which isn’t a problem as long as it happens every once in a while.

    You know what would be super cool? Using a Git hook to publish the changes from “OMNI XSL” to OmniUpdate!

    90% of the commits to our XSL happen this way, but every once in a while an emergency hotfix does get committed straight to master, bypassing the workflow. However, that change is always committed and distributed down to the forked repositories. So it’s not as strict as it looks!

    Hotfix Change

    Final thoughts

    That is the workflow we use at the University of Northern Colorado, and it might not be appropriate for all users. The write-up makes it look more intense than it really is. Feel free to pick and choose the parts that will work with your organization’s business processes. It might be a good idea to use your sandbox instance from OUTC to test this out before going into production.

    The dev and production versions don’t have to live in their own sites. Within your “XSL” site (or even your default site) you could have separate folders for production and testing and change the path attribute rather than the site attribute. This is just the way I set it up in the beginning, and it works for me.

    As always, use caution and test everything.

  • Tackling Quality Control with XSL

    Tackling Quality Control with XSL first slide

  • 2017 OmniUpdate User Training Conference

    Pulling in Content With OU Tags & Data Files Slide 1

  • Removing Extensions with XSL to create SEO Friendly URLs

    Using a server rewrite, a URL such as http://example.org/jesse could read from http://example.org/jesse.php. However, in the content management system the link would go to the .php version. How can the URL be rewritten from:

  • Create an A-Z directory with XSL

    In the previous tutorial, data was pulled into a page using OU Campus’ Tags. In this tutorial, data will be pulled in based on a folder location, sorted and then displayed.

    View a simplified but workable code example on GitHub.

    Get All the Journal Data Files and Organize them by File Name

    In case you haven’t read the previous tutorial: data files are standalone .pcf files that are similar to a database entry. Each file contains the information for a single journal entry: Journal Name, Description, URL and a dropdown for Active, which tells the XSL whether the data file should be pulled in or not.

    All the data files exist in one folder, and XSL can grab those files and parse the data. The XSL needs to get those file names and sort them alphabetically.

    Two variables are set up: the first tells the XSL where to look for our data files. In this demo, the value is hard-coded. The second variable is a variation of the first but, for OU Campus, includes the root and the site. These are separated into individual variables because $data-folder is used later when tags are pulled in.

    Next, a variable named $sortedcopy is defined by looping through all of the files in the directory. The result is stored as a variable because preceding-sibling is used in the next code snippet, and preceding-sibling and <xsl:sort /> cannot be used in the same loop.
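    The original snippet isn’t reproduced here, but a minimal sketch of the two variables and the sorted copy might look like the following. The folder path is hypothetical, the $ou:root and $ou:site variables are assumed to come from your standard OU Campus variables include, and the doc() call assumes the staging server returns a directory listing containing <file> entries; adjust that selection to whatever your listing actually returns.

    <xsl:variable name="data-folder" select="'/library/databases'"/> <!-- hypothetical folder path -->
    <xsl:variable name="data-location" select="concat($ou:root, $ou:site, $data-folder)"/>

    <!-- Sorted copy of the file listing; sorting happens here because
         <xsl:sort/> and preceding-sibling cannot be used in the same loop. -->
    <xsl:variable name="sortedcopy">
      <xsl:for-each select="doc($data-location)//file">
        <xsl:sort select="." data-type="text" order="ascending"/>
        <xsl:copy-of select="."/>
      </xsl:for-each>
    </xsl:variable>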

    Loop Through the Sorted Data Files

    The following for-each statement is a bit longer and could be separated out into multiple functions, but then the parameters would need to be passed to each function. That decision depends on how much of the code is reused. For this tutorial, it is a single loop, split up into three separate code snippets below:

    Each data file in the $sortedcopy variable is looped over on line 2. Variables are set up for the file name (line 5), and then the content of that data file is stored in the second variable (line 6).

    Next the page type is checked to make sure it is a library-database file. In many OU instances, a _props.pcf file may exist that should not be pulled in.
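    Again, this is not the original code, only a sketch of the loop’s skeleton under the same assumptions as the previous snippet (line numbers will not match the ones referenced above). The page-type parameter name is an assumption based on the description:

    <xsl:for-each select="$sortedcopy/file">
      <xsl:variable name="file-name" select="."/>
      <xsl:variable name="file-content" select="doc(concat($data-location, '/', $file-name))"/>
      <!-- Skip anything that is not a library-database data file (e.g. _props.pcf) -->
      <xsl:if test="$file-content/document/ouc:properties/parameter[@name='page-type'] = 'library-database'">
        <!-- A-Z heading check and journal output (sketched in the next snippets) go here -->
      </xsl:if>
    </xsl:for-each>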

    Does the Journal Need an A-Z Heading?

    On the A-Z Listing page, the journals should be listed out under alphabetical headings. If a journal’s title starts with a new letter, then a heading for that letter should be displayed. The XSL functions below have been separated out onto individual lines for better readability, but in practice they could be on one line.

    Current Letter

    Line 3 gets the first character of the current title using the XSL function substring(string, start, length). Then, on line 2, it is converted to uppercase with the XSL function upper-case(string). If the character were not converted to uppercase, the lowercase version would be treated as a different letter.
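    A sketch of what that might look like, assuming the $file-content variable from the loop sketched above and the database-name parameter referenced below:

    <!-- First character of the current title, uppercased -->
    <xsl:variable name="current-letter"
      select="upper-case(
                substring($file-content/document/ouc:properties/parameter[@name='database-name'], 1, 1)
              )"/>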

    Previous Letter

    This is my favorite part of the code. To get the previous starting letter, the preceding-sibling axis can be used. The test on line 10 is true unless this is the first node in the data set. If this is not checked, the first item will fail and an XSL error will be thrown.

    The first part of line 13, doc(concat($data-location,'/', preceding-sibling::file[1])), starts by selecting the previous data file, then gets the database-name: /document/ouc:properties/parameter[@name='database-name']. Similar to the current letter’s code, it selects just the first letter and then capitalizes it. Now the current letter and previous letter are both stored as variables.
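    A sketch of the previous-letter variable, written to sit inside the same for-each; the structure is an assumption pieced together from the description above:

    <xsl:variable name="previous-letter">
      <!-- Only look backwards if a previous data file exists -->
      <xsl:if test="preceding-sibling::file[1]">
        <xsl:value-of
          select="upper-case(
                    substring(
                      doc(concat($data-location, '/', preceding-sibling::file[1]))
                        /document/ouc:properties/parameter[@name='database-name'],
                      1, 1)
                  )"/>
      </xsl:if>
    </xsl:variable>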

    Need a Heading?

    On line 20 the current letter is compared against the previous letter and if it isn’t the same, or if it is the first letter, a heading with that letter is displayed.
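    A sketch of that check; whether the original used an <h2> (or some other element) is an assumption. Note that when $previous-letter is empty for the first item, the comparison is still true, so the first heading is printed:

    <xsl:if test="$current-letter != $previous-letter">
      <h2 id="{$current-letter}">
        <xsl:value-of select="$current-letter"/>
      </h2>
    </xsl:if>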

    Display the Journal’s Information

    The journals now need to be displayed. If there are a lot of journals, the description could be hidden; in the final UNC Libraries version, accordions are used.

    Lines 2-4 display the name and description and create a link for the database.

    Lines 7-9 call a template that will get the tags for the page. Here the $page-path parameter, which was defined in the first code example, is used again. This template is described next.

    Finally, lines 14 and 17 close open tags from the code snippet above.
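    A rough sketch of the output described above. The database-url and database-description parameter names, the markup, and the GetTagsForPage template name are all assumptions; only database-name appears in the post itself:

    <div class="journal">
      <h3>
        <a href="{$file-content/document/ouc:properties/parameter[@name='database-url']}">
          <xsl:value-of select="$file-content/document/ouc:properties/parameter[@name='database-name']"/>
        </a>
      </h3>
      <p>
        <xsl:value-of select="$file-content/document/ouc:properties/parameter[@name='database-description']"/>
      </p>
      <!-- Hypothetical template name; passes the data file's path so its tags can be listed -->
      <xsl:call-template name="GetTagsForPage">
        <xsl:with-param name="page-path" select="concat($data-folder, '/', $file-name)"/>
      </xsl:call-template>
    </div>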

    List the Subjects Associated with the Journal

    Below the link to the journal database, there will be a list of the subject tags that this journal is associated with. This will help with usability for the end users who may not be aware that the categories exist. These subject tags will link to the subject page so the user can explore more databases.

    $page-path is passed to the function, and on line 5 the tags associated with the data file are returned as the variable $page-tags. On line 6 the tags are looped over, and on lines 8-10 each tag is passed to an additional template that displays it.

    Bootstrap and Foundation have a class for tags called .label. This will be wrapped with an anchor to link to that subject page.

    The variable on line 6 removes the prefix of ‘library-database’, which was the naming convention used in the previous blog post. On line 9 an anchor is used to link to that subject page. This only works if the tag name and the subject file name are exactly the same. If the files are in a subfolder, that should be accounted for here.

    Line 12 replaces the underscore in a tag with a space. This naming convention allows for a tag with multiple words. As an example, library-database-hispanic_studies gets converted into hispanic studies but keeps the URL hispanic_studies.aspx.
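    A sketch of the tag-display template; the template name is hypothetical, and the underscore is replaced with a space to match the hispanic studies example above:

    <!-- Hypothetical template name -->
    <xsl:template name="DisplayTag">
      <xsl:param name="tag"/>
      <!-- Strip the 'library-database-' prefix to get the subject slug -->
      <xsl:variable name="subject" select="substring-after($tag, 'library-database-')"/>
      <a class="label" href="{concat($subject, '.aspx')}">
        <!-- Replace the underscore so multi-word tags read naturally; the link keeps the underscore -->
        <xsl:value-of select="replace($subject, '_', ' ')"/>
      </a>
    </xsl:template>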

    Thanks for Reading

    A simplified code example can be found on GitHub.

  • Pull in Files using XSL and Tag Management in OU Campus

    The Library website at the University of Northern Colorado has a list of academic journals that they subscribe to. This list is constantly changing and needs to be updated by Library staff. The academic journals are organized in an A-Z listing as well as organized by subject. Each journal can belong to a single subject or multiple subjects.

    If this was a SQL database there would be three tables. One table for the journals, a table for subjects and a table linking between the subjects and the journals.

    With OU Campus ‘tags’ can be added to the page to create the link between journals and subjects. On each subject page, the journals with that individual tag get pulled in.

    “Data Files”

    List of Datafiles

    Data files are standalone .pcf files that are similar to a database entry. Each file contains the information for a single journal entry: Journal Name, Description, URL and a dropdown for Active, which will tell the XSL if the data file should be pulled in or not. Line 9 defines the page type as library-database which will be used in case other OU users assign tags to their pages.

    Data File XSL

    In the XSL for the Data File, the tags that are associated with the page are looped and displayed:

    GetTags() returns a node set of the page’s tags; the name of each tag can then be displayed.

    The next step (it will take some time, but only has to be done once) is to create a data file for every library academic journal.

    Subject Tags

    The data files will be pulled in based on the tags that are associated with them. Each data file can have an unlimited number of tags associated with it, allowing it to belong to an unlimited number of subject areas. A tag needs to be created for each subject area. This may also take some time and could be done when the data file is being created. A naming convention is a good idea; for this demo, library-database-name is used.

    Assigning a Tag to a Data File

    Tags are stored in OU Campus and not with the individual .pcf file, so if you have cloned the repo you will need to add tags to each data file. It would also be a good idea to duplicate the data file so you have more than one page to pull in.

    Subject Pages

    Now that the data files have been created and assigned tags, they need to be pulled into the subject pages. The tag that will be pulled in will be the first tag associated with the subject page. For the demo, the subject page will pull in Accounting Journals. Attach the library-database-accounting tag to this subject page.

    Assign the tag to the Subject Page

    Below is the XSL for the subject pages. On line 4, the first tag that is associated with the page is assigned to the variable $page-tag. Then all pages with that tag are assigned to the variable $tag-select. Finally, the data files are looped over and the path of each data file is passed to a template to display the content:

    On line 16 of the previous code example, the data file’s path is passed to the function GetContentFromSingleDataFile. More code could be nested here, but breaking it up into multiple templates makes it easier to read and improves the organization and reusability of the code.

    GetContentFromSingleDataFile

    This code snippet takes the path as a parameter (line 3) and creates a variable of the full path to the file (line 6). Next, the content of the page is stored as a new variable (line 9). Lines 6 and 9 could be combined into a single variable.

    Line 12 checks to see if the page-type is a library-database file. This is important since anyone can assign tags to a page in OU Campus. By selecting only pages with the page type of library-database no renegade pages will be pulled in.

    Checking to make sure the Active dropdown is set to True (line 15) will allow end users to turn on and off databases as access is changed.

    Finally, the database name, description and link are output.
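    Pulling those pieces together, a sketch of GetContentFromSingleDataFile might look like the following. The active, database-url and database-description parameter names are assumptions; only database-name and the library-database page type appear in the post:

    <xsl:template name="GetContentFromSingleDataFile">
      <xsl:param name="path"/>
      <!-- Full path to the data file on the staging server -->
      <xsl:variable name="full-path" select="concat($ou:root, $ou:site, $path)"/>
      <xsl:variable name="page" select="doc($full-path)"/>
      <xsl:variable name="props" select="$page/document/ouc:properties"/>
      <!-- Only output library-database pages that are marked Active -->
      <xsl:if test="$props/parameter[@name='page-type'] = 'library-database'
                    and $props/parameter[@name='active'] = 'true'">
        <h3>
          <a href="{$props/parameter[@name='database-url']}">
            <xsl:value-of select="$props/parameter[@name='database-name']"/>
          </a>
        </h3>
        <p>
          <xsl:value-of select="$props/parameter[@name='database-description']"/>
        </p>
      </xsl:if>
    </xsl:template>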

    Wrap Up

    The final step would be to create subject pages for each subject. UNC has about 50 academic journal subjects and a page will be created for each one as well as a master A-Z listing. If there is interest, I can write up an article about the listing page which won’t use the tags since every journal will get pulled in.

    This is my first technical post so feedback is welcomed. The full source code is on GitHub.

  • Preventing users from adding JavaScript with XSL

    Our Content Management System, OmniUpdate, allows users to create reusable pieces of content called assets. As of this post, there are five different types of assets, and two of them allow the user to add source code: Web Content and Source Code.