Notes and presentations from the 9th Jisc Learning Analytics Network meeting at Exeter University

Originally posted on Effective Learning Analytics.


Our network meeting this week in Exeter was again fully booked; there does seem to be a growing interest in learning analytics in the UK. This was also a particularly informative meeting, I felt, and we were able to absorb a huge amount of expertise from our presenters.

Exeter University is ahead of much of the sector in this area, and has been preparing itself for learning analytics with some rigorous research activities as well as working on its data sources and technical infrastructure. We heard from three key members of the Exeter team.

Prof Wendy Robinson presenting

Prof Wendy Robinson, Academic Dean for Students was first (Slides ppt 420KB). Wendy introduced Exeter’s learning analytics project, which is led by academics in partnership with students and aims to help students monitor and benchmark their own performance. Enhancing the information available to academic staff on their students, and ascertaining the factors affecting student success are also key aims.

Wendy discussed the University’s “My Grades” feature of their student app, which shows module results to date, and is proving popular with students. There are plans to add more sophisticated functionality.

Joanne Smith from Psychology was next (Slides 1 ppt 733KB | Slides 2 ppt 525KB). Joanne had led a systematic review of the effectiveness of learning analytics interventions. This was a comprehensive and impressive bit of work. She and a colleague assessed peer reviewed studies as “strong”, “moderate” or “weak” based on various aspects of the methodology. Out of 547 publications identified only 20 were retained for inclusion in the analysis.

Joanne concluded from the literature review that the three key factors in predicting student “success” are: social/demographic factors, academic history and engagement (usually in a VLE). She also points to the evidence being limited due to the small number of research studies to date, and issues around the quality of some of them.

Hywel Williams showing where Exeter’s data sources for learning analytics are coming from (green), aren’t coming from (red) or will do in the future (amber).

Joanne’s colleague, Hywel Williams (Slides pdf 2.5MB) then gave another fascinating insight into research carried out with colleagues Carmel Kent and Chris Boulton at Exeter on the links between engagement and success. The data showed that use of the VLE at Exeter (and, he believed, by extension at other bricks-and-mortar universities) was not a good predictor of student success.

They had identified many potential data sources for learning analytics. They ruled out the use of social media, private email and WiFi records as overly-intrusive. Sources that could be used included lecture recording data, access to past papers, use of the iExeter student app, VLE access, module feedback forms and careers data. Future data sources included attendance data, IT logins etc, use of the online helpdesk and assignment / assessment submissions.

Hywel and colleagues found that predictors varied across disciplines and also between high and low performers.

Shri Footring made an amateur but acceptable recording of the Exeter team’s presentations using Periscope.

Paul Bailey and Rob Wyn Jones then gave an update on activities in the Jisc Learning Analytics Project (Slides pdf 2.8MB | Recording). The slides give a good summary of the current status of the project. Paul also demonstrated the Data Explorer tool, which has been developed by Jisc to enable institutions to carry out quick and easy analyses of the data they hold in the learning records warehouse.


Our afternoon sessions were on the theme of the connection between learning analytics and learning gain. We began with Dr Camille Kandiko Howson, Academic Head of Student Engagement at King's College London (Slides pdf 1.25MB).

Camille is an expert in learning gain and has been working across the 13 HEFCE-funded pilot projects in the area. She discussed the many different aspects of learning that can potentially be measured, such as intellectual skills, communication skills, interpersonal skills, vocational and employment preparedness and personal life quality enhancement. There are various ways to attempt to measure these, and different measures for students, subjects and institutions.

Dr Ian Scott from Oxford Brookes University was next (Slides pdf 5.8MB). He discussed the data sources in use at his university, and the ABC Learning Gains project, carried out with Surrey University and the Open University.

The project had encountered a number of ethical concerns, in particular around students opting in or out of the data collection. A literature review found that the concept of learning gain is mainly used to measure the effect of particular educational interventions. Research carried out by the project on Open University students showed that socio-demographic factors were the strongest predictors of variance in learning gain, in particular ethnicity and prior educational level. Meanwhile, for Business students engagement with the VLE correlated with higher learning gains, while for Arts students it was not related.

There is also a recording of Camille’s and Ian’s slot.

Our final presentation was by Dr John Whitmer, Director for Analytics & Research at Blackboard, who joined us by Skype from California. We last heard from John, getting on for two years ago now, at our network event at Nottingham Trent. He has been leading some fascinating research since then, examining data on large numbers of students and their use of the VLE, and exploring new ways of visualising the data (Slides to come). Recording

Our next session will be on 3rd May 2017, at the University of Strathclyde in Glasgow.


12 new learner stories now available

Originally posted on Jisc Digital Student.

The idea for the Digital Learner Stories came from feedback at a consultation event at the end of the Jisc Digital student: Skills sector study. Participants remarked that it would be useful to have real learner voices talking briefly about their digital experiences in various post-16 sectors. This would be a resource to help groups within institutions hear directly from learners and have better conversations about the role that technology plays in the present and in planning for the future: from teacher/lecturer training to library spaces to access to hardware and robust wifi.

It has been a joy to work on the Digital Learner Stories and hear twelve stories from learners across the sectors. They provide inspirational snapshots of their digital experiences in HE, FE and Skills: as an adult learner in an evening class, as an apprentice, as part-time and/or full-time learners/employees, learning in a physical classroom or online at a college, university or continuing education institution. As individuals talk about how technology supports or makes a difference to their studies, what comes through is a love of learning – formally and informally – and a realisation that digital opportunities loosen some of the constraints of traditional education. Some of the participants would have been unlikely to be able to study at all in an earlier era.

We are grateful to the staff who helped us find volunteers who were willing to share their digital stories. We are indebted to the 12 learners who gave their time to talk with us and to record a short video of their thoughts, sharing apps and enthusiasm for the role that digital technology plays in their lives. We hope that the stories and videos will stimulate discussion and promote individual and institutional reflection on the access and opportunities that are highlighted in these 12 stories.

Consent for learning analytics: some practical guidance for institutions

Originally posted on Effective Learning Analytics.

What information do students need about the use of their data for learning analytics? When should students be asked for their consent for this? How is it best to obtain that consent? What happens if a student wishes to opt out?

Consent continues to be one of the main concerns for universities and colleges when thinking about deploying learning analytics. We covered some of the issues in a podcast last year, but it has become clear that what institutions really need is concrete guidance on how to deal with these questions.

After talking to Andrew Cormack, Jisc Technologies’ Chief Regulatory Officer, I’ve put together the following guidance. This should not be taken as legal advice; we would welcome commentary from others who have been considering these issues.

The Data Protection Act
The UK Data Protection Act 1998 (DPA), based on the EU’s Data Protection Directive, has set the context for data collection and use from a legal perspective for nearly 20 years. Institutions should already have in place policies for processing the personal data of students, ensuring compliance with the DPA.

In order to process personal data, one or more conditions must be met. Obtaining the free, informed consent of the individual is one of these. However, there are two other conditions which may be relevant in the case of learning analytics. Processing can also be justified on the basis that:

  1. It is necessary in relation to a contract that the student has entered into, or
  2. The processing is assessed as being in the “legitimate interests” of the organisation

Taking a course can be regarded as the student entering a contract. To fulfil this contract, the university or college needs to process certain data, such as the student’s name, date of birth, address, and the modules they are taking. Particularly where modules are delivered digitally, activity records may be a necessary part of providing the contracted service, or kept as part of the organisation’s legitimate interest in ensuring that systems and resources are not misused. If, in addition, students are invited to submit records of the hours they spend studying, for example, this could be based on their free, informed consent.

When processing using legitimate interest as justification, the law provides additional protection by requiring that the interests of the institution must be balanced against any risk to the interests of the individual. Individuals may request an individual assessment of this balance (and exclusion of their data from processing) if their circumstances involve an increased risk. To satisfy this balancing test, processing should be designed to minimise the impact on individuals. Learning analytics may, for example, help to identify improvements that can be made to a course. That could be regarded as being in the legitimate interests of the organisation and to benefit both current and future cohorts without impacting the rights or freedoms of any individual, thus satisfying the balancing test.

European data protection regulators have explained that a single transaction may involve processing under several different conditions, and that trying to squeeze these into any single condition may actually weaken the protection of the individuals concerned. The data and processing involved in learning analytics may require using different conditions for the stages of data collection, data analysis, and individual intervention.

Sensitive personal data
Data which is of a more sensitive nature is defined separately in the law and is subject to additional protections. This includes attributes such as a person’s religion, ethnicity, health, trade union membership or political beliefs. Some of these may be irrelevant for learning analytics and can be ignored (or perhaps should be ignored from an ethical perspective). However, if it is identified that people from a particular ethnic group, for example, are at greater academic risk, then there is a strong argument that it could be justified to use that characteristic in the predictive models in order to target additional support more effectively.

In the case of sensitive data, the legitimate interests of the organisation cannot be used as justification for its processing. It is likely that the only justification for processing this data will be if the individual has given their explicit consent. The student should also be told exactly what the data will be used for: they have the right not to provide this data, or to have it excluded from any particular type of processing.

The forthcoming EU General Data Protection Regulation
The Data Protection Act 1998 and other national legislation in EU member states will be replaced imminently by the General Data Protection Regulation (GDPR). This will apply across the whole EU and will not be customised for individual countries as with the previous legislation. The UK Government has stated that it expects organisations to comply with the GDPR when it comes into force on 25th May 2018, irrespective of the UK’s plans to leave the EU.

The GDPR will continue to allow data processing to be carried out, as at present, on the basis of the legitimate interests of the organisation, or if it is necessary for the performance of a contract with the data subject.

Using consent as the basis for processing under the GDPR
However, if consent is used as the legal basis, the GDPR attaches new conditions. It requires clear, affirmative action – pre-ticked boxes, for example, would not be sufficient. A record must also be kept of how and when the consent was provided. In addition, students will have the right to withdraw their consent at any time. The GDPR also strongly disapproves of attempts to obtain “consent” as a condition of providing a service.

One challenge with using consent as the legal basis for learning analytics is that the request must make explicit to the student all the consequences of either providing or withholding their consent. This is perhaps not really feasible or fair when asking students to sign a document on the first day of their course.

A second issue with using consent as your basis for data collection is that – because of the requirement to fully explain the consequences – at the moment you request consent you freeze the activities to which that consent can apply. New types of analysis or intervention cannot be added if they were not envisaged at the time consent was obtained. Given the rate of development of learning analytics, this may prevent both organisations and students obtaining its full benefits.

A hybrid approach: using “legitimate interest” for analysis and consent for intervention
Andrew has argued in his paper “Downstream Consent: A Better Legal Framework for Big Data” that a hybrid approach is the best way forward. This considers collection, analysis and intervention as distinct stages under data protection law. We don’t need to request additional consent for most of the data collection if it is “data debris” which is being collected lawfully anyway e.g. the log files in a VLE. We can use legitimate interest as the justification for further analysis of the data, provided students are aware of this and it is done in ways that minimise the impact on individuals. This might include identifying groups or patterns of experience or behaviour that might benefit from a specific intervention or different treatment.

What we will need, though, is consent from students to intervene on the basis of these analytics, since here the intention is to maximise the (beneficial) effect on the individual. By postponing this request to the time when specific interventions are known, we will be in a much better position to explain to the student the consequences of granting or refusing their consent.
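To make this staged approach concrete, here is a minimal, purely illustrative sketch in Python of how an institution might record which legal basis it is relying on at each stage. The stage names and the basis_for helper are hypothetical and are not taken from any Jisc tool or legal text; this is a sketch of the idea, not legal advice or an implementation of it.

```python
# Purely illustrative: maps each learning analytics stage to the legal basis
# suggested by the hybrid approach described above. Not legal advice.
LEGAL_BASIS = {
    "collection": "contract / lawful operation of systems",  # e.g. VLE log files ("data debris")
    "analysis": "legitimate interest",                        # subject to the balancing test
    "intervention": "consent",                                # explicit, recorded, withdrawable
}


def basis_for(stage: str) -> str:
    """Return the assumed legal basis for a processing stage (hypothetical helper)."""
    try:
        return LEGAL_BASIS[stage]
    except KeyError:
        raise ValueError(f"Unknown processing stage: {stage!r}")


if __name__ == "__main__":
    for stage in ("collection", "analysis", "intervention"):
        print(f"{stage:>12}: {basis_for(stage)}")
```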

Over time, it seems likely that learning analytics processes currently regarded as add-ons will become an integral, normal and expected part of how education is conducted. I have argued this in a blog post entitled extreme learning analytics. If (or when) students contract with universities or colleges for a personally-tailored education, much of this collection, analysis and intervention will change from a legitimate interest of the organisation to a necessary part of its contract with the individual.

Note: There have been some suggestions that the Information Commissioner’s Office (ICO) may class universities and colleges as “public authorities” for the purposes of the GDPR, in which case they may be prohibited from using legitimate interests for some activities. If this were to occur, the alternative justification that those activities are “necessary for a task in the public interest” could be used, though this involves no balancing-of-interests test and so provides less protection for individuals and their data.

Privacy notice
Before any personal information is collected, organisations must inform individuals of all the purposes for which it will be used, who (if anyone) it may be disclosed to and the individual’s rights. These privacy notices (also known as fair processing notices) are required so that individuals know what processing will result from their decision, e.g. to become a student. The privacy notice is distinct from any request for consent – indeed it is required even when consent is not the basis for the processing. See the ICO Guide on privacy notices.

Institutions should already have a privacy notice, which will describe the data required for a student to study there. Text should be added to this which explains what additional purposes learning analytics may be used for – for example to improve the provision of education and to offer personalised recommendations – and declares these as secondary purposes of processing.

The notice may refer to more detailed policy documents such as an institutional learning analytics policy and student guide – see examples. These should explain the measures taken to protect students, staff and their data, and the circumstances in which consent will be sought.

Requesting consent
Unlike the collection and processing of data, taking interventions with students on the basis of the analytics will require their explicit consent. A common example would be for a personal tutor to contact a student if it appears that they are unlikely to pass a module – in order to see if anything can be done to help. Students could be enabled to opt-in to such interventions via a web-based form, or to refuse interventions at the time they are offered. They will also need to have the opportunity of opting out subsequently if they change their minds. The consequences of opting in or out must be explained to them.
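As a purely hypothetical illustration of the record-keeping this implies (the GDPR requires a record of how and when consent was given, and students must be able to withdraw it later), a consent log entry might look something like the sketch below. The field names are invented for illustration and do not come from any actual Jisc or institutional system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class InterventionConsent:
    """One student's consent decision for a named type of intervention (illustrative only)."""
    student_id: str
    intervention_type: str                   # e.g. "tutor contact if failure is predicted"
    granted: bool                            # True = opted in, False = opted out / refused
    recorded_at: datetime                    # when the decision was captured
    method: str = "web form"                 # how the decision was captured
    withdrawn_at: Optional[datetime] = None  # set if the student later changes their mind

    def withdraw(self) -> None:
        """Record a withdrawal of consent, which the GDPR allows at any time."""
        self.granted = False
        self.withdrawn_at = datetime.now(timezone.utc)


# Example: a student opts in via a web form, then later withdraws.
consent = InterventionConsent(
    student_id="s1234567",
    intervention_type="tutor contact if failure is predicted",
    granted=True,
    recorded_at=datetime.now(timezone.utc),
)
consent.withdraw()
```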

Conclusion
Institutions must inform students and staff of the personal information being collected about them and the purposes, including any results of learning analytics, for which it may be used. Provided data collection and analysis are done in ways that minimise the risk of impact on individuals, however, it may not be necessary to obtain individual consent for these stages. Indeed relying on consent risks leaving holes in the dataset and students missing out on the benefits of the analytics for their learning, thus potentially disadvantaging them.

Under both the DPA and the GDPR data collection and processing can often be justified, and the interests of institutions and students better protected, by other legal grounds, such as legitimate interest. Provided institutions ensure that any remaining risk is justified by the benefits to the organisation and its members, this will enable a range of learning analytics to take place, e.g. identifying problems with modules.

Using sensitive data or taking personalised interventions with learners on the basis of the analytics will require their explicit consent. The student should be enabled to opt in to the type of intervention(s) they would prefer, and subsequently to opt out again, or refuse individual interventions, if they wish.

News from the Building digital capability project team

Originally posted on Jisc digital capability codesign challenge blog.

Although it has been quiet on the blog recently, we have been busy behind the scenes with some new developments. We will be starting a series of blog posts in March 2017 to launch a suite of resources to support colleges and universities with the development of the digital capability of their staff and students. So bookmark this site and look out for the series of blog posts from Helen Beetham over the coming weeks. We are pleased to be presenting on this work at DigiFest, together with colleges and universities who are taking forward their developments on digital capabilities.

Piloting the discovery tool

We are delighted to be working with 14 institutions on a closed pilot of a beta version of our Discovery tool aligned to the digital capability framework.

Discovery tool

The tool has been designed to support individuals and managers in a range of roles by helping them to identify and reflect on their current digital capability and make plans to improve their capability through a set of recommended actions and resources.

The following institutions are working with us over the next 6 months to pilot the discovery tool and our wider set of digital capability resources:

  • Coleg Y Cymoedd
  • Derwentside College
  • Hartpury College
  • North Lindsey College
  • Hull College Group
  • School of Pharmacy, Cardiff University
  • University of Derby
  • University of East London
  • Glasgow Caledonian University
  • University of Hertfordshire
  • University of Hull
  • Institute of Education, University of Reading
  • The Open University
  • University of Southampton

The findings from the pilot will inform the further development of the discovery tool, which will move to a more sustainable platform for the roll-out of an open pilot in autumn 2017.

Developing organisational approaches to digital capability

6 Elements of digital capabilities model

In March we will be launching a suite of resources to support colleges and universities with the development of the digital capability of their staff and students. We are creating an online guide on ‘Developing organisational approaches to digital capability’, authored by Clare Killen and Helen Beetham, which will be launched in late March. The online guide aims to support organisational leads with responsibility for developing staff and student digital capabilities in FE and HE. It offers a structured approach showing how our digital capability framework can be used alongside a suite of tools and resources to help you build a contextualised model for developing digital capability in your organisation.

The guide will link through to the following resources which have all been updated following feedback from an extensive consultation with practitioners and managers across further and higher education:

  • Updated digital capability framework
  • Organisational lens on the digital capability framework – guidance on how to approach digital capability across four key areas within an educational organisation: teaching, research, content and communications.
  • Strategic steps towards organisational digital capability – a 4 step model
  • An audit tool and checklist – a valuable starting point for conversations within the organisation
  • Seven digital capability ‘profiles’ outlining the digital capabilities required by different roles, including HE and FE teacher, learner, library and information professional, learning technologist, researcher and leader
  • Series of case studies highlighting how universities and colleges are developing staff digital capability

These resources will all be published in March and linked from the Building digital capability project page with supporting blog posts here.

If you have any queries please contact us at digitalcapability@jisc.ac.uk

We look forward to your feedback on these forthcoming resources.

Lisa Gray, Heather Price and Sarah Knight

Show me the evidence…

Originally posted on e-Learning Stuff.

I think this line is really interesting from a recent discussion on the ALT Members mailing list.

…in particular to share these with academics when they ask for the evidence to show technology can make a difference.

When demonstrating the potential of TEL and learning technologies to academics, the issue of evidence of impact often arises.

You will have a conversation which focuses on the technology and then the academic or teacher asks for evidence of the impact of that technology.

In my experience, when an academic asks for the evidence, the problem is usually not a lack of evidence but something else.

Yes, there are academics who will respond positively when shown the “evidence”. However, experience has taught me that even when that happens there is then another reason, problem or missing piece of evidence that means the academic will still not start to use technology to “make a difference”.

When an academic asks “for the evidence to show technology can make a difference” the problem is not the lack of evidence, but one of resistance to change, fear, culture, rhetoric and motivation.

You really need to solve those issues, rather than find the “evidence”, because even if you find the evidence you will then get further responses such as: it wouldn’t work with my students, it’s not appropriate for my subject, it wouldn’t work here, it’s not quite the same, it’s not transferable, and so on.

Despite years of “evidence” published in a range of journals and case studies from Jisc and others, you will find that whatever evidence you “provide” it won’t be good enough to persuade that academic to start embedding that technology into their practice.

As stated before, when someone asks for the “evidence”, more often than not this is a stalling tactic so that they don’t have to invest the time, energy and resources into using that technology.

Sometimes it can be “fear”: they really don’t have the capabilities to use technology and lack the basic ICT confidence to use various learning technologies, and rather than own up to their lack of skills they ask for the “evidence”, again to delay things.

Turn it around: when you ask those academics who do use technology, you find that the “evidence” generally plays little or no part in their decisions to make effective use of it.

So what solutions are there to solve this issue? Well we need to think about the actual problems.

A lot of people do like things to remain as they are: they like their patterns of work, and they like to do what they’ve always done. This is sometimes called resistance to change, but I think it’s less resistance to change and more sticking to what I know. I know what works, it works for me, and anything else would require effort. This strikes me as being more about culture: a culture where improvement, efficiency and effectiveness are seen as unimportant and the status quo is rarely challenged.

Unless an organisation is focused, strategically and operationally, on improvement, widening participation and becoming more efficient, it is hard to get people to think about changing their practice.

When it comes to embedding learning technologies we often talk about changing the culture of an organisation. This can be hard, but it doesn’t necessarily have to be slow. I am reminded, though, of a conversation with Lawrie Phipps in which he said we have to remember that academics often like the current culture; it’s why they work in that place and in that job. So don’t be surprised when you are met with resistance!

Creating a culture which reflects experimentation, builds curiosity and rewards innovation isn’t easy, but it also isn’t impossible. There are various ways in which this can be done, but one lesson I have learnt in making it happen is that the process needs to be holistic and the whole organisation needs to embrace the need to change the culture. What I have found is that you need to identify the key stakeholders in the organisation, the ones who actually have the power to make change happen. I found in one college I worked in that the real “power” wasn’t with the Senior Leadership Team (who often had the same frustrations I had when it came to change) but with the Heads of Faculty, the managers who led and managed the curriculum leaders. They had the power to make things happen, but they didn’t always realise they held that power.

Getting the rhetoric right, and understood across the organisation, is critical for success in embedding learning technologies. Often messages are “broadcast” across an organisation, but staff don’t really understand what is meant by them and many staff don’t think they apply to them. Getting a shared understanding of what is required from a key strategic objective is challenging. I have done this exercise a few times and it works quite well: pick a phrase from your strategic objectives and ask a room of staff or managers what it means and to write it down individually. You will find that everyone usually has a different understanding of what it means. A couple of examples to try include buzz phrases such as “the digital university” and “embrace technology”.

Finally, let’s look at what motivates people to use technology to improve teaching, learning and assessment.

When I was teaching, I would often experiment with technology to see if it made a difference: if it did, I adopted it; if it didn’t, I stopped using it. The impact on the learners was minimal, as I didn’t continue to use technology that made no difference or was even having a negative impact. I also applied the same process and logic to all my teaching. So when I created games to demonstrate various economic processes, if they made a difference I used them again, and if they didn’t I would ask the learners how they would change or improve them. When I gave out a reading list of books, I would ask the learners for their feedback, and those books that didn’t make a difference or had no positive impact would be removed from the list! I was personally motivated, but we know you can’t just make that happen.

When I was managing a team I ensured that any experimentation or innovation was part of their annual objectives and created SMART actions that would ensure they would be “motivated” to do this. Again you need to identify the key stakeholders in the organisation, the ones who actually have the power to make this happen.

So when someone asks you to show them the evidence what do you do?

Seven weeks to go…

Originally posted on Jisc Digital Student.

There are just over seven weeks to go before we close the door on data collection for this pilot version of the Digital Student Experience Tracker.


There have been a few questions about this deadline, so I’ll try to answer them here. First, though, it’s important to say that seven weeks is a long time! It is absolutely not too late to launch the Tracker and collect a useful body of data from your learners. We only recommend a window of about two weeks for the survey anyway – unless you have a strategy for reviving interest, for example around a specific initiative or event. So even if you are further behind than you hoped to be, there is still plenty of time to hit that deadline.

Why do we need a deadline at all? Although it’s hard sometimes to remember, this is actually a pilot process. We are learning about the Tracker: how best to run it, how best to support it, and what value it can offer. So we need a deadline for closing the pilot, analysing the data, and asking you how it has gone. We already have lots of ideas for improvements, for example around greater flexibility in the questions asked, better integration with other data sources and so on. Unless we take time to really consider the evidence, we won’t be able to make the changes you want and (hopefully!) launch an improved Tracker service later in the year.

We are very grateful to everyone who has been part of the process. Even if you find that you run out of time in the pilot phase, we hope that the thinking and planning has been useful – for example has helped you to identify champions and to raise awareness. Perhaps you are in a better place to run the Tracker in the future, or perhaps you have decided that it’s not for you. We will still want to ask you about your experience – as far as we are concerned, it is all learning.

Why did we choose this particular time? The November to March window was the most popular with our first pilot phase institutions when we polled them (there is an FAQ about timing which explains this). It has proved not to be ideal for our Australian colleagues due to their different academic year. And if you planned to include final year HE students in your population – and missed earlier opportunities to launch the survey – then these last seven weeks are not ideal because of the NSS. But people are finding ways around these problems. Running right up to the deadline, targeting students that are not involved in other surveys, using selective rather than random sampling – these are all options that you have.

How about strategies for encouraging more participants to complete the Tracker in the time available? Here are some fantastic posters from Vikki and the team at Epping Forest College. They’re using QR codes so learners can go straight to the survey, and they’ve also been using differential communications to reach different groups of learners – helping them to achieve over 200 responses in just a few days.
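If you would like to try the QR-code idea yourself, a minimal sketch using the third-party Python qrcode package (installed with Pillow support) is shown below; the survey URL is a placeholder, not the real Tracker link.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# Placeholder URL - substitute your institution's actual Tracker survey link.
survey_url = "https://example.ac.uk/digital-tracker-survey"

# Generate a QR code image that can be dropped onto a poster or flyer.
img = qrcode.make(survey_url)
img.save("tracker_poster_qr.png")
print("Saved QR code pointing at", survey_url)
```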

There are tips for engaging FE, HE and online learners in our Guide to Engaging Learners and some real-world experiences in our Tracker case studies. For example:

  • Make sure you are reaching learners through different channels – posters, flyers, email, social media, texts, via the student desktop or VLE or app.
  • Tell learners about something specific you are already doing to improve their digital experience, or (if you are starting to collect data) something specific you have already found out. This encourages them to focus on positive outcomes and to think that their opinion will really make a difference.
  • Emphasise that the survey is quick and easy to complete.
  • Generate some real-world activity e.g. with a launch event, or focus on live completions, with helpers taking the survey out into student areas on mobile devices.
  • Ask learners to help you design communications about the Tracker, e.g. short video, infographic.
  • Ask sympathetic staff to allow learners to complete the survey live at the end of taught sessions.
  • Emphasise the benefits to students of an enhanced digital environment and employability skills.

Finally, I recently ran a Q and A session for our Australia/New Zealand pilot sites. I’ve uploaded the slides here in case they are of interest (they may be a bit slow to load).

Happy Tracking!


Satellite tagging saving wildlife

Originally posted on lawrie : converged.

There are times when my role in education technology and my personal life as a conservationist and Environmental Science graduate feel poles apart. But occasionally I am reminded how the affordances of technology impact across so many aspects of our lives, and how the advances being made can benefit things that we are passionate about, in my case conservation, and especially when it relates to birds.

The ability to satellite track birds has helped conservationists understand many elements of bird behaviour, such as migration. In the early days of this type of technology it was prohibitively expensive, accessible only to a few large research projects. But as the technology advances and becomes both more affordable and more reliable, it starts to pervade other areas of conservation and more people can get involved.

So how does satellite tagging work? There are various types of satellite tags. Most weigh between 10 and 25 grams and are placed on the bird’s body, often further back on the body using a harness. Birds of prey often double their weight before migrating or moving elsewhere, so these harnesses must accommodate the weight gain without falling off during flight.

After a bird has been tagged, the transmitter sends signals up to a satellite, and the information is relayed back down to conservationists. Researchers can learn various kinds of information about birds through this technology, including activity and inactivity, location, speed, temperature, and migration routes. All of this helps conservationists understand how the bird fares in the sky and where the bird ends up or dies.
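For a rough sense of the kind of data involved, the following is a minimal, hypothetical sketch of how a single transmission might be represented and screened for prolonged inactivity. The field names and the 48-hour threshold are invented for illustration and do not correspond to any particular tag manufacturer’s data format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class TagFix:
    """One satellite transmission from a tagged bird (illustrative fields only)."""
    timestamp: datetime
    latitude: float
    longitude: float
    speed_kmh: float
    temperature_c: float
    active: bool  # reading from the tag's movement/activity sensor


def flag_possible_mortality(fixes: List[TagFix], hours: int = 48) -> bool:
    """Return True if every fix in the most recent `hours` hours reports no activity.

    A sustained run of inactive fixes is the sort of pattern that might prompt
    fieldworkers to locate the tag and find out what happened to the bird.
    """
    if not fixes:
        return False
    latest = max(f.timestamp for f in fixes)
    window = [f for f in fixes if latest - f.timestamp <= timedelta(hours=hours)]
    return bool(window) and all(not f.active for f in window)
```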

diagram of satellite tags

an illegally shot kestrel (Malta)

If you have read any of my posts involving birds on this site you will know I have strong opinions about some aspects of conservation. One area that I am particularly interested in is the prevention of wildlife crime. Persecution of birds of prey in the UK is a particular problem. Iconic species such as the Hen Harrier are practically extinct in England because of illegal killing, but many other species are also regularly persecuted. As someone involved in volunteering and charity work around wildlife, I was approached by a team looking for help in raising funds for a specific project that aims to provide evidence about the extent of illegal persecution. I was happy to get involved. This is a project that will be able to find any of the tagged birds that have died: because of the tags they can locate them and find out how they died, and whether it was a crime.


But this project also got me thinking about the digital capabilities of the various stakeholders involved and how those have changed.

Successful researchers in this field communicate with a huge range of people (some might say stakeholders). They need to be able to use specialist software packages that communicate with the satellite data, plus a host of other packages needed to integrate across systems, including GPS and statistical software. There is a hardware capability too – satellite tags are not plug and play (yet); you don’t just switch one on and connect it as if it were a USB dongle. There is also a lot of data security and protection work to go through for this kind of project.

But to get to that stage we needed to raise funds. I’ve been helping the researchers use Facebook, Twitter and JustGiving, a crowdfunding site, to raise the funds and to keep people up to date on the project. We also set up some project tools for communication and will be looking at other tools as the project develops.

When I left conservation work and took on my current job, in the late 90s, I thought I would never be able to engage in this kind of research. The changes in the technology (for example satellite tagging) have made the cost of entry much lower for researchers. And communication tools, social media and the ability to campaign through crowdfunding, Facebook and Twitter mean that the research can be inclusive and accessible, giving individuals the opportunity to ask questions and stay up to date. Whilst most researchers in academic departments don’t require crowdfunding for their work, maybe there are lessons to be drawn from the public engagement of this type of “citizen-funded research”.

If you are interested, and the project has yet to reach its goal, check out the funding page.

Finally, and as an aside, looking at how satellite technology works in monitoring birds has got me thinking about how I can do something to monitor birds in my own garden using the technology – so watch this space as I learn to do some coding and hardware building.

Incoming! Student expectations of technology

Originally posted on Inspiring learning.

young student using a macbook and iphone

Image: http://unsplash.com CC0

In which I spin out an anecdote about my kids to make a bigger point about institutional uses of technology.

Over the last few weeks I’ve been talking to people about new students’ expectations of the use of technology. The conversations were mostly along the lines of: how do further and higher education institutions address the fact that many schools offer quite a sophisticated digital experience, which is not always easy to match in post-compulsory education?

We successfully avoided the “digital natives” trap in these exchanges, but it did make me reflect on my own children’s experiences of technology. My daughter is in year 7 and my son is in year 5 at a well-resourced local state school. As you’d expect, their home environment has an above-average amount of tech in it, so let me acknowledge that this is the experience of middle-class, European privilege, not a universal one.

Truth be told, the way technology is used at their school is restricted to certain pockets of practice, usually subject-related such as in Design Technology or ICT. The Maths teachers use My Maths for some homework activities which both of them enjoy (I’m slightly jealous ‘cos that looks fun) but there’s no VLE or eportfolio, no digital badges or “digital by default” approaches.

The really interesting stuff they get up to happens elsewhere. Three examples:

My daughter had some paired homework to do with a friend who lives half a mile away. I’d been mentally preparing myself for contacting parents to arrange time for them to do it. In the meantime homework was done. The two of them had managed the whole thing via Facetime without any prompting. It was just the easiest, most logical way of achieving what they had to do.

A few weeks back, my son had to draw a poster of the solar system. For a while I thought he was talking to himself, until I realised he was using my work iPad to ask Siri for the information he needed. It wasn’t the fact that he was getting the information from the web that struck me, just his choice of interface. I’ve been a bit sniffy about voice interaction and virtual assistants in the past, but here he was using this as his first port of call (and then using Google to fill in the blanks that Siri couldn’t quite manage).

The last example was from yesterday evening, when my son was showing my wife and me a channel he’d discovered on YouTube called Map Men. Seriously, have a look – it’s hilarious, and for a geographer like me it was a real “that’s m’boy” moment. I have the occasional feeling of guilt about how much screen time we allow our two to have, but seeing him using YouTube to augment the things he’s learning in school is pretty cool. He’s also using it to teach himself electronics, making automated machines and traffic lights in Scrap Mechanic.

So what?

I offer these examples not to brag about my kids…OK maybe I am a bit…more as an example of how technology figures in the lives of the two young people I know best, the sort of people who are likely to end up continuing education beyond school and into the types of organisation I now support.

I think a lot of this could be considered post-digital behaviour. I don’t believe either of my kids were thinking “I know, I’ll use technology to fix this problem”. These were just the tools that they had to hand to fix a problem and I’m not sure they could really grasp why their Dad was talking about it so much afterwards!

Also, none of these examples had anything to do with institutional uses of technology. None of it involved a VLE and it wasn’t “ICT” homework. Nobody had to sit down with them and “train” them, and it wasn’t anything that I suggested they do. This is practice they have developed for themselves, and these approaches are likely to characterise their use of technology in learning.

Linking to CoDesign 2017

Jisc is shortly going to announce the areas of research and development the education community would most like to see us pursue over the next few years, called CoDesign Challenges. One of the possible areas relates to the next generation of learning environments. There are questions in this area about how students’ use of technology can best be accommodated in the future.

It’s not that I think there’s no place for big organisational things like VLEs, but we’ll need to give serious consideration to how well these systems can accommodate the more flexible and varied practices and technologies learners bring with them. There’s a challenge here for people responsible for infrastructure and system design, as well as for the capabilities of teaching staff.

The cost of not taking it seriously is a student experience that doesn’t reflect a crucial aspect of western society and that fails to capitalise on its benefits while helping to address its problems.

Just don’t mention digital natives!


Student Digital Tracker February Update

Originally posted on Jisc Digital Student.

Our student digital experience tracker is well underway, with many institutions having launched their surveys and busy gathering student responses. With many colleges and universities designating February as ‘Digital February’, the month in which they will promote the tracker to their students, we thought it would be timely to offer an update on what we know to date.

Here is a breakdown of participating institutions:
• 58 FE colleges and 3 Sixth Form colleges
• 8 Skills Providers
• 6 Adult and community learning providers
• 51 UK Universities
• 5 Welsh institutions
• 7 international universities from New Zealand, Australia and South Africa, of which one South African university is partnering with 5 other South African universities

In total we have 181 Tracker surveys built in BOS, used by 135 institutions. To date we have gathered a total of 7,179 student responses. We look forward to further data coming in over the next two months until we close the surveys on 31st March. On 4th April, pilot institutions will be able to view their data benchmarked against their sector grouping. We will then start our analysis of the data and will produce a detailed report in May 2017.

Look out for Helen Beetham’s blog post which will have further updates on approaches pilot institutions are using to engage their students in the Tracker surveys.

Further information on the Tracker and the pilot is available here.