I recently presented on small data for my mobile health class. I have posted my slides here. I would be happy to receive your thoughts and comments:
Using Machine Learning to Help Manage Diabetes
I participated in the PennApps hackathon in Philadelphia this weekend. While most of the city was hit by a bad snowstorm, a group of hackers holed up inside the Penn engineering buildings to work on some cool hacks. My team of four, which included three other hackers (Daniel, Alex and Madhur), decided to work on an app that could predict the blood glucose levels of diabetes patients using machine learning models.
We used the OneTouch Reveal API to gather data provided by Johnson & Johnson, the manufacturers of OneTouch glucose monitors for diabetes patients. They also give their patients an app for tagging events such as exercise (light, moderate, heavy, etc.), meals, and insulin doses (of different kinds: fast-acting, before/after meals, etc.). Our team thought it might be a good idea to hack on this dataset to find out whether we could predict patients’ glucose levels without them having to prick their fingers. A real-world use case for this app would be to alert patients when we predict unusual glucose levels, or to have them do an actual blood test when the confidence in our predictions is low.
We observed mixed results for the patients in our dataset. We did reasonably well for those with more data, but others had too few data points for good predictions. We also saw that our predictions became more precise as we considered more data. Another issue was that the OneTouch API did not provide much detail about food and exercise events for any of the patients, since most events carried no additional tags. As a result, these features had little influence on our models.
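For readers curious about what such a predictor might look like, here is a minimal sketch (not our hackathon code) that trains a regressor on a synthetic feature table; the feature columns, numbers, and the 70-180 mg/dL range in the comments are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a feature table built from glucose readings and
# event tags. Hypothetical columns: last three readings (mg/dL), hours since
# last meal, hours since last insulin dose, and exercise level (0-3).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(140, 40, (n, 3)),   # previous readings
    rng.uniform(0, 8, n),          # hours since last meal
    rng.uniform(0, 12, n),         # hours since last insulin dose
    rng.integers(0, 4, n),         # exercise level
])
# Toy target: roughly the latest reading, nudged by meals, insulin and exercise.
y = X[:, 2] + 25 * np.exp(-X[:, 3]) - 15 * np.exp(-X[:, 4]) - 5 * X[:, 5] \
    + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE (mg/dL):", round(mean_absolute_error(y_test, pred), 1))

# A deployed app could alert the patient when a predicted value falls outside
# a typical range (e.g. 70-180 mg/dL) or ask for a finger-stick test when the
# model's confidence is low.
```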
We believe that in the near future it will be common for patients to have such monitors communicate with other wearable sensors, such as smart watches. Such systems would be able to provide ample information about one’s physical activity and make more meaningful predictions possible. Here’s a video demonstrating our proof-of-concept:
Interactive Natural Language Processing for Legal Text
Update: We received the best student paper award for our paper at JURIX’15!
In an earlier post, I talked about my work on Natural Language Processing in the clinical domain. The main idea behind the project is to enable domain experts to build machine learning models for analyzing text. We do this by designing usable NLP tools that do not require sending datasets off to machine learning experts or understanding the inner workings of the algorithms. The post also features a demo video of the prototype tool that we have built.
I was presenting this work at my program’s bi-weekly meetings when Jaromir, a fellow ISP graduate student, pointed out that such an approach could be useful for his work as well. Jaromir also holds a degree in Law and works on building AI systems for legal applications. As a result, we ended up collaborating on a project that applies the approach to statutory analysis. While the main topic of the project is the framework in which a human expert cooperates with a machine learning text classification algorithm, we also ended up augmenting our approach with a new way of capturing and re-using knowledge. In our tool, datasets and models are treated separately and are not tied together. So if you have built a classification model for, say, statutes from the state of Alaska, you need not start from scratch when you later need to analyze laws from Kansas. This gives us a better starting point on all the performance measures and lets us build a model using fewer training examples.
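As a rough sketch of this dataset/model separation (this is not our tool; the snippets, label names, and classifier choice are all made up for illustration), a stateless feature representation lets the same classifier keep learning on a new state’s statutes instead of starting over:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Tiny placeholder "statutes"; in practice these would be real provisions.
alaska_texts = ["A person who operates a vehicle shall ...",
                "The department may adopt regulations to ..."]
alaska_labels = ["duty", "power"]
kansas_texts = ["The secretary may adopt rules and regulations to ..."]
kansas_labels = ["power"]

# HashingVectorizer is stateless, so Alaska and Kansas documents share the
# same feature space without refitting a vocabulary.
vec = HashingVectorizer(n_features=2**18, ngram_range=(1, 2))
clf = SGDClassifier(loss="log_loss")

# 1) Train on the annotated Alaska statutes.
clf.partial_fit(vec.transform(alaska_texts), alaska_labels,
                classes=["duty", "power"])

# 2) Later, continue training the *same* model on a few annotated Kansas
#    provisions instead of starting from scratch.
clf.partial_fit(vec.transform(kansas_texts), kansas_labels)

print(clf.predict(vec.transform(["The agency may issue permits ..."])))
```

The point of the sketch is only that the model is not tied to any single dataset, so the second state starts from a better place and needs fewer new annotations.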
We will be presenting this work at JURIX’15, the 28th edition of the conference on legal knowledge and information systems. Previously, we had presented portions of this work at the AMIA Summit on Clinical Research Informatics and at the ACM IUI Workshop on Visual Text Analytics.
Machines learn to play Tabla
Update: This post now has a Part 2.
If you follow machine learning topics in the news, I am sure that by now you have come across Andrej Karpathy's blog post on The Unreasonable Effectiveness of Recurrent Neural Networks.[1] Apart from the post itself, I have found it fascinating to read about the diverse applications that its readers have found for it. Since then I have spent several hours hacking with different machine learning models to compose tabla rhythms:
Inspired by @seaandsailor, used @karpathy's char-rnn to make a tabla rhythm https://t.co/kqzZG3q2A2 Amazed how well it learnt on small data
— Gaurav Trivedi (@trivedigaurav) May 26, 2015
Although tabla does not have a standardized musical notation that is accepted by all, it does have a language based on bols: spoken syllables (the word literally means "speak" in Indian languages) that represent the sounds of the strokes played on the drum. These bols can be expressed in written form, and when pronounced they mimic the sound of the drum. For example, the theka for Teental, the commonly used 16-beat cycle, is written as follows:
Dha | Dhin | Dhin | Dha | Dha | Dhin | Dhin | Dha
Dha | Tin | Tin | Ta | Ta | Dhin | Dhin | Dha
For this task, I made use of Abhijit Patait's software, TaalMala, which provides a GUI environment for composing tabla rhythms in this language. The bols can then be synthesized to produce the sound of the drum. In his software, Abhijit extended the tabla language to make it easier to compose rhythms: square brackets after a bol specify the number of beats within which it must be played, and '+' symbols add emphasis to a bol, increasing its intensity in the synthesized sound. Variations of the standard bols can be defined as well, based on the different hand strokes used:
Dha1 = Na + First Closed then Open Ge
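To make the notation concrete, here is a small, hypothetical parser for strings written in this style; the assumption that a bracketed duration carries over to the following bols until the next bracket appears is mine, based on the samples below:

```python
import re

# One bol name, optional '+' accents, and an optional duration in brackets.
BOL = re.compile(r"(?P<name>[A-Za-z]+\d*)(?P<accent>\+*)\s*(?:\[(?P<beats>[\d.]+)\])?")

def parse_bols(text, default_beats=0.25):
    """Parse a '|'-separated TaalMala-style string into (bol, beats, accents)."""
    beats = default_beats
    parsed = []
    for token in text.split("|"):
        token = token.strip()
        if not token:
            continue
        m = BOL.fullmatch(token)
        if m is None:
            raise ValueError(f"Unrecognized bol: {token!r}")
        if m["beats"]:                  # a new duration was given...
            beats = float(m["beats"])   # ...and (assumed) carries forward
        parsed.append((m["name"], beats, len(m["accent"])))
    return parsed

print(parse_bols("Ti [0.50] | Ra | Ki | Te | Dha+ [0.25] | Tun"))
# [('Ti', 0.5, 0), ('Ra', 0.5, 0), ('Ki', 0.5, 0), ('Te', 0.5, 0),
#  ('Dha', 0.25, 1), ('Tun', 0.25, 0)]
```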
Now that we are armed with this background knowledge, it is easy to see how we may attempt to model tabla as a language using Natural Language Processing techniques. Predictive modeling of tabla has been explored previously in "N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition" (Avinash Sastry, 2011). However, I was not able to get access to the datasets used in that study and had to rely on the compositions that came with the TaalMala software.[2] This is a much smaller database than what you would otherwise use to train a neural network: it comprises 207 rhythms with 6,840 bols in all. I trained a char-rnn and sampled some compositions after priming it with different seed text such as "Dha", "Na", etc. Given below is a minute-long composition sampled from my network. We can see that not only has the network learned the TaalMala notation, but it has also picked up some common phrases used in compositions, such as the occurrence of "TiRa KiTa" and repetitions of "Tun Na":
Ti [0.50] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki | Ta | Tun [0.50] | Na | Dhin | Na | Tun | Na | Tun | Na | Dha | Dhet | Dha | Dhet | Dha | Dha | Tun | Na | Dha | Tun | Na | Ti | Na | Dha | Ti | Te | Ki | Ti | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Dhin | Dhin | Dha | Ge | Ne | Dha | Dha | Tun | Na | Ti [0.25] | Ra | Ki | Ta | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [1.00] | Ti | Dha | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Dhet | Dhin | Dha | Tun | Na | Ti [0.25] | Ra | Ki | Ta | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Ti | Ka | Tra [0.50] | Ti | Ti | Te | Na [0.50] | Ki [0.50] | Dhin [0.13] | Ta | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na [0.25] | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Dha [0.34] | Ti [0.25] | Ra | Ki | Ta | Tra | Ka | Tra [0.50] | Ki [0.50] | Tun [0.50] | Dha [0.50] | Ti [0.25] | Ra | Ki | Ta | Tra | Ka | Ta | Te | Ti | Ta | Kat | Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Dhin | Dhin | Dhin | Dha | Tun | Na | Ti | Na | Ki | Ta | Dha [0.50] | Dha | Ti [0.50] | Ra | Ki | Te | Tun [0.50] | Tra [0.25] | Ti [0.25] | Ra | Ki | Te | Tun | Ka | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ki [0.25] | Ti | Dha | Ti | Ta | Dha | Ti | Dha [0.50] | Ti | Na | Dha | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Dha [0.50] | Ti [0.50] | Ra | Ki | Te | Tun [0.50] | Na | Ki [0.25] | Te | Dha | Ki | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Tun | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na | Ti [0.25] | Te | Dha | Ki [0.25] | Te | Ki | Te | Dhin [0.50] | Dhin | Dhin | Dhin | Dha | Dha | Tun | Na | Na | Na | Ti [0.25] | Ra | Ki | Ta | Ta | Ka | Dhe [0.50] | Ti [0.25] | Ra | Ki | Te | Ti | Re | Ki | Te | Dha [0.50] | Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Ti | Te | Ti | Te | Ti | Te | Dha [0.50] | Ti [0.25] | Te | Ra | Ki | Te | Dha [0.50] | Ki | Te | Dha | Ti [0.25]
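If you want to try something similar yourself, here is a minimal character-level LSTM language model in PyTorch, a sketch rather than the original char-rnn code; the corpus file name, hyperparameters, and seed text are all assumptions:

```python
import torch
import torch.nn as nn

text = open("tabla_bols.txt").read()          # assumed corpus of TaalMala bols
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch = 64, 32

for step in range(2000):
    # Random minibatch of character windows; targets are shifted by one.
    ix = torch.randint(0, len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[i:i + seq_len] for i in ix])
    y = torch.stack([data[i + 1:i + seq_len + 1] for i in ix])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

def sample(seed="Dha ", n=400, temp=0.8):
    """Prime the network with a seed bol and sample a new composition."""
    model.eval()
    out, state = list(seed), None
    x = torch.tensor([[stoi[c] for c in seed]])
    with torch.no_grad():
        for _ in range(n):
            logits, state = model(x, state)
            probs = torch.softmax(logits[0, -1] / temp, dim=0)
            i = torch.multinomial(probs, 1).item()
            out.append(itos[i])
            x = torch.tensor([[i]])
    return "".join(out)

print(sample())
```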
Here’s a loop that I synthesized by pasting a sampled composition four times, one after the other:
Of course, I also tried training n-gram models with different smoothing methods using the SRILM toolkit. Adding spaces between letters is a quick hack that lets you train character-level models with existing word-level toolkits (a small sketch of this appears after the tweet below). Which one produces better compositions? I can’t tell yet, but I am trying to collect more data and hope to update this post as and when I find time to work on it. I am also not confident that simple perplexity scores are enough to judge the differences between the two models, especially in terms of the rhythmic quality of the compositions. There are many ways in which one could extend this work. One possibility is training on different kinds of compositions: kaidas, relas, laggis, etc., on different rhythm cycles, and on compositions from different gharanas. All of this would require collecting a bigger composition database:
If you have access to any good tabla compositions database(s) please do let me know. Thanks! — Gaurav Trivedi (@trivedigaurav) May 26, 2015
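Here is what the spacing hack looks like, together with a toy unsmoothed trigram sampler in plain Python; the file name and the underscore trick for preserving spaces are my own assumptions (SRILM itself would handle the counting and smoothing):

```python
import random
from collections import Counter, defaultdict

# Assumed input: one composition per line, in the TaalMala notation.
lines = open("tabla_bols.txt").read().splitlines()

# The quick hack: put a space between every character so that a word-level
# toolkit treats each character as a token. Real spaces are protected with
# an underscore so they can be restored later.
spaced = [" ".join(line.replace(" ", "_")) for line in lines]

# A tiny character-level trigram model with maximum-likelihood counts.
counts = defaultdict(Counter)
for line in spaced:
    toks = ["<s>", "<s>"] + line.split() + ["</s>"]
    for a, b, c in zip(toks, toks[1:], toks[2:]):
        counts[(a, b)][c] += 1

def sample(max_len=400):
    """Sample characters from the trigram counts until </s> or max_len."""
    hist, out = ("<s>", "<s>"), []
    for _ in range(max_len):
        choices = counts.get(hist)
        if not choices:
            break
        toks, freqs = zip(*choices.items())
        nxt = random.choices(toks, weights=freqs)[0]
        if nxt == "</s>":
            break
        out.append(nxt)
        hist = (hist[1], nxt)
    return "".join(out).replace("_", " ")   # undo the spacing hack

print(sample())
```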
There is also scope for allowing humans to interactively edit compositions at the places where the AI goes wrong. You could also use the samples it generates as an infinite source of inspiration.
Finally, here’s a link to the work-in-progress playlist of the rhythms I have sampled so far.
References
- Avinash Sastry (2011), N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition. Available: https://smartech.gatech.edu/bitstream/handle/1853/42792/sastry_avinash_201112_mast.pdf?sequence=1.
Footnotes
- If you encountered a lot of new topics in this post, you may find this post on Understanding natural language using deep neural networks and the series of videos on Deep NN by Quoc Le helpful. ^
- On the other hand, Avinash Sastry's work uses a more elaborate Humdrum notation for writing tabla compositions, which is not as easy for tabla players to comprehend. ^
Bike ride from Pittsburgh to DC
This week I did a 335 mi (540 km) bicycle tour from Pittsburgh to Washington DC along with three other folks from school. This is the longest I have ever biked; we covered the distance over a period of 5 days. The route is divided into two trails: the 150-mile Great Allegheny Passage from Pittsburgh to Cumberland, followed by the 184.5-mile Chesapeake and Ohio Canal (C&O Canal) Towpath.
We carried camping equipment on our bikes, which gave us a lot of flexibility in deciding where to stay each night, although we roughly followed the plan our group had agreed upon before starting the trip. We biked for 8-12 hours each day and stayed overnight at the following places:
| Day | City | Total Miles | Daily Miles | Elevation (ft) |
|---|---|---|---|---|
| 0 | Pittsburgh, PA | 0 | 0 | 720 |
| 1 | Ohiopyle, PA | 77 | 77 | 1,230 |
| 2 | Frostburg, MD | 134 | 57 | 1,832 |
| 3 | Little Orleans, MD | 193 | 59 | 450 |
| 4 | Harpers Ferry, WV | 273 | 80 | 264 |
| 5 | Georgetown, Washington DC | 335 | 62 | 10 |
If there’s one change I could make to this schedule, it would be to avoid staying overnight at Harpers Ferry, which involved carrying our bikes up a footbridge without any ramp. It is even more difficult if you are carrying a lot of weight on your bike racks. On the positive side, it allowed us to experience the main streets of Harpers Ferry, which is rightly called "a place in time". Another tip: take the Western Maryland Trail near Hancock. It runs parallel to the route and is paved, which provides a welcome break after long hours of riding on the C&O trail.
There are lots of campsites near the trail. Hiker-biker camps near most major towns on the C&O trail are free to use. We also camped at commercial campgrounds, like the Trail Inn Campground in Frostburg, where we could take a shower. You can also get your laundry done at these places and save some luggage space. For food and drinks, I suggest following the general long-distance biking guidelines about eating at regular intervals while on the bike. I also strongly recommend using a hydration backpack, though it adds to the weight you have to carry on your shoulders.
I used a hybrid bike, a Raleigh Misceo, and was very comfortable riding it through all parts of the trail. I was expecting a couple of flat tires, especially on the C&O sections with loose gravel and other debris on the trail, but didn’t face any problems. As long as you are not on a road bike with narrow tires, you should be good on these trails. Finally, to get back to Pittsburgh we rented a minivan and put our bikes in the trunk, which had ample space for four bikes with their front wheels taken off.
If you decide to take this tour in the future, there are plenty of online guides available for both the GAP and the C&O Canal trails. For a paper-based guide, I would recommend buying the Trailbook published by the Allegheny Trail Alliance. We also created a small webapp called the GAP Map that helped us plan our trip and prepare a schedule.
Here are some of the scenic views along the tour as captured from my phone camera:
Mathematics, Tabla and the Arts
Spring break is here and I finally have ample time to practice my tabla. In the absence of a regular schedule and a teacher, I rely on online videos to improve my skills. Following my YouTube recommendations, I came across this talk given by Manjul Bhargava to a group of school children in Bangalore. Not many of you may know that Dr. Bhargava is not only the 2014 Fields Medal winner but also an accomplished tabla player who has studied under one of the greatest tabla players of our time, Zakir Hussain.
I thought I should post this on my blog, for it is certainly the kind of talk I would have cherished attending as a kid. I also really liked the way he simplified and explained a reasonably difficult concept to his audience. I am sure it would have made a lot of young minds curious about the topic:
If you found this interesting, there is a nice tutorial on the topic titled Mathematics for Poets and Drummers by Dr. Rachel Hall (there is also an extended version that I haven’t been through yet). And if this talk inspired you to pick up the tabla, I found a very useful series of videos for beginning and intermediate tabla players on Tej Singh’s YouTube channel.
Clinical Text Analysis Using Interactive Natural Language Processing
Update: Here’s our full paper announcement with source-code release…
I am working on a project to support the use of Natural Language Processing in the clinical domain. Modern NLP systems often make use of machine learning techniques. However, physicians and other clinicians who are interested in analyzing clinical records may be unfamiliar with these methods. Our project aims to enable such domain experts to make use of Natural Language Processing through a point-and-click interface. It combines novel text visualizations to help its users make sense of NLP results, revise models, and understand changes between revisions. It also allows them to make any necessary corrections to the computed results, thus forming a feedback loop that helps improve the accuracy of the models.
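To give a flavor of what such a feedback loop involves (a toy sketch only, not our tool; the snippets and labels are invented), a reviewer’s corrections can simply be folded back into the training data before the model is refit:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy seed data standing in for clinician-annotated text snippets.
texts = ["denies chest pain", "reports severe chest pain",
         "no shortness of breath", "worsening shortness of breath"]
labels = ["absent", "present", "absent", "present"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def review(snippets):
    """Show predictions, collect corrections, and retrain: the feedback loop."""
    for s in snippets:
        guess = model.predict([s])[0]
        answer = input(f"{s!r} -> {guess}. Enter to accept, or type a label: ")
        texts.append(s)
        labels.append(answer.strip() or guess)
    model.fit(texts, labels)   # refit with the reviewed examples folded in

review(["patient denies any pain", "acute chest pain on exertion"])
```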
Here’s the walk-through video of the prototype tool that we have built:
At this point we are redesigning some portions of our tool based on feedback from a formative user study with physicians and clinical researchers. Our next step would be to conduct an empirical evaluation of the tool to test our hypotheses about its design goals.
We will be presenting a demo of our tool at the AMIA Summit on Clinical Research Informatics and also at the ACM IUI Workshop on Visual Text Analytics in March.
References
- Gaurav Trivedi. 2015. Clinical Text Analysis Using Interactive Natural Language Processing. In Proceedings of the 20th International Conference on Intelligent User Interfaces Companion (IUI Companion ’15). ACM, New York, NY, USA, 113-116. DOI 10.1145/2732158.2732162 [Presentation] [PDF]
- Gaurav Trivedi, Phuong Pham, Wendy Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser. 2015. An Interactive Tool for Natural Language Processing on Clinical Text. Presented at 4th Workshop on Visual Text Analytics (IUI TextVis 2015), Atlanta. http://vialab.science.uoit.ca/textvis2015/ [PDF]
- Gaurav Trivedi, Phuong Pham, Wendy Chapman, Rebecca Hwa, Janyce Wiebe, and Harry Hochheiser. 2015. Bridging the Natural Language Processing Gap: An Interactive Clinical Text Review Tool. Poster presented at the 2015 AMIA Summit on Clinical Research Informatics (CRI 2015), San Francisco, March 2015. [Poster] [Abstract]