{"id":616,"date":"2017-08-17T15:58:38","date_gmt":"2017-08-17T13:58:38","guid":{"rendered":"http:\/\/dfki.de\/smartfactories\/?page_id=616"},"modified":"2020-07-02T10:15:18","modified_gmt":"2020-07-02T10:15:18","slug":"software","status":"publish","type":"page","link":"https:\/\/smartfactories.dfki.de\/?page_id=616","title":{"rendered":"Software"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">GitHub Repository<\/h2>\n\n\n\n<p>We publish selected software packages via GitHub. You can find our most recent developments at <a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/\" target=\"_blank\">https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Multimodal Multisensor Activity Annotation and Recording Tool<\/h2>\n\n\n\n<p>Wearable and ubiquitous sensor systems provide physiological data that can be embedded into intelligent user interfaces. Incorporating such data can improve the interaction experience between humans and computers. We provide one tool for recording and one for annotating multimodal data from multiple sensors, both under an open-source license.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Recording | MMART<\/strong><\/h3>\n\n\n\n<p>The <strong>M<\/strong>ultimodal <strong>M<\/strong>ultisensor <strong>A<\/strong>ctivity <strong>R<\/strong>ecording <strong>T<\/strong>ool (MMART) allows common 2D\/3D cameras and interaction devices (Myo, Leap Motion) to be recorded synchronously. 
You can find the source code and further instructions on <a rel=\"noopener noreferrer\" href=\"https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/MMART\" target=\"_blank\">GitHub<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"\/wp-content\/uploads\/annotool_rec.png\" alt=\"Recording Diagram\" class=\"wp-image-634\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Annotating | MMAAT<\/strong><\/h3>\n\n\n\n<p>The data format is shared with our <strong>M<\/strong>ultimodal <strong>M<\/strong>ultisensor <strong>A<\/strong>ctivity <strong>A<\/strong>nnotation <strong>T<\/strong>ool (MMAAT), which allows for intuitive labeling of the data and provides multiple viewports for visualization. The resulting labeled data can be exported for machine learning purposes. You can find the source code and further instructions on <a rel=\"noopener noreferrer\" href=\"https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/MMAAT\" target=\"_blank\">GitHub<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"\/wp-content\/uploads\/sepant_ui.png\" alt=\"Annotation GUI\" class=\"wp-image-633\"\/><\/figure>\n\n\n\n<p><strong>Citation<\/strong><br>If you use our software, please cite our <a href=\"http:\/\/dl.acm.org\/citation.cfm?id=2971459\" target=\"_blank\" rel=\"noopener noreferrer\">corresponding publication<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Fine-tuning deep CNN models on specific MS COCO categories<\/h2>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft\"><a href=\"\/wp-content\/uploads\/py-faster-rcnn-ft-demo2.png\"><img decoding=\"async\" src=\"\/wp-content\/uploads\/py-faster-rcnn-ft-demo2-268x300.png\" alt=\"py-faster-rcnn-ft-demo2\" class=\"wp-image-650\"\/><\/a><\/figure><\/div>\n\n\n\n<p><br>Fine-tuning a deep convolutional neural network (CNN) is often desirable. 
We aim to use fine-tuned image classification models for, e.g., interactive machine learning and multimodal interaction. To this end, we needed an easy-to-use tool for fine-tuning or re-training state-of-the-art deep neural networks on custom datasets or subsets of existing corpora.<\/p>\n\n\n\n<p><strong>py-faster-rcnn-ft<\/strong><br>We forked the original version of <a target=\"_blank\" rel=\"noopener noreferrer\">py-faster-rcnn<\/a> to add changes relevant to our research. The py-faster-rcnn-<strong>ft<\/strong> software library can be used to <strong>f<\/strong>ine-<strong>t<\/strong>une the VGG CNN M 1024 model on custom subsets of the Microsoft Common Objects in Context (MS COCO) dataset. For example, we improved the demo procedure so that the user no longer has to search the dataset by hand for suitable image files. Our implementation randomly selects images that contain at least one object of the categories on which the model is fine-tuned.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"\/wp-content\/uploads\/py-faster-rcnn-ft.png\"><img decoding=\"async\" src=\"\/wp-content\/uploads\/py-faster-rcnn-ft.png\" alt=\"py-faster-rcnn-ft\" class=\"wp-image-648\"\/><\/a><\/figure>\n\n\n\n<p>py-faster-rcnn-ft is publicly available on <a href=\"https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/py-faster-rcnn-ft\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>.<\/p>\n\n\n\n<p><strong>Citation<\/strong><\/p>\n\n\n\n<p>If you use our software for a research project, we appreciate a reference to our corresponding <a href=\"https:\/\/arxiv.org\/abs\/1709.01476\" target=\"_blank\" rel=\"noopener noreferrer\">paper<\/a> (<a href=\"https:\/\/arxiv.org\/pdf\/1709.01476\">PDF<\/a>).<\/p>\n\n\n\n<p>BibTeX entry:<br><code><br>\n\t@article{Sonntag2017a,<br>\n\t\ttitle = {{Fine-tuning deep CNN models on specific MS COCO categories}},<br>\n\t\tauthor = {Sonntag, Daniel and Barz, Michael and Zacharias, Jan and 
Stauden, Sven and Rahmani, Vahid and F\u00f3thi, \u00c1ron and L\u0151rincz, Andr\u00e1s},<br>\n\t\tarchivePrefix = {arXiv},<br>\n\t\tarxivId = {1709.01476},<br>\n\t\teprint = {1709.01476},<br>\n\t\tpages = {0--3},<br>\n\t\turl = {http:\/\/arxiv.org\/abs\/1709.01476},<br>\n\t\tyear = {2017}<br>\n\t}<\/code><\/p>\n","protected":false},"excerpt":{"rendered":"<p>GitHub Repository We publish selected software packages via GitHub. You can find our most recent developments on https:\/\/github.com\/DFKI-Interactive-Machine-Learning\/ Multimodal Multisensor Activity Annotation and Recording Tool Wearable and ubiquitous sensor systems provide physiological data that can be embedded into intelligent user &hellip; <a href=\"https:\/\/smartfactories.dfki.de\/?page_id=616\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":6,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-616","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/pages\/616","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=616"}],"version-history":[{"count":1,"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/pages\/616\/revisions"}],"predecessor-version":[{"id":904,"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=\/wp\/v2\/pages\/616\/revisions\/904"}],"wp:attachment":[{"href":"https:\/\/smartfactories.dfki.de\/index.php?rest_route=%2Fwp%2F
v2%2Fmedia&parent=616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}