Showing posts with label Medical. Show all posts

Monday, February 18, 2013

Nicholas Volker Case: Molecular Decision Support


One in a Billion: A Boy's Life, a Medical Mystery
(http://www.jsonline.com/features/health/111224104.html)


Nicholas Volker is a little boy with a rare, devastating disease. In a desperate bid to save his life, Wisconsin doctors must decide: Is it time to push medicine's frontier?



Treating each patient as if they are the “average human” is often not effective
Must combine phenotype and genotype in the exam room
Treating each person based on their uniqueness is a truly big data analytics challenge
New tools are required for safe, high quality, cost effective, personalized medicine

Wednesday, January 30, 2013

Hospital Department Names in English (English ↔ Korean)




Friday, December 14, 2012

Clinical Language Understanding: the Future of Electronic Medical Record Software

source: http://www.meaningfulusenetwork.com/clinical-language-understanding-the-future-of-electronic-medical-record-software/



As EMR vendors aim to entice doctors with more functional electronic medical record features, voice recognition technology has gained tremendous importance. Though voice recognition programs generally identify human speech and transform it into free text, their shortfall in the EMR world has been their inability to translate free text to fit into many systems’ check-box and drop-down menu components.
With natural language processing (NLP), however, relevant information is extracted from narrative text and populated into the corresponding fields. NLP, when applied to the medical domain, is referred to as clinical language understanding (CLU). The CLU engine, for example, is able to identify that “cancer” is a “disease” and can relay this to the natural language processor.
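The field-population step can be sketched as a lookup of narrative terms against a clinical vocabulary. The lexicon, field names, and matching logic below are toy assumptions for illustration, not Nuance's engine or a real vocabulary:

```python
import re

# Toy lexicon mapping surface terms to a semantic type and a target EMR field.
# A real CLU engine would use a full medical vocabulary such as SNOMED CT.
LEXICON = {
    "cancer":       {"semantic_type": "disease",    "field": "problem_list"},
    "hypertension": {"semantic_type": "disease",    "field": "problem_list"},
    "metformin":    {"semantic_type": "medication", "field": "medications"},
}

def extract_concepts(narrative):
    """Populate structured EMR fields from free narrative text."""
    fields = {}
    for term, info in LEXICON.items():
        # Whole-word, case-insensitive match against the narrative.
        if re.search(r"\b" + term + r"\b", narrative, re.IGNORECASE):
            fields.setdefault(info["field"], []).append(
                {"term": term, "semantic_type": info["semantic_type"]})
    return fields

print(extract_concepts("Patient with hypertension, started on metformin."))
```

A sketch like this also shows why real engines need more than string matching: it would happily extract "cancer" from "no history of cancer", so negation and context handling are essential.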
Using these technologies, EMR vendors like Nuance are developing platforms to help doctors chart more accurately and efficiently. The Dragon Medical 360 | M.D. Assist software uses CLU to detect missing details and unclear associations between findings, and it can also prompt doctors to identify more specific diagnoses (e.g., chronic versus acute). The MModal tool, meanwhile, can alert clinicians, while they dictate, if they are overlooking any vital information that’s already in the patient’s chart. This can include labs or other tests in the patient’s electronic medical record that may be relevant to what the clinician is currently charting. Similarly, ChartLogic uses dictation and NLP capabilities in a product it calls Stella (similar to Apple’s Siri), which is compatible with many leading EMR software packages.
Electronic medical records in general are headed in the direction of technologies, like CLU, that have the ability to help providers increase input speed and improve workflow. Over time, we are likely to see a growing trend in more advanced voice recognition functions being incorporated into EMR software, which will benefit both physicians and patients alike.

Nuance Tech: Voice Recognition and Clinical Language Understanding

source: http://www.medgadget.com/2012/12/nuance-tech-voice-recognition-and-clinical-language-understanding.html





While at RSNA, we got a chance to check out the latest voice recognition technology from Nuance. These are the guys that make Dragon NaturallySpeaking software, and also power many of the automatic call centers that help direct people and provide information based on natural language processing.  Nuance offers voice recognition and clinical language understanding (CLU) technology that can be embedded into just about any computer software or smartphone/tablet app. The company has released a short video demonstrating the capabilities of its 360 | Development Platform that allows integration of its technology into 3rd party tools:


Wednesday, December 5, 2012

i2b2 (Informatics for Integrating Biology and the Bedside)

source: https://www.i2b2.org/


i2b2: Informatics for Integrating Biology & the Bedside - A National Center for Biomedical Computing
MISSION
i2b2 (Informatics for Integrating Biology and the Bedside) is an NIH-funded National Center for Biomedical Computing based at Partners HealthCare System. The i2b2 Center is developing a scalable informatics framework that will enable clinical researchers to use existing clinical data for discovery research and, when combined with IRB-approved genomic data, facilitate the design of targeted therapies for individual patients with diseases having genetic origins. This platform currently enjoys wide international adoption by the CTSA network, academic health centers, and industry.  i2b2 is funded as a cooperative agreement with the National Institutes of Health.

DRIVING BIOLOGY PROJECTS

Current DBPs
    Autoimmune/CV Diseases
    Diabetes/CV Diseases
Past DBPs
    Airways Diseases
    Hypertension
    Type 2 Diabetes Mellitus
    Huntington's Disease
    Major Depressive Disorder
    Rheumatoid Arthritis
    Obesity

    HIGHLIGHTS
    Video from i2b2 Tutorial
    Developing an i2b2 Cell and Client Plugin
    in conjunction with CTSA KFC Meeting and AMIA Annual Symposium
    Wednesday, November 7, 2012
    Northwestern University
    Now available via AUG Webpage 

    SECOND DEDICATED i2b2 ANNUAL ACADEMIC USERS' GROUP MEETING
    and NLP Workshop
    July 24 - 25, 2012

    *** i2b2 NLP DATA SET #4 (from 2010 Challenge) ***
    NOW AVAILABLE FOR RESEARCH PURPOSES
    A complete set of annotated and unannotated, de-identified patient discharge summaries from the First, Second (Obesity), Third (Medication) and Fourth Shared Tasks for Challenges in NLP for Clinical Data are now available to the community for research purposes. Check it out at our NLP Data Sets page. Please note you must register AND submit a DUA for access.
    *** Publications In the News ***
     Gallagher PJ, Castro V, Fava M, Weilburg JB, Murphy SN, Gainer VS, Churchill SE, Kohane IS, Iosifescu DV, Smoller JW, Perlis RH. Antidepressant response in individuals with Major Depressive Disorder exposed to NSAIDs: a pharmacovigilance study. Am J Psychiatry 2012;169:1065-1072.
    Castro V, Gallagher PJ, Clements CC, Murphy SN, Gainer VS, Weilburg JB, Fava M, Churchill SE, Kohane IS, Smoller JW, Iosifescu DV, Perlis RH. Incident user cohort study of risk for gastrointestinal bleed and stroke in individuals with Major Depressive Disorder treated with antidepressants. BMJ Open. 2012 Mar 30;2(2):e000544.
    Kohane IS. Using electronic health records to drive discovery in disease genomics. Nat Rev Genet. 2011;12:417-428. doi:10.1038/nrg2999.
    *** i2b2 ROADMAP for 2011 ***
    Release 1.6 = “Modifier-enabled i2b2”
    Now available at our Software Page.
    This release allows for the use of modifiers to complement the concept codes and be used in patient queries. For example, multiple modifiers such as dose, frequency, and route can enhance a medication entry.
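A rough sketch of the modifier idea, using invented concept and modifier codes rather than the actual i2b2 star-schema layout:

```python
# One base observation (a medication concept) plus modifier facts refining it.
# All codes below are invented for illustration.
observation = {"patient_num": 1, "concept_cd": "MED:ATORVASTATIN"}
modifiers = [
    {"modifier_cd": "DOSE",      "value": "20 mg"},
    {"modifier_cd": "FREQUENCY", "value": "once daily"},
    {"modifier_cd": "ROUTE",     "value": "oral"},
]

def matches(obs, mods, concept_cd, required_mods):
    """A query matches when the concept and every required modifier agree."""
    if obs["concept_cd"] != concept_cd:
        return False
    have = {m["modifier_cd"]: m["value"] for m in mods}
    return all(have.get(k) == v for k, v in required_mods.items())

# Query: patients on oral atorvastatin, regardless of dose or frequency.
print(matches(observation, modifiers, "MED:ATORVASTATIN", {"ROUTE": "oral"}))
```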

    Major software enhancements
    • Client queries that specify observations should occur in the same encounter or same instance of an encounter
    • Client queries that enable modifiers to be used for specifying patients and encounters
    • Services to allow for the inclusion of previous queries, as well as patient and encounter sets in new Boolean queries
    • Items from Patient and Encounter tables can be used in queries
    • Query performance metrics and enhancements
    • Queries using units other than the normal units (Unit Conversion)
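The unit-conversion enhancement can be sketched as normalizing observed values to a canonical unit before comparison. The glucose factor below follows from its molar mass; the conversion table itself is illustrative, not i2b2's:

```python
# Conversion factors into a canonical unit (mmol/L, shown here for glucose).
# The mg/dL factor comes from glucose's molar mass of roughly 180.16 g/mol.
TO_MMOL_PER_L = {
    "mmol/L": 1.0,
    "mg/dL": 10.0 / 180.16,
}

def normalize(value, unit):
    """Convert a lab value to the canonical unit so queries can compare it."""
    return value * TO_MMOL_PER_L[unit]

# 126 mg/dL and 7.0 mmol/L describe nearly the same glucose level.
print(round(normalize(126, "mg/dL"), 2))
print(normalize(7.0, "mmol/L"))
```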
    Release 1.7 = “Temporal Query enabled i2b2”
    Version 1.7 of the i2b2 software features two important innovations. The first is the introduction of temporal queries, an implementation of the temporal theory development from Core 1. This will allow the definition of events from collections of observations, and then the ordering of those events to occur in specific sequences with a specified number of intervening hours or days. The second innovation is support for the identity management cell, which will enable the maintenance of protected health information in a separate area of the i2b2 Hive. Workflows that begin with unencrypted, fully identified notes or other kinds of identified data will be able to produce de-identified data at various levels. Depending on the transformation chosen, de-identified data may also be linked back to its original identified data. We anticipate that 1.7 will be released at the beginning of 2013.
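A minimal sketch of the temporal-query idea (events derived from observations, then required to occur in order within a bounded gap); the concept codes and data are invented:

```python
from datetime import date

# Observations as (patient, concept, date); the concept codes are invented.
observations = [
    (1, "DX:PNEUMONIA",   date(2012, 3, 1)),
    (1, "MED:ANTIBIOTIC", date(2012, 3, 3)),
    (2, "DX:PNEUMONIA",   date(2012, 5, 1)),
    (2, "MED:ANTIBIOTIC", date(2012, 6, 20)),
]

def first_event(patient, concept):
    """Define an event as the earliest matching observation for a patient."""
    dates = [d for p, c, d in observations if p == patient and c == concept]
    return min(dates) if dates else None

def sequence_within(patient, concept_a, concept_b, max_days):
    """True when event A precedes event B with at most max_days between."""
    a = first_event(patient, concept_a)
    b = first_event(patient, concept_b)
    if a is None or b is None:
        return False
    return a <= b and (b - a).days <= max_days

# Patient 1 started an antibiotic within 7 days of diagnosis; patient 2 did not.
print(sequence_within(1, "DX:PNEUMONIA", "MED:ANTIBIOTIC", 7))
print(sequence_within(2, "DX:PNEUMONIA", "MED:ANTIBIOTIC", 7))
```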
    Release 2.0 = “Clinical-Trial enabled i2b2”
    In the summer of 2013 we are planning the release of a version of i2b2 that represents a bundle of community-supported i2b2 plug-ins and tools. This version will specifically support the end-to-end use case of discovering a set of patients for a clinical trial. Current plug-ins and features will be enhanced, and new features added from the i2b2 community, to provide increasingly focused selection capabilities that narrow down to a final, well-qualified patient set. The process of advancing the levels of approval by data use agreements and institutional review boards will be supported so the recruitment process can proceed with maximal respect for patient privacy and confidentiality.


    Interview: Understanding Clinical Language Understanding with Carina Edwards, VP Solutions Marketing at Nuance Healthcare

    source : http://www.hitconsultant.net/2012/06/14/interview-understanding-clinical-language-understanding-with-carina-edwards-vp-solutions-marketing-at-nuance-healthcare/


    Interview: Understanding Clinical Language Understanding with Carina Edwards, VP Solutions Marketing at Nuance Healthcare


    Carina Edwards, VP Solutions Marketing at Nuance Healthcare
    There is a growing demand to extract structured, “actionable” information from unstructured (dictated) medical documents. Clinical Language Understanding (CLU) technology allows a computer to read and understand electronic free text and extract data for use in countless applications across the healthcare spectrum.  To understand and learn more about CLU technology, HIT Consultant spoke with Carina Edwards, VP, Solutions Marketing at Nuance Healthcare for a deep dive into:
    • CLU Technology
    • CLU technology implications for ICD-10 and Meaningful Use
    • Nuance’s Partnership with 3M HIS

    HIT Consultant: Give me a brief overview of what exactly is clinical language Understanding (CLU).
    Carina Edwards: So clinical language understanding (CLU) is the technology that Nuance has launched as a combination of natural language processing (NLP) and statistical analysis that allows us to take any form of documentation, in text form or in speech form, and extract the relevant clinical facts and codify them against a medical vocabulary, be that ICD-9, CPT, SNOMED, etc. It’s the technology itself that we refer to as clinical language understanding (CLU).
    HIT Consultant: What role does that play particularly for ICD-10?
    Carina Edwards: So, it’s actually the utilization of clinical language understanding (CLU) that’s an important differentiator here. In and of itself, clinical language understanding is not a product per se, and so what Nuance has done is embed that technology into solutions that go to market with a specific use case. So, for ICD-10 as an example, we have two different solutions that are being brought into the marketplace. The first is called MD Assist. We’ve worked with 3M to develop this solution, and what it does is allow physicians, while they’re dictating, to look at the documentation, understand the level of specificity, and get prompted for more specificity when necessary.
    So, let’s give a real-life example. Say I’m in the electronic health record and I’m using Dragon Medical to dictate into it. When I’ve done my dictation and I have my full patient story documented, I’ll hit save and go to the next field, and at that point in time a query will prompt. So, if I had said, “patient presents with heart failure,” in the ICD-10 world that would equate to about 50 or 60 codes, so I need more specificity. And so I prompt the physician to say, “What type of heart failure? What was the acuity? And the specificity of the heart failure?” Then quickly they have two radio buttons they can select: acute, systolic heart failure, and now that goes right into the documentation and it’s one and done. So, that’s the MD Assist solution, and that’s on the front-end capture. If you have the most specific documents, then the coders won’t have to go back, the clinical document improvement specialists won’t have to go back and query the physician to get further information to drive appropriate reimbursement. Now that’s the front-end solution.
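The prompting behavior Edwards describes can be sketched as a lookup from an under-specified term to the attributes that distinguish its codes. The table below is a small illustrative subset of the ICD-10-CM I50.x heart-failure codes, not MD Assist's actual rule set:

```python
# Illustrative subset of ICD-10-CM heart failure codes, keyed by the two
# attributes (type, acuity) an MD Assist-style prompt would collect.
HEART_FAILURE_CODES = {
    ("systolic", "acute"):    "I50.21",
    ("systolic", "chronic"):  "I50.22",
    ("diastolic", "acute"):   "I50.31",
    ("diastolic", "chronic"): "I50.32",
}

def needed_prompts(hf_type=None, acuity=None):
    """Return the questions still to ask, or the specific code once complete."""
    missing = [q for q, v in (("type", hf_type), ("acuity", acuity)) if v is None]
    if missing:
        return {"prompts": missing}
    return {"code": HEART_FAILURE_CODES[(hf_type, acuity)]}

print(needed_prompts())                      # "heart failure" alone: ask both
print(needed_prompts("systolic", "acute"))   # fully specified: one code
```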
    On the back end, if you think about the full workflow of that document, we’re also putting clinical language understanding (CLU) inside of 3M’s 360 Encompass computer-assisted coding application. As the document flows through the system, it goes into the 360 Encompass workflow; when the coder is presented with those facts, they’ll have been extracted from the document using clinical language understanding and tagged to the ICD-10 code structure, almost as a first pass, which they can then edit. And so, it’s a nice seamless transition. Because once again, if you can get more specificity from the source of the documentation at the point of documentation, you’ll drive improved processing and improved efficiency through the back end. And the computer-assisted coding solutions are really meant to provide that first pass that the coders can then work from, edit and submit. So that’s the full workflow of the two things that relate to ICD-10.
    Now if you move further down to your next question around Meaningful Use, clinical language understanding (CLU) is also very appropriate here. So the first part of addressing Meaningful Use from this perspective, from the clinical language understanding perspective: we have the best-in-class eScription technology platform for transcription. We process about a billion patient records a year through eScription across the US. The workflow in that scenario is that the physician picks up a phone, or uses his iPhone, where we have a digital dictation recorder, quickly captures the dictation and sends it, based on the patient, to transcription. The eScription platform leverages speech recognition and a modeling engine that pulls that document and presents the transcriptionist with a first pass at the final report; they edit, they QA and they submit that back to the physician for signing in the electronic record signing queue. Upon that submission, we now run that document through clinical language understanding (CLU), and what that produces is an HL7-CDA Level 2 document that is appended to the transcribed final report.
    So when it goes back to the EHR, it’s in that format so that they can pull the structured information out of that report that is needed as they’re populating fields for Meaningful Use.
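Schematically, the idea is to keep the full narrative and append coded entries alongside it. The XML below only gestures at the shape; a conformant HL7-CDA Level 2 document has far more required structure than this sketch:

```python
import xml.etree.ElementTree as ET

def wrap_with_structure(narrative, facts):
    """Pair the full narrative with coded entries, CDA-style (schematic only:
    a real HL7-CDA Level 2 document has far more required structure)."""
    doc = ET.Element("ClinicalDocument")
    ET.SubElement(doc, "text").text = narrative        # the full patient story
    body = ET.SubElement(doc, "structuredBody")
    for fact in facts:                                 # machine-readable facts
        entry = ET.SubElement(body, "entry",
                              code=fact["code"], codeSystem=fact["system"])
        entry.text = fact["term"]
    return ET.tostring(doc, encoding="unicode")

xml_doc = wrap_with_structure(
    "Patient presents with acute systolic heart failure.",
    [{"term": "heart failure", "code": "I50.21", "system": "ICD-10-CM"}],
)
print(xml_doc)
```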
    HIT Consultant: From an integration standpoint, does it easily integrate with all existing EMRs for a hospital?
    Carina Edwards: Yes, so that’s the best part. We’ve participated, we’ve been a long-time member of the Health Story Initiative, which defines the HL7-CDA standard, and as we move to Stage 2 Meaningful Use, EHRs need to be able to consume and reconcile HL7-CDA Level 2 documents. So, it’s a great win-win for the industry. The best part here is that we let physicians use the clinical capture workflow that’s most efficient for them, be it Dragon Medical on the front end, directly into the electronic medical record, or leveraging eScription for transcription on the back end to get their documentation in. But both come in with structured data and the full physician narrative. So you’re not losing the detail of the patient story, but you’re still gaining the structure required to meet all the different regulatory compliance requirements.
    HIT Consultant: Now you mentioned the partnership with 3M. Just briefly discuss that partnership.
    Carina Edwards: Certainly. 3M is a very strategic partner to Nuance. They are a leader in the encoder business across the US. The relationship is multifaceted. We’ve jointly developed the MD Assist solution. So, what we’ve done there is take 3M’s proprietary knowledge, the query sets that they provide for their clinical document improvement program (CDIP), and combine those query sets with our clinical language understanding (CLU) technology and our base platforms: Dragon, eScription, Dictaphone Enterprise Speech and then 3M’s ChartScript platform. And now, on any one of those platforms, the combination is the MD Assist solution. When you’re using one of those foundation products, MD Assist is sold as an add-on, and it combines 3M’s knowledge and queries with what is understood and gleaned by the CLU engine. As in the example I gave you earlier, as the document is produced and it’s now text in a notes field, clinical language understanding extracts those facts, we run them against the 3M rule set, and then we prompt the physician based on that knowledge. So the first part of the partnership was this co-development of MD Assist.
    The second part of the partnership is the embedding of our technology within their inpatient computer-assisted coding product: the 360 Encompass inpatient CAC product is powered by Nuance’s clinical language understanding engine. So we have joint R&D teams that work together, and a joint office that drives all the different vectors of the partnership.
    Part 2 of this interview will be posted soon. 
    About Carina Edwards:
    Carina is responsible for Nuance Healthcare’s marketing strategy and has direct line of authority for managing the solutions marketing efforts of the individual lines of business, including HIM, Diagnostics and Dragon Medical. Prior to joining the company in January of 2011, Carina was the Vice President of Marketing and Product Management at Zynx Health. In this role, Carina transformed the marketing and product management capabilities of the corporation, and redesigned the infrastructure, organization, and governance to achieve the organization’s aggressive growth goals. Prior to Zynx Health, Carina held global marketing, product management, and business development leadership roles at Philips Healthcare, Sapient, and Impact Innovations Group. Carina holds a Master of Business Administration (MBA) degree from Boston College, as well as a Bachelor of Science (BS) degree in Management Information Systems and Decision Sciences from George Mason University.

    UIMA - Unstructured Information Management Architecture


    UIMA

    From Wikipedia, the free encyclopedia
    Apache UIMA
    Developer(s): IBM; Apache Software Foundation (since October 2006)
    Stable release: 2.3.1 / March 22, 2011
    Written in: Java, with C++ enablement
    Operating system: Cross-platform
    Type: Text mining, information extraction
    License: Apache License 2.0
    Website: http://uima.apache.org/
    UIMA stands for Unstructured Information Management Architecture. An OASIS standard as of March 2009, UIMA is to date the only industry standard for content analytics.
    UIMA is a component software architecture for the development, discovery, composition, and deployment of multi-modal analytics for the analysis of unstructured information and its integration with search technologies developed by IBM. The source code for a reference implementation of this framework has been made available on SourceForge, and later on the website of the Apache Software Foundation.
    An example is a logistics analysis software system that could convert unstructured data such as repair logs and service notes into relational tables. These tables can then be used by automated tools to detect maintenance or manufacturing problems.
    Other examples are systems used in medical environments to analyze clinical notes.

    Structure of UIMA

    The UIMA architecture can be thought of in four dimensions:
    1. It specifies component interfaces in an analytics pipeline
    2. It describes a set of Design patterns
    3. It suggests two data representations: an in-memory representation of annotations for high-performance analytics and an XMLrepresentation of annotations for integration with remote web services.
    4. It suggests development roles allowing tools to be used by users with diverse skills
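Point 3 can be sketched with one annotation type in both forms, in plain Python standing in for the actual UIMA CAS and XMI APIs:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Annotation:
    """In-memory form: cheap to create and scan during analysis."""
    begin: int
    end: int
    type: str

def to_xml(annotations):
    """XML form: suitable for exchange with remote web services."""
    root = ET.Element("annotations")
    for a in annotations:
        ET.SubElement(root, "annotation", type=a.type,
                      begin=str(a.begin), end=str(a.end))
    return ET.tostring(root, encoding="unicode")

# "fever" spans characters 0-5 and "cough" spans 10-15 in the text below.
text = "fever and cough"
anns = [Annotation(0, 5, "Symptom"), Annotation(10, 15, "Symptom")]
print(to_xml(anns))
```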

    IBM Watson - The Jeopardy Challenge

    In February 2011 a computer from IBM Research named Watson won a competition on Jeopardy! against Jeopardy star Ken Jennings and undefeated Jeopardy champion Brad Rutter. Watson is a highly advanced computer from IBM Research that uses UIMA for real-time content analytics.



    OHNLP workshop - MedKAT/p ...

    https://wiki.nci.nih.gov/download/attachments/63996717/OHNLP_workshop_AMIA09.ppt?version=1&modificationDate=1320409955000

    Open Health Natural Language Processing (OHNLP) Consortium

    source: https://wiki.nci.nih.gov/display/VKC/Open+Health+Natural+Language+Processing+(OHNLP)+Consortium




    Open Health Natural Language Processing (OHNLP) Consortium

    What is the Goal of the OHNLP Consortium?

    The goal of the Open Health Natural Language Processing Consortium is to establish an open-source consortium to promote past and current development efforts and to encourage participation in advancing future efforts. The purpose of this consortium is to facilitate and encourage new annotator and pipeline development, to exchange insights and collaborate on novel biomedical natural language processing systems, and to develop gold-standard corpora for development and testing. The Consortium promotes the open-source UIMA framework and SDK as the basis for biomedical NLP systems. Applications created within UIMA consist of software components (referred to as annotators) and their associated configuration files and external resources. Within the framework, one can also create complete pipelines composed of a sequence of annotators and the data flow between them.
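The annotator-and-pipeline pattern can be sketched as a sequence of functions that each add annotations to a shared document object (plain Python standing in for the UIMA SDK; the lexicon is a toy):

```python
def tokenizer(doc):
    """Annotator 1: split the raw text into token annotations."""
    doc["tokens"] = doc["text"].split()
    return doc

def problem_annotator(doc):
    """Annotator 2: flag tokens found in a (toy) problem lexicon."""
    lexicon = {"pneumonia", "sepsis"}
    doc["problems"] = [t for t in doc["tokens"] if t.lower() in lexicon]
    return doc

def run_pipeline(text, annotators):
    """Pipeline: pass one shared document through each annotator in order."""
    doc = {"text": text}
    for annotate in annotators:
        doc = annotate(doc)
    return doc

result = run_pipeline("Admitted for pneumonia", [tokenizer, problem_annotator])
print(result["problems"])
```

The order matters: the problem annotator consumes the tokens the tokenizer produced, which is exactly the data-flow dependency a pipeline definition captures.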

    Why use NLP?

    The clinical and research medical community creates, manages and uses a wide variety of semi-structured and unstructured textual documents. To perform research, to improve standards of care and to evaluate treatment outcomes easily — and ideally, in an automated fashion — access to the content of these documents is required. The knowledge contained in unstructured textual documents (e.g., pathology reports, clinical notes), is critical to achieving all of these goals. For instance, clinical research usually requires the identification of cohorts that follow precisely defined patient- and disease-related inclusion and exclusion parameters. Biomedical NLP systems extract structured information from textual reports, facilitating searching, comparing and summarization.

    What is NLP?

    Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. Natural language generation systems convert information from computer databases into readable human language. Natural language understanding systems convert samples of human language into more formal representations such as parse trees or first order logic that are easier for computer programs to manipulate.
    NLP is used to classify, extract from, encode and summarize text documents. An NLP application unlocks the text for use in decision support, outbreak detection and quality review.
    NLP applications in the biomedical domain include:
    • mining of information from biomedical documents and publications
    • retrieval of information from large, unorganized collections
    • communication with clinicians, patients, and scientists through natural language
    Examples of NLP tasks are:
    • classifying chief complaints into syndrome categories, for example mapping the chief complaint of cough or SOB into the respiratory system category
    • extracting a problem list from a patient's history and physical examination
    • determining change in a tumor size over a period of time (example of encoding)
    • summarizing pages of past clinical notes such as family history, chronic conditions, new complaints and test results
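The first of these tasks can be sketched as keyword matching against syndrome lexicons; the lexicons here are toy examples, far smaller than a real system's:

```python
# Toy syndrome lexicons; SOB is the usual abbreviation for shortness of breath.
SYNDROMES = {
    "respiratory":      {"cough", "sob", "wheezing"},
    "gastrointestinal": {"nausea", "vomiting", "diarrhea"},
}

def classify_complaint(complaint):
    """Return every syndrome category whose lexicon matches the complaint."""
    words = set(complaint.lower().replace(",", " ").split())
    return sorted(cat for cat, terms in SYNDROMES.items() if words & terms)

print(classify_complaint("cough and SOB"))
print(classify_complaint("nausea, vomiting"))
```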
    Applied examples of NLP use include:
    • identifying unreported MRSA infections
    • extracting information on pacemaker implantation procedures
    • identifying family history diagnoses
    • generating a problem list
    • identifying adverse events
    • matching patients to clinical trials
    There are two main approaches to NLP: the symbolic approach and the statistical approach.
    Symbolic NLP includes:
    • Morphological Knowledge (how words are created)
    • Lexical Knowledge (string matching)
    • Syntactic Knowledge (how words can be combined to form sentences)
    • Semantic Knowledge (what words mean)
    • Pragmatic Knowledge (how sentences are used in different situations)
    • Discourse Knowledge (how the preceding sentences affect interpretation of next sentences)
    Statistical NLP includes:
    • Modeling document content as bag-of-words (if “cough” appears > fluid)
    • Modeling probabilistic relationships among words and phrases (“purulent discharge” > fluid; “upon discharge” > release)
    • Modeling probabilistic relationships between words and concepts (caries, cavity, abrasion > caries)
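The bag-of-words line can be sketched as summing word-level evidence for a concept, with made-up weights. It also shows why the phrase-level modeling above matters: naive word counts cannot tell “purulent discharge” from “upon discharge”:

```python
# Made-up evidence weights for how strongly a word suggests "fluid".
EVIDENCE_FOR_FLUID = {"cough": 0.5, "purulent": 0.75, "discharge": 0.25}

def fluid_score(text):
    """Sum the evidence from each word present, ignoring order (bag of words)."""
    return sum(EVIDENCE_FOR_FLUID.get(w, 0.0) for w in text.lower().split())

# A plain bag of words cannot separate "purulent discharge" (a finding) from
# "upon discharge" (leaving the hospital); both sentences score on "discharge".
print(fluid_score("purulent discharge noted"))
print(fluid_score("stable upon discharge"))
```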

    Where do I find out more?