Thursday, December 25, 2008

Bruker Execs to Take Salary Cut in 2009

A few of Bruker’s top executives will take pay cuts in the range of 10 percent to 25 percent in 2009, the firm reported in a filing today with the US Securities and Exchange Commission.

The 2009 salary for Bruker Chairman, President, and CEO Frank Laukien will decrease 25 percent to $318,750 from his 2008 salary of $425,000. CFO and Treasurer William Knight will receive a 2009 salary of $288,000, down 10 percent from his 2008 salary of $320,000. Brian Monahan, corporate controller and EVP of Bruker Daltonics, will receive a 2009 salary of $180,000, down 10 percent from his 2008 salary of $200,000.

The firm said in the filing that the salary reductions were temporary, but it provided no further details. At the end of October, Bruker reported flat third-quarter revenue. During its Q3 conference call, Laukien noted that beginning this past summer the firm had begun taking steps to reduce its operating and interest expenses and its exposure to currency fluctuations.

“We expect that our cost-cutting initiatives will already have noticeable positive effects in the fourth quarter of 2008 and first quarter of 2009, and that by the middle of 2009 we will see annualized reductions in our overall costs of greater than $12 million,” he said.

Bruker’s shares closed down 1 percent at $4.21 in an abbreviated trading session on the Nasdaq. While stocks across the board are down due to the current economic climate, Bruker’s shares have been hit particularly hard this year, falling 68 percent since the beginning of 2008.


Be part of the XTractor community.

  • XTractor, the first of its kind: a literature alert service that provides manually curated and annotated sentences for the keywords of your choice
  • XTractor maps extracted entities (genes, processes, drugs, diseases, etc.) to multiple ontologies
  • Enables customized report generation; with XTractor, sentences are categorized into biologically significant relationships
  • Categorized sentences can then be tagged and shared across multiple users
  • Users can create their own database for a set of key terms
  • Keywords of preference can be changed from time to time, with changing research needs
  • XTractor is thus a platform for real-time, highly accurate data, along with the ability to share and collaborate


Signing up is free and takes less than a minute. Just click here: www.xtractor.in.











Monday, December 22, 2008

Google Shutters Its Science Data Service

The Google Datasets Project Comes to An End

Google will shutter its highly anticipated scientific data service in January without ever officially launching the product, the company said in an e-mail to its beta testers.

Once nicknamed Palimpsest, but more recently going by the staid name Google Research Datasets, the service was going to offer scientists a way to store the massive amounts of data generated in an increasing number of fields. About 30 datasets — mostly tests — had already been uploaded to the site.

The dream appears to have fallen prey to belt-tightening at Silicon Valley's most innovative company.

"As you know, Google is a company that promotes experimentation with innovative new products and services. At the same time, we have to carefully balance that with ensuring that our resources are used in the most effective possible way to bring maximum value to our users," wrote Robert Tansley of Google on behalf of the Google Research Datasets team to its internal testers.

"It has been a difficult decision, but we have decided not to continue work on Google Research Datasets, but to instead focus our efforts on other activities such as Google Scholar, our Research Programs, and publishing papers about research here at Google," he wrote.

Axing this scientific project could be another sign of incipient frugality at Google. Just a couple of weeks ago, Google CEO Eric Schmidt told the Wall Street Journal that his company would be cutting back on experimental projects. First described in detail by Google engineer Jon Trowbridge at SciFoo 2007 — the slides from a later version of the talk are archived on the Partial Immortalization blog — the project was going to store, for free, some of the world's largest scientific datasets. In Trowbridge's slides, he points out the 120-terabyte Hubble Legacy Archive and the one-terabyte Archimedes palimpsest.

"It's a sad story if it's true," wrote Attila Csordas, a stem cell biologist and author of Partial Immortalization who recently moved to Hungary from Tulane University, in an email to Wired.com. "Assuming it is true, that might mean that Google is still a couple of years away from directly helping the life sciences (on an infrastructural level)."

Other scientists remained hopeful that the service might return in better times.

"The Space Telescope Science Institute has had a long positive relationship with Google that started with our partnership in GoogleSky in early 2006," said astrophysicist Alberto Conti of STScI. "We were looking forward to Google's commitment to helping the astronomical community with the data deluge, and we are sure Google will reconsider this decision in the future. While perhaps understandable in this economic climate, it's sad to see Google leave the field."

And, Conti noted, other companies may step up to help scientists manage their information.

"Amazon is doing exactly the opposite and they might actually fill the void," he said.

Google representatives did not immediately respond to a request for comment.



Happy Birthday XML

This comes as a tribute to XML on its one-decade milestone!

As anyone with kids—or a good memory—knows, when you cross the "double digits" birthday threshold, it’s a big deal. This year, XML crossed this threshold on Feb. 10, and this got me thinking about questions that I might ask this 10-year-old in order to gain perspective on its past and future. I know I’m late, but XML is nothing if not flexible. It assured me that even a belated party is better than none, especially if I invited Alexander Falk, founder and CEO of Altova (its flagship product XML Spy is one of our favorites) and a real XML aficionado (www.xmlaficionado.com).

I began with this: "XML, you're much more famous than your parent SGML, and your sibling HTML 4.01 was deprecated in favor of an XML standard, XHTML. Techies worldwide have heard about you, and mighty standards battles have been waged over you. But while growing is easy in the first 10 years, soon you'll have to be able to point to practical accomplishments. What have you been up to lately?"

Confident as any precocious 10-year-old, XML replied: "First, I've inspired more than 38 core recommendations, everything from Canonical XML 1.1 to XQuery to XSL Transformations (XSLT), and many others. And, of course, the list of standards built on XML is enormous, including SOAP, DITA, and XBRL. And don't forget: The World Wide Web Consortium fosters only the development of basic XML standards. Other organizations such as OASIS and XBRL International have built many practical XML applications on top of the core recommendations. I'd say I'm off to a pretty good start."

I then pointed out that one of XML's siblings may be making a comeback as HTML 5. Given the earlier deprecation, one wonders why we would need a new version of HTML. One reason, according to the W3C, is that "new elements are introduced based on research into prevailing authoring practices." This sounds a lot like backsliding. XML countered: "Well, you know about sibling rivalry. And HTML 5 may be a long time in getting approved, if it ever is. And I might point out that even its authors admit it isn't a complete replacement for XHTML. You see, I'm so flexible that I can do just about anything." Falk, in defense of XML, said, "I'm afraid the reality is that a lot of HTML is still created by hand. Tools (such as Dreamweaver) have been very slow to enforce XHTML compliance, and people continue to generate sloppy HTML pages." Without wanting to spoil the party, I noted that, as ever, practicality triumphs. If sloppy webpages work, standards take a back seat.
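The gap Falk describes comes down to XML's well-formedness rules: every element must be closed and properly nested. As a minimal sketch (the two fragments below are made-up examples, not taken from any real page), Python's standard XML parser can tell a clean XHTML fragment from sloppy hand-written HTML:

```python
import xml.etree.ElementTree as ET

# A well-formed XHTML fragment: every tag is closed and properly nested.
xhtml = '<p>A <b>well-formed</b> fragment, with every tag closed.</p>'
# Typical hand-written HTML: browsers accept it, XML parsers do not.
sloppy = '<p>A hand-written page<br>with unclosed tags'

def is_well_formed(fragment: str) -> bool:
    """Return True if the fragment parses as XML (i.e. is XHTML-clean)."""
    try:
        ET.fromstring(fragment)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(xhtml))   # True
print(is_well_formed(sloppy))  # False
```

A browser will happily render the second fragment, but an XML toolchain (XSLT, XQuery, validation) will reject it, which is exactly why hand-authored pages drift away from XHTML.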

I agreed about the need for practicality and moved on to my next topic: Microsoft Office's OOXML for Word. I see this as essentially a replacement for its proprietary "rich text format," a way of displaying text on pages. Since XML has the separation of look-and-feel from content in its DNA, I asked: "Isn't OOXML for Word's use of XML pretty superficial, since you can't really do much more with it than you could with rich text format?" Sensing a rhetorical trap, XML replied, "Yes, I personally prefer OpenOffice's ODF standard, since it takes better advantage of other XML standards. But you can't argue with Microsoft Office's success, and it may become an ISO standard (ISO/IEC 29500) just as ODF is. Not only that, but this change forces Microsoft to document and manage future changes in an open way." That was true — XML seems wise beyond its 10 years — but Microsoft won't support that standard until Office 2014, at least six years hence.

However, since this was a party, I moved on to my last, least contentious question: "What do you think will be the biggest surprise use of your standard in the next 10 years, XML?" I was amazed to hear the response: "XBRL! This will transform public corporations and the financial services industry, affecting investors, processes, and data. It will level the investment playing field." I was impressed to hear XML’s familiarity with capital markets and investing. I was even more surprised when Falk agreed: "Altova is working on support for XBRL in the next major software release, v2009, and plans to have XBRL-specific features, including XBRL validation, taxonomy editing, and data mapping." This statement from a longtime XML vendor is a serious commitment indeed, and it can only support widespread acceptance of XBRL.

Differences of opinions aside, XML’s next 10 years look bright indeed.

By Robert J. Boeri - December 2008 Issue, Posted Dec 01, 2008


Wednesday, December 17, 2008

Invitrogen’s Acquisition of U of Houston Startup VisiGen Pays Dividends for School

VisiGen Biotechnologies, a University of Houston gene-sequencing outfit acquired by Invitrogen in October for $20 million, represents the university’s biggest return on a spinout to date, and could eventually become one of its biggest overall tech-transfer wins, the school said last week.

As a result of VisiGen’s acquisition, UH, which held an undisclosed equity stake in the startup, will receive nearly $500,000 from the initial installment of the deal. Plans for follow-on installments were not disclosed. An indirect benefit comes from the fact that approximately half of VisiGen’s current employees are UH graduates, the school said.

Several of the scientists who founded the company will now continue to research second-generation sequencing techniques in their UH laboratories. Whether this research eventually sparks additional collaborations between VisiGen and the university is now up to Life Technologies, the biotech tool giant that resulted from Invitrogen's multi-billion-dollar merger with Applied Biosystems last month.

Do you want to know more?


Wednesday, December 10, 2008

Web 2.0 and Semantic Web for Bioinformatics

Here is a hand-picked item, one of my favorites among the blogs I came across recently. This article reflects many of the things about Web 2.0 that I have appreciated and blogged about in the past.

Why should a (bioinformatics) scientist learn web development ?

Up to now, bioinformatics research with genomics datasets has worked like this: you download the data from the website of a big-iron institution (NCBI, TAIR), set it up locally, BLAST it, load it into MySQL, parse it with a Perl script, and do all sorts of other unimaginable things. Even though bioinformaticians might be unaware of the term, part of the local processing that happens with the data is a mashup. The term refers to the combination of pieces of data from different sources, akin to what has been happening on the web (see also Web 2.0 or the programmable web). This is nowhere close to the myriad Web 2.0 mashups that exist out there, created using APIs offered openly by different servers. In this case, different sets of data are brought together by the mashup developer, who adds value to them through their recombination (and reciprocally adds value to the providing server, by spreading its data and offering a better view of it).

While the big-iron bioinformatics institutions don't quite live in a parallel universe from Web 2.0 (we have to credit the NCBI server for its CGI interface), they are light years away from the programmable web. That is partly because of the technologies they are using (forget about Ruby on Rails and REST), but also because of the small number of institutions like NCBI offering APIs.

So why should a (bioinformatics) scientist learn web development? Because the situation I describe above will change. These bioinformatics institutions will adopt Web 2.0 at some point during the next few years - I can bet you now that, OK, maybe in five years, we will have an NCBI running a nice REST API backed by Rails or Django. But it might happen even earlier, when people take things into their own hands. And for that I refer you to Amazon Web Services, where bioinformaticians can build their own NCBI running on Rails and sell it to other Web 2.0-minded scientists who understand the (added) value of an interoperable web of data.
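The "mashup" idea above is easy to see in miniature. The sketch below uses made-up gene IDs and two in-memory dictionaries standing in for what would, on a programmable web, be two providers' REST APIs:

```python
# Two hypothetical data sources, standing in for remote services.
sequence_db = {              # e.g. sequences pulled from an archive
    "geneA": "ATGGCGTAA",
    "geneB": "ATGTTACGA",
}
function_db = {              # e.g. annotations from a second provider
    "geneA": "DNA repair",
    "geneB": "membrane transport",
}

def mashup(gene_id: str) -> dict:
    """Recombine both sources into one value-added record."""
    return {
        "id": gene_id,
        "sequence": sequence_db.get(gene_id),
        "function": function_db.get(gene_id),
    }

print(mashup("geneA"))
# {'id': 'geneA', 'sequence': 'ATGGCGTAA', 'function': 'DNA repair'}
```

The value added lives in the join: neither source alone answers "what does this sequence do?", and the same recombination is what the post hopes NCBI-style institutions will eventually expose behind open APIs.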


Tuesday, December 9, 2008

BioInformatics National Certification (BINC) Examination

The University of Pune (UoP), on behalf of the Department of Biotechnology (DBT), Government of India, will conduct the BioInformatics National Certification (BINC) examination. The objective of this examination is to certify bioinformatics professionals, both formally trained and self-trained.

Eligibility

Graduates in Science, Agriculture, Veterinary Science, Medicine, Pharmacy, and Engineering & Technology are eligible to appear for the examination. They need not have any formal training (certificate, diploma, or degree) in bioinformatics. Students in the final year of a Bachelor's degree are also eligible to apply.

Application and syllabus

Online application begins on 1 December 2008 and will continue until 16 January 2009. The examination fee is Rs 600 for the general category, Rs 450 for reserved categories, and US$100 for foreign students. Please visit the website (bioinfo.ernet.in/binc) for detailed information. The syllabus consists of four sections: biology, physical and chemical sciences, IT, and bioinformatics.

Examination

The examination is scheduled for 21–22 February 2009 and will be in three parts. Paper I will be objective type; only those who pass it with a minimum of 40% marks will be eligible to appear for Papers II and III. Paper II will be short-answer type, while Paper III will be a computer-based practical. Certification will be awarded to those who secure a minimum of 40% in all three papers.

Research fellowships will be awarded to 15 BINC-qualified Indian nationals to pursue a Ph.D. at Indian institutes/universities. Note that a candidate must possess a postgraduate degree and meet the criteria of the institute/university in order to avail of the research fellowship. In addition, a cash prize of Rs 10,000 will be awarded to the top 10 BINC qualifiers.

For details refer to website: bioinfo.ernet.in/binc/

Current Science, Vol. 95, No. 11, 10 December 2008, p. 1640


Wednesday, December 3, 2008

India Fights Back - WE NEED ACTION

Let us force the international community to persuade Pakistan to declare that its nuclear arsenal is no longer in the state's control and may at any time fall into the hands of the terror outfits camped in POK, and also to declare the regions infested with terrorist training camps federally uncontrollable.

PAK has to pay for this.. The babus are answerable!

We will not forget...WE NEED ACTION...The email campaign...

Mumbai: We Will Not be Divided. Sign the petition

'Rebuild India' Mission - Article Repository










Thursday, November 27, 2008

Perfect harmony

Ridiculed by some, Gaia theory - the idea that all living and non-living components on Earth work together to promote life - is gaining support.

Earth is a perfect planet for life but, according to Gaia theory, this is no coincidence. From the moment life first appeared on Earth, it has worked hard to make Earth a more comfortable place to live. Gaia theory suggests that the Earth and its natural cycles can be thought of as a living organism. When one natural cycle starts to go out of kilter, other cycles work to bring it back, continually optimising the conditions for life on Earth. Named after the Greek Earth goddess Gaia, the theory was developed in the 1960s by the scientist Dr James Lovelock. At the time, Lovelock was working for NASA, looking at methods of detecting life on Mars. The theory came about as a way of explaining why the Earth's atmosphere contains high levels of nitrogen and oxygen.

Initially, Gaia theory was ignored, and then later ridiculed by scientists such as Richard Dawkins and Stephen Jay Gould. However, in recent times stronger evidence for the theory has emerged and Gaia has started to gain support. The theory helps to explain some of the more unusual features of planet Earth, such as why the atmosphere isn't mostly carbon dioxide, and why the oceans aren't more salty. In its early years, Earth's atmosphere was mostly carbon dioxide - the product of multiple volcanic burps. It wasn't until life arrived that the balance began to change. Bacteria produced nitrogen, an inert gas, and photosynthesising plants produced oxygen, a very reactive gas. Ever since that time, about 2.5 billion years ago, Earth's atmosphere has contained significant amounts of nitrogen and oxygen, supporting life on this planet. The nitrogen helps to keep things stable, preventing oxygen levels from climbing too high and fuelling runaway fires. Meanwhile, the oxygen supports complex life.

Gaia also helps to explain how the oceans are kept in balance. Rivers dissolve salt from rocks and carry it to the ocean, yet ocean salinity has remained at about 3.4% for a very long time. It appears that the salt is removed again when water is cycled through cracks on the ocean floor. This process keeps the oceans' salinity in balance and at a level that most lifeforms can tolerate. These processes are not thought to be conscious ones, or to favour any one life form over another. Gaia theory simply maintains that Earth's natural cycles work together to keep the Earth healthy and support life on Earth. Lovelock argues that humans have now pushed Gaia to her limit. In addition to filling the atmosphere with carbon dioxide, we have hacked our way through the "lungs" of the planet (the rainforests) and driven many species to extinction. He thinks we are heading for a very warm world, where only polar regions are comfortable for most life forms. Eventually, he suspects, Gaia will pull things back into check, but it may be too late for the human race.

Explainer: Feedback loops

Feedback loops often appear to keep the planet in balance. One good example of this is the way in which atmospheric carbon dioxide is kept in check. Carbon dioxide is pumped into the atmosphere by volcanoes, and removed by the weathering of rocks (encouraged by bacteria and plant roots in the soil). When it reaches the sea, the dissolved carbon dioxide is used by tiny organisms, known as coccolithophores (algae), to make their shells. When coccolithophores die they release a gas - dimethyl sulphide - which encourages the formation of clouds in the atmosphere. When atmospheric carbon dioxide levels become too high, coccolithophores get busy, locking up more carbon dioxide in their shells and pumping dimethyl sulphide into the atmosphere when they die - producing clouds which reflect back sunlight and help the Earth to cool. Conversely, if atmospheric carbon dioxide levels become low, coccolithophores reduce their activity.
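The negative-feedback shape of that loop can be caricatured in a few lines of code. This is a toy illustration only - the numbers and the linear "activity" rule are invented, not a climate model:

```python
# Toy negative feedback in the spirit of the coccolithophore loop:
# algal activity rises with excess CO2, and that activity draws CO2 back down.

def step(co2: float, setpoint: float = 280.0, gain: float = 0.1) -> float:
    """One round of the loop: bloom in proportion to the CO2 excess,
    and let the bloom (shells + clouds) remove that much CO2."""
    activity = gain * (co2 - setpoint)  # blooms when CO2 is high, idles when low
    return co2 - activity

co2 = 400.0                 # start the system well above the setpoint
for _ in range(50):
    co2 = step(co2)

print(round(co2, 1))        # has drifted most of the way back toward 280
```

Because the correction is proportional to the deviation, the same loop also pushes a too-low CO2 level back up, which is the defining property of a feedback that "keeps the planet in balance".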

Over the past 200 years mankind has greatly increased atmospheric carbon dioxide levels, and recently there has been evidence that algal blooms in the ocean are increasing. Could Gaia be trying to correct our mistake?

Wednesday, November 26, 2008

Putting That Bioinformatics 101 Class to Work

Of a paper called "Metagenome Annotation Using a Distributed Grid of Undergraduate Students," Sandra Porter says: "I just love this title! It's nerdy and cute, all at the same time."

The paper describes a class in which students from the University of Marseilles investigated the function of unidentified genes from the Global Ocean Sampling experiment. All the sequences were obtained from the environmental sequence division at the NCBI.

French researchers describe their strategy for teaching undergraduate-level bioinformatics using cutting-edge genomic data and a Web-based learning tool. The students then annotated real metagenomic sequences from the Global Ocean Sampling experiment. "In return for their much-needed help sorting out oodles of DNA data, the undergrads gain a practical knowledge of the work involved in doing bioinformatics and metagenomics, and, most importantly of all, they get to experience what it's like to do real research," says Karen James at the Beagle Project. Jonathan Eisen's a fan of the work, too, not only because it was metagenomics and published in a PLoS journal, but also because the software is open source.

Pascal Hingamp et al. discuss the Open Source, Open Science system for metagenome annotation in detail (see the PLoS Biology paper, "Metagenome Annotation Using a Distributed Grid of Undergraduate Students").

They do this as part of a course on metagenome annotation. And the software for running this is all Open Source and available. They say in a way this is a metagenomics version of the Undergraduate Genomics Research Initiative (UGRI) which was described in a PLoS Biology paper previously.

Tuesday, November 25, 2008

Systems Biology Can Uncover Signatures of Vaccination Immune Response

A team of American and French researchers has used systems biology to identify gene signatures predicting human immune responses to the yellow fever vaccine, YF-17D. The work appeared in an advance online publication in Nature Immunology yesterday.

Using high-throughput gene expression measurements, multiplex analysis of cytokines and chemokines, and multi-parameter flow cytometry, investigators tested samples taken from more than a dozen individuals in the days and weeks following their yellow fever vaccination. Computational modeling allowed them to come up with signatures predicting CD8+ T-cell and neutralizing antibody responses to YF-17D — insights into vaccine immunogenicity that may inform future vaccine research and development.

“The identification of gene signatures that correlate with, and are capable of predicting, the magnitudes of the antigen-specific CD8+ T-cell and neutralizing antibody responses provides the first methodological evidence that vaccine-induced immune responses can indeed be predicted,” senior author Bali Pulendran, an immunologist and virologist at the Emory Vaccine Center in Atlanta, and his colleagues wrote.

The yellow fever vaccine, which was developed in the 1930s, has been administered to more than 600 million people around the world. Because it is among the most effective vaccines to date — protecting 80 to 90 percent of the individuals who receive it — the researchers reasoned that YF-17D could serve as a good model for studying the early immune response to vaccination.
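The core idea - a "signature" that summarizes a handful of genes and correlates with the measured response - can be sketched very simply. The gene values and antibody titers below are invented for illustration; the study itself used high-dimensional data and far more sophisticated computational models:

```python
# Toy gene-signature sketch (all numbers are made up, not from the study).
# Each vaccinee: expression of three hypothetical signature genes + antibody titer.
vaccinees = [
    ((2.1, 1.8, 2.4), 120.0),
    ((1.2, 1.0, 1.1), 40.0),
    ((3.0, 2.7, 3.2), 200.0),
    ((1.8, 1.5, 1.9), 90.0),
]

def signature_score(expression):
    """Summarize the signature genes as their mean expression."""
    return sum(expression) / len(expression)

def pearson(xs, ys):
    """Plain Pearson correlation, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [signature_score(expr) for expr, _ in vaccinees]
titers = [titer for _, titer in vaccinees]
print(pearson(scores, titers) > 0.9)  # the score tracks the response
```

A signature is "predictive" to the extent that such a score, computed days after vaccination, correlates with the response measured weeks later; the real work lies in choosing which genes go into it.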

Do you want to know more?

Thursday, November 20, 2008

Position Open: Group Leader - Bioinformatics/Systems Biology

The Computational Biology Unit (CBU) has been established to conduct top-level European research in bioinformatics, and to serve functional genomics research in Norway with relevant training and services.

CBU is searching for an additional group leader. The group leader will carry out research in the field of computational biology/bioinformatics and contribute to the overall objectives of CBU. The group leader should have a PhD and post-doctoral experience, including a solid publication record in a relevant subject. Candidates will be evaluated with emphasis on their ability to raise external funding and to supervise and carry out research projects. The research profile of the candidate should be within a relevant area for CBU; candidates with profiles in the direction of systems biology will be preferred. The group leader will direct a research group consisting of Ph.D. and post-doctoral scientists.

The CBU and its partners currently have bioinformatics research activities in the fields of protein biophysics, molecular modeling and protein dynamics, transcriptional regulation, microarray and proteomics bioinformatics, and genome assembly and annotation. Activity has been initiated towards integrative bioinformatics and systems biology. There are excellent opportunities for collaboration with the molecular biological and biomedical as well as mathematical and informatics research groups in Bergen.

CBU is part of the Bergen Center for Computational Science (BCCS) and is located together with the Department of Informatics, the Molecular Biology Department, and the SARS Centre for Marine Molecular Biology, a partner of EMBL. BCCS owns and operates large-scale computing facilities that provide an excellent computational environment. CBU is a partner in the Molecular and Computational Biology research school (http://www.mcb.uib.no). CBU coordinates the bioinformatics technology platform for the national functional genomics programme (FUGE) in Norway.

Salary and professional resources are internationally competitive. Please send your CV, your ten most relevant publications, and a detailed statement of research interests to Professor Inge Jonassen (Inge.Jonassen@bccs.uib.no), head of the CBU. Evaluation of applications will commence on January 4, 2009, and continue until a suitable candidate has been identified. For more information about CBU, please refer to http://www.cbu.uib.no/, or contact Inge Jonassen.

Courtesy:
Inge Jonassen, PhD
Department of Informatics and
Computational Biology Unit, BCCS
University of Bergen
HiB
NO-5020 Bergen
Norway




Monday, November 17, 2008

Leukemia Genome Project Highlights Second-Gen Sequencing Software Needs

The first effort to sequence a complete cancer genome has underscored the power of second-generation sequencing while further establishing the lack of a “killer software app” in the field.

In the study, published this week in Nature, a team of 48 scientists at the Genome Center of Washington University and elsewhere sequenced a female patient’s acute myeloid leukemia genome and compared it to the genome of her biopsied skin as well as reference genomes to uncover 10 cancer-associated mutations — eight of which were previously unknown.

The team used two high-throughput sequencing platforms — the Illumina Genome Analyzer and the Roche/454 FLX platform — and software tools such as Maq, Cross_Match, BLAT, and Decision Tree analysis. The team also did its own scripting and algorithm development in the course of the project, Rick Wilson, director of the Genome Sequencing Center at Washington University School of Medicine, said.

The AML sequencing team applied several established software tools and algorithms as well as those developed specifically for the project, underscoring the fact that second-generation sequencing projects are not taking place in a one-pipeline-fits-all world.
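Conceptually, the tumor-versus-normal comparison at the heart of such a study reduces to a set operation: variant calls present in the tumor genome but absent from the matched normal (here, skin) tissue are candidate somatic mutations. The sketch below illustrates only that final filtering idea with made-up variant calls; real pipelines such as the one described also weigh read depth, base quality, and mapping confidence:

```python
# Minimal sketch of tumor-vs-normal somatic variant filtering.
# All positions and alleles below are hypothetical.

def somatic_candidates(tumor_calls, normal_calls):
    """Variants present in the tumor but absent from the matched normal."""
    return sorted(set(tumor_calls) - set(normal_calls))

# Each call: (chromosome, position, reference_base, variant_base)
tumor = {
    ("2", 25457242, "C", "T"),
    ("4", 106196829, "G", "A"),
    ("17", 7578406, "C", "T"),
}
normal = {("17", 7578406, "C", "T")}  # germline variant, shared with skin

for chrom, pos, ref, alt in somatic_candidates(tumor, normal):
    print(f"chr{chrom}:{pos} {ref}>{alt}")
```

The hard part in practice is producing trustworthy call sets in the first place, which is where tools like Maq and the team's custom scripts come in.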



Thursday, October 30, 2008

The Brain Machine Interface

Dr. Justin C. Sanchez, Director of the Neuroprosthetics Research Group and Assistant Professor in the Departments of Pediatrics (Division of Neurology), Neuroscience, and Biomedical Engineering at the University of Florida, discusses technologies that enable direct brain-to-computer interfacing.

I really had no idea that technologies like those Justin has developed existed outside of science fiction. The possibilities are endless and could change everything from computing to flying planes to simply changing the channel…

Do you want to know more? Listen to Dr. Justin Sanchez!


Wednesday, October 29, 2008

XTractor™ on SelectScience.net



Monday, October 27, 2008

Freely mining PubMed for your drug discovery needs every day

Biomedical data mining is a long-standing challenge in scientific research. Scientists are constantly searching for newer and more innovative ways to mine biomedical data.


Wednesday, October 22, 2008

Does your everyday research constantly demand answers to such questions?

Questions like:

Which biological processes are involved in leukemia?
What drugs are associated with ovarian neoplasms?
What diseases are modulated by FRAP?

Ask XTractor. XTractor™ has now turned on its ‘Insta Search’ feature: get a snapshot of the XTractor data from the latest published abstracts for your query.



Monday, October 20, 2008

Complete Genomics Service Targets $1000 Genome by 2009

Complete Genomics emerged from stealth mode today brandishing an audacious service model for wholesale next-generation sequencing, with its first human genome already assembled and the CEO’s pledge to reach the magical “$1000 genome” price point as early as spring 2009.

Based in Mountain View, Calif., Complete Genomics has raised $46 million in three rounds of financing since its incorporation in 2006. Unlike its commercial next-gen sequencing rivals – Roche/454, Illumina, Applied Biosystems (ABI) and Helicos – Complete Genomics will not be selling individual instruments, but rather offer a service aimed initially at big pharma and major genome institutes.

“Our mission is to be the global leader in complete human genome sequencing,” chairman, president and CEO Clifford Reid said in a briefing last week. “We are setting out to completely change the economics of genome sequencing so that we can do diagnostic quality human genome sequencing at a medically affordable price. Essentially, [we’ll] transition this genome sequencing world from a scientific and academic endeavor into a pharmaceutical and medical endeavor.”

Do you want to know more?


Friday, October 17, 2008

International Project Launched to Sequence Human Microbiome, Share Data

In Heidelberg, Germany, today researchers from eight countries and the European Commission announced the formation of a new research enterprise, the International Human Microbiome Consortium (IHMC), which will sequence the genomes of tens of thousands of microorganisms that live in and on the human body and that influence human health.

Initial funding of more than US$200 million is being provided by the U.S. National Human Genome Research Institute (NHGRI) and the European Commission (EC).

Jane Peterson, associate director of extramural research at the NHGRI, said international collaboration is very important in advancing science, and that “the sum is more than the parts.” Participants in the IHMC have agreed in principle to the free and open release of data and resources, and the coordination of research plans, as well as to sharing innovative developments, she reported. Data from microbiome research already being conducted by the NIH Human Microbiome Project and the EC Metagenomics of the Human Intestinal Tract (MetaHIT) project will contribute an initial set of microbial genomes to the IHMC. Because the field is so young – less than three years old – there is much to be gained by collaboration, Peterson said.

Christian Desaintes, from the Research Directorate of the European Commission, said the IHMC’s goal for five years hence is to be sequencing 1000 microbiome genomes from over 1000 individuals’ body parts. The parts in question are the skin, mouth, nasal passages, gastro-intestinal tract, and urogenital tract.

Do you want to know more?


Tuesday, October 14, 2008

XTractor™ creates a user base of over 900 users from 150 premier organizations worldwide

The US FDA, NIH, NCI, MD Anderson, Harvard Medical School, Novartis, Wyeth, AstraZeneca, Vertex, P&G, JNJ, and many more subscribe to XTractor.

“XTractor™ is an intelligence that works so seamlessly that users barely perceive it. It is fascinating how quickly XTractor transforms the most complex scientific facts into structured knowledge, almost instantaneously, by drawing cross-entity relationships across abstracts, thereby aiding quicker decision making throughout the drug discovery process.”

Bangalore, India, October 12, 2008 --(PR.com)-- Indian life sciences informatics company Molecular Connections announced today that XTractor™, its first-of-its-kind scientific literature alert service launched in July 2008, has built a user base of over 900 users from more than 150 premier organizations. XTractor™ has been widely accepted and well received by the scientific fraternity, in both academia and industry, across the globe.


Wednesday, September 24, 2008

HUGO's 13th Human Genome Meeting Hyderabad, India Sat 27-Tue 30 Sep 2008

I am off to HGM2008

HUGO's 13th Human Genome Meeting
Hyderabad, India Sat 27-Tue 30 Sep 2008

Hope to meet many of you there. BioSaga will be back in action from October.


Wednesday, September 17, 2008

The future of biocuration

Curation has been discussed in several past posts, but here is something very critical: to thrive, the field that links biologists and their data urgently needs structure, recognition and support.

Biocuration, the activity of organizing, representing and making biological information accessible to both humans and computers, has become an essential part of biological discovery and biomedical research. But curation increasingly lags behind data generation in funding, development and recognition.

Three urgent actions are needed to advance this key field. First, authors, journals and curators should immediately begin to work together to facilitate the exchange of data between journal publications and databases. Second, in the next five years, curators, researchers and university administrations should develop an accepted recognition structure to facilitate community-based curation efforts. Third, curators, researchers, academic institutions and funding agencies should, in the next ten years, increase the visibility and support of scientific curation as a professional career.

Failure to address these three issues will cause the available curated data to lag farther behind current biological knowledge. Researchers will observe an increasing occurrence of obvious gaps in knowledge. As these gaps expand, resources will become less effective for generating and testing hypotheses, and the usefulness of curated data will be seriously compromised. When all the data produced or published are curated to a high standard and made accessible as soon as they become available, biological research will be conducted in a manner that is quite unlike the way it is done now.

Researchers will be able to process massive amounts of complex data much more quickly. They will garner insight about the areas of their interest rapidly with the help of inference programs. Digesting information and generating hypotheses at the computer screen will be so much faster that researchers will get back to the bench quickly for more experiments. Experiments will be designed with more insight; this increased specificity will cause an exponential growth in knowledge, much as we are experiencing exponential growth in data today.

Also read this

Do you want to know more?


Wednesday, September 10, 2008

Just 400 bucks to sequence your own genome and make a personal genetic profile

Well, it is so exciting to watch how man makes his future ... The future, always so clear..., had become like a black highway at night. We were in uncharted territory now, making up history as we went along. The future is not set, because we control what happens through the choices we make. The GATTACA era is not far off.

23andMe has dramatically slashed the price for its service and expanded its offerings to include a lineage-tracing service through a partnership with Ancestry.com. In a statement today, the company said that by cutting the price of its genotyping service from $999 to $399 it is “democratizing personal genetics and expanding the opportunity for more people to benefit from the genetic revolution.”

The company said advances made to Illumina’s genotyping technology, specifically the introduction of the HumanHap550-Quad+ BeadChip, made the price cut possible. Illumina is the provider of genotyping tools for 23andMe’s services. 23andMe also said that beyond the new ancestry service it has added improved custom content to the BeadChip to include more SNP variations and rare mutations. “By taking advantage of continuing innovation we are able to introduce a new chip that will give people more relevant data at a lower price,” 23andMe Co-founder Anne Wojcicki said in a statement.

In addition to technological advances, there has been speculation from industry observers that the crop of new DTC genomics service providers, such as 23andMe, Navigenics, and DeCode Genetics, may be facing price pressure from an ongoing research initiative undertaken by the Coriell Institute for Medical Research earlier this year. Coriell is trying to recruit 100,000 volunteers — 10,000 by the end of 2009 — to provide DNA through a saliva sample for a similar, but free, service as those being offered by the commercial firms. The Camden, NJ-based institute plans to use the information in a research study exploring the utility of using genomic information in clinical decision making.

The company said the ancestry analysis service it will provide through the Ancestry.com partnership “allows users to trace their genetic lineage and discover the role that their ancestral origins have played in human history.” Ancestry.com’s DNA database contains over 7 billion names in 26,000 databases, and it includes more than 7 million user-submitted family trees, which enables customers to “trace their roots and connect with distant cousins,” 23andMe said.
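The raw output such genotyping services return to customers is essentially a large table of SNP calls, one per BeadChip probe. Here is a minimal sketch of parsing a tab-separated export in the style commonly used for such data (rsid, chromosome, position, genotype); the two sample rows are made up, and real files carry hundreds of thousands of rows plus comment headers:

```python
import csv
import io

# Hypothetical two-row sample in a 23andMe-style tab-separated layout.
raw = """\
# rsid\tchromosome\tposition\tgenotype
rs4477212\t1\t72017\tAA
rs3094315\t1\t742429\tAG
"""

def parse_genotypes(text):
    """Return {rsid: (chromosome, position, genotype)}, skipping '#' comments."""
    calls = {}
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row or row[0].startswith("#"):
            continue
        rsid, chrom, pos, genotype = row
        calls[rsid] = (chrom, int(pos), genotype)
    return calls

genotypes = parse_genotypes(raw)
print(genotypes["rs3094315"])  # ('1', 742429, 'AG')
```

A dictionary keyed by rsid makes it easy to look up the SNPs that published association studies report.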




Tuesday, September 9, 2008

Scientists - Get Networked

Social networking is the latest buzz on the internet. You’ve heard about it, but what does it mean to you as a scientist? Well for one thing, it means that networking has never been easier. Here are six of the best social networking sites for scientists that are designed to help you make and maintain your professional contacts.

1. SciLink is a souped-up networking site that actually knows who many of your contacts will be before you even sign up. Uniquely, SciLink mines literature databases to build a network of professional relationships that you can slot into (and, of course, expand further) when you sign up. You can also find jobs, discussions, news, and more on the site.

2. MyNetResearch is a powerful website for finding collaborators for your project. You set up your own account/profile and build a network of contacts as with the other social networks but MyNetResearch is designed to help you find people who work in the areas you are interested in (or interested in expanding into) and arrange collaborations with them.

3. The Nature Network. As you might expect, this is the grand-daddy of science social networks. Not only can you set up a contact network, but you can also browse niche-specific forums and groups, start your own blog, and much more.

4. LinkedIn is a professional networking site for all professions. Unlike the science-specific networking sites, your LinkedIn contact list can contain contacts who are not scientists, which is useful if you know people outside science too. It also has a more professional atmosphere than Facebook, so people of all ages are more likely to join up.

5. Labmeeting primarily allows you to archive, track, and share your literature. From your account you can search for papers of interest and upload the PDFs for later retrieval. You can also set up streams to keep you informed of the latest publications in your fields of interest, which you can then add to your archive. In addition, you can set up a group area to share papers, talk about your interests, and schedule events such as lab meetings.

6. XTractor. This free service helps you discover new scientific relationships across abstracts. It provides manually curated and annotated sentences for the keywords of your choice and maps the extracted entities (genes, processes, drugs, diseases, etc.) to multiple ontologies. Just play around with the drug, disease, and other entity types and you can actually track a drug or process across various diseases and abstracts :)
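The cross-entity tracking described in item 6 can be sketched as a co-occurrence index over entity-annotated sentences: for each sentence, record which drugs appear alongside which diseases. The annotations below are hypothetical, and a real service maps entities to ontology identifiers rather than bare strings:

```python
from collections import defaultdict

def drug_disease_index(annotated_sentences):
    """Map each drug to the set of diseases it co-occurs with in sentences."""
    index = defaultdict(set)
    for sentence in annotated_sentences:
        for drug in sentence["drugs"]:
            index[drug].update(sentence["diseases"])
    return index

# Hypothetical sentence-level entity annotations.
sentences = [
    {"drugs": ["imatinib"], "diseases": ["chronic myeloid leukemia"]},
    {"drugs": ["imatinib"], "diseases": ["gastrointestinal stromal tumor"]},
    {"drugs": ["aspirin"], "diseases": ["coronary artery disease"]},
]

index = drug_disease_index(sentences)
print(sorted(index["imatinib"]))
```

Swapping the roles of the two entity types gives the reverse view, i.e. all drugs mentioned with a given disease.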

What social networking sites do you use?


