Thursday, December 25, 2008

Bruker Execs to Take Salary Cut in 2009

A few of Bruker’s top executives will take pay cuts in the range of 10 percent to 25 percent in 2009, the firm reported in a filing today with the US Securities and Exchange Commission.

The 2009 salary for Bruker Chairman, President, and CEO Frank Laukien will decrease 25 percent to $318,750 from his 2008 salary of $425,000. CFO and Treasurer William Knight will receive a 2009 salary of $288,000, down 10 percent from his 2008 salary of $320,000. Brian Monahan, corporate controller and EVP of Bruker Daltonics, will receive a 2009 salary of $180,000, down 10 percent from his 2008 salary of $200,000.
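
For the curious, the reported figures are easy to verify. Here is a quick Python sketch (the salaries and percentage cuts are taken from the filing as reported above) checking that each 2009 salary matches the stated cut:

    # Verify the 2009 salaries implied by the stated percentage cuts.
    executives = [
        ("Frank Laukien", 425000, 0.25),
        ("William Knight", 320000, 0.10),
        ("Brian Monahan", 200000, 0.10),
    ]
    for name, salary_2008, cut in executives:
        salary_2009 = salary_2008 * (1 - cut)
        print("%s: $%s" % (name, format(salary_2009, ",.0f")))
    # Prints $318,750, $288,000, and $180,000 -- matching the filing.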

The firm said in the filing that the salary reductions were temporary, but it did not provide further details. At the end of October, Bruker reported flat third-quarter revenues. During its Q3 conference call, Laukien noted that beginning this past summer the firm had begun taking steps to cut its operating and interest expenses and to reduce its exposure to currency fluctuations.

“We expect that our cost-cutting initiatives will already have noticeable positive effects in the fourth quarter of 2008 and first quarter of 2009, and that by the middle of 2009 we will see annualized reductions in our overall costs of greater than $12 million,” he said.

Bruker’s shares closed down 1 percent at $4.21 in an abbreviated trading session on the Nasdaq. While stocks across the board are down due to the current economic climate, Bruker’s shares have been hit particularly hard this year, falling 68 percent since the beginning of 2008.


Be part of the XTractor community.

  • XTractor, the first literature alert service of its kind, provides manually curated and annotated sentences for the keywords of your choice
  • XTractor maps extracted entities (genes, processes, drugs, diseases, etc.) to multiple ontologies
  • Enables customized report generation: sentences are categorized into biologically significant relationships
  • Categorized sentences can then be tagged and shared across multiple users
  • Lets each user build a personal database for a chosen set of key terms
  • Keywords can be changed from time to time as research needs evolve
  • XTractor is thus a platform for highly accurate, real-time data, with the ability to share and collaborate

Signing up is free and takes less than a minute. Just click here: www.xtractor.in.

Monday, December 22, 2008

Google Shutters Its Science Data Service

The Google Datasets Project Comes to an End

Google will shutter its highly anticipated scientific data service in January without ever officially launching the product, the company said in an e-mail to its beta testers.

Once nicknamed Palimpsest but more recently going by the staid name Google Research Datasets, the service was going to offer scientists a way to store the massive amounts of data generated in an increasing number of fields. About 30 datasets (mostly tests) had already been uploaded to the site.

The dream appears to have fallen prey to belt-tightening at Silicon Valley's most innovative company.

"As you know, Google is a company that promotes experimentation with innovative new products and services. At the same time, we have to carefully balance that with ensuring that our resources are used in the most effective possible way to bring maximum value to our users," wrote Robert Tansley of Google on behalf of the Google Research Datasets team to its internal testers.

"It has been a difficult decision, but we have decided not to continue work on Google Research Datasets, but to instead focus our efforts on other activities such as Google Scholar, our Research Programs, and publishing papers about research here at Google," he wrote.

Axing this scientific project could be another sign of incipient frugality at Google. Just a couple of weeks ago, Google CEO Eric Schmidt told the Wall Street Journal that his company would be cutting back on experimental projects.

First described in detail by Google engineer Jon Trowbridge at SciFoo 2007 (the slides from a later version of the talk are archived on the Partial Immortalization blog), the project was going to store, for free, some of the world's largest scientific datasets. In his slides, Trowbridge points to the 120-terabyte Hubble Legacy Archive and the one-terabyte Archimedes palimpsest.

"It's a sad story if it's true," wrote Attila Csordas, a stem cell biologist and author of Partial Immortalization who recently moved to Hungary from Tulane University, in an email to Wired.com. "Assuming it is true, that might mean that Google is still a couple of years away from directly helping the life sciences (on an infrastructural level)."

Other scientists remained hopeful that the service might return in better times.

"The Space Telescope Science Institute has had a long positive relationship with Google that started with our partnership in GoogleSky in early 2006," said astrophysicist Alberto Conti of STScI. "We were looking forward to Google's commitment to helping the astronomical community with the data deluge, and we are sure Google will reconsider this decision in the future. While perhaps understandable in this economic climate, it's sad to see Google leave the field."

And, Conti noted, other companies may step up to help scientists manage their information.

"Amazon is doing exactly the opposite and they might actually fill the void," he said.

Google representatives did not immediately respond to a request for comment.



Happy Birthday XML

This comes as a tribute to XML on its one-decade milestone!

As anyone with kids—or a good memory—knows, when you cross the "double digits" birthday threshold, it’s a big deal. This year, XML crossed this threshold on Feb. 10, and this got me thinking about questions that I might ask this 10-year-old in order to gain perspective on its past and future. I know I’m late, but XML is nothing if not flexible. It assured me that even a belated party is better than none, especially if I invited Alexander Falk, founder and CEO of Altova (its flagship product XML Spy is one of our favorites) and a real XML aficionado (www.xmlaficionado.com).

I began with this: "XML, you’re much more famous than your parent SGML, and your sibling HTML 4.01 was deprecated in favor of an XML standard, XHTML. Techies worldwide have heard about you, and mighty standards battles have been waged over you. However, while growing is easy for the first 10 years, soon you’ll have to be able to point to practical accomplishments. What have you been up to lately?"

Confident as any precocious 10-year-old, XML replied: "First, I’ve inspired more than 38 core recommendations, everything from Canonical XML 1.1 to XQuery to XSL Transformations (XSLT), and many others. And, of course, the list of standards built on XML is enormous, including SOAP, DITA, and XBRL. And don’t forget: The World Wide Web Consortium fosters only the development of basic XML standards. Other organizations such as OASIS and XBRL International have built many practical XML applications on top of the core recommendations. I’d say I’m off to a pretty good start."
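
To make the separation of content from presentation concrete, here is a minimal sketch of an XSLT transformation using the third-party Python library lxml; the document and stylesheet are invented purely for illustration:

    from lxml import etree  # third-party library: lxml

    # A tiny invented XML document: pure content, no presentation.
    doc = etree.XML('<genes><gene symbol="TP53"/><gene symbol="BRCA1"/></genes>')

    # An XSLT stylesheet that renders the same content as an XHTML list.
    stylesheet = etree.XML("""
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/genes">
        <ul><xsl:apply-templates/></ul>
      </xsl:template>
      <xsl:template match="gene">
        <li><xsl:value-of select="@symbol"/></li>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(stylesheet)
    print(transform(doc))  # -> <ul><li>TP53</li><li>BRCA1</li></ul>

The same content could be run through a different stylesheet to produce, say, a printable report: the content never changes, only the presentation.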

I then pointed out that one of XML’s siblings may be making a comeback as HTML 5. Given the earlier deprecation, one wonders why we would need a new version of HTML. One reason, according to the W3C, is that "new elements are introduced based on research into prevailing authoring practices." This sounds a lot like backsliding. XML countered: "Well, you know about sibling rivalry. And HTML 5 may be a long time in getting approved, if it ever is. And I might point out that even its authors admit it isn’t a complete replacement for XHTML. You see, I’m so flexible that I can do just about anything." Falk, in defense of XML, said, "I’m afraid the reality is that a lot of HTML is still created by hand. Tools (such as Dreamweaver) have been very slow to enforce XHTML compliance, and people continue to generate sloppy HTML pages." Without wanting to spoil the party, I noted that, as ever, practicality triumphs: if sloppy webpages work, standards take a back seat.
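
The practical difference is easy to demonstrate: an XML parser accepts well-formed XHTML but chokes on the sloppy HTML that browsers happily render. A minimal sketch using only Python's standard library (the snippets are invented examples):

    import xml.etree.ElementTree as ET

    xhtml = "<p>A <b>well-formed</b> paragraph.</p>"  # every tag closed and nested
    sloppy = "<p>An unclosed paragraph<br>"           # browsers render this anyway

    for snippet in (xhtml, sloppy):
        try:
            ET.fromstring(snippet)
            print("parses as XML:", snippet)
        except ET.ParseError as err:
            print("rejected by the XML parser:", err)

Because browsers forgive the second snippet, there is little pressure on page authors to clean it up, which is exactly Falk's point.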

I agreed about the need for practicality and moved on to my next topic: Microsoft Office’s OOXML for Word. I see this as essentially a replacement for Microsoft’s proprietary "rich text format," a way of displaying text on pages. Since XML has the separation of look-and-feel from content in its DNA, I asked: "Isn’t OOXML for Word’s use of XML pretty superficial, since you can’t really do much more with it than you could with rich text formats?" Sensing a rhetorical trap, XML replied, "Yes, I personally prefer OpenOffice’s ODF standard, since it takes better advantage of other XML standards. But you can’t argue with Microsoft Office’s success, and OOXML may become an ISO standard (ISO/IEC 29500) just as ODF is. Not only that, but this change forces Microsoft to document and manage future changes in an open way." That was true (XML seems wise beyond its 10 years), but Microsoft won’t support that standard until Office 2014, at least another six years away.
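
One concrete benefit of the XML-based format, superficial or not, is that a .docx file is just a ZIP archive of XML parts that any tool can inspect. A minimal sketch with Python's standard library (assuming a local file named example.docx, a hypothetical placeholder):

    import zipfile
    import xml.etree.ElementTree as ET

    # An OOXML word-processing file is a ZIP archive; the main body of the
    # document lives in the part named word/document.xml.
    with zipfile.ZipFile("example.docx") as archive:  # placeholder file name
        body = archive.read("word/document.xml")

    # Text runs are held in <w:t> elements in the WordprocessingML namespace.
    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"
    for t in ET.fromstring(body).iter(W + "t"):
        print(t.text)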

However, since this was a party, I moved on to my last, least contentious question: "What do you think will be the biggest surprise use of your standard in the next 10 years, XML?" I was amazed to hear the response: "XBRL! This will transform public corporations and the financial services industry, affecting investors, processes, and data. It will level the investment playing field." I was impressed to hear XML’s familiarity with capital markets and investing. I was even more surprised when Falk agreed: "Altova is working on support for XBRL in the next major software release, v2009, and plans to have XBRL-specific features, including XBRL validation, taxonomy editing, and data mapping." This statement from a longtime XML vendor is a serious commitment indeed, and it can only support widespread acceptance of XBRL.

Differences of opinions aside, XML’s next 10 years look bright indeed.

By Robert J. Boeri - December 2008 Issue, Posted Dec 01, 2008


Wednesday, December 17, 2008

Invitrogen’s Acquisition of U of Houston Startup VisiGen Pays Dividends for School

VisiGen Biotechnologies, a University of Houston gene-sequencing outfit acquired by Invitrogen in October for $20 million, represents the university’s biggest return on a spinout to date, and could eventually become one of its biggest overall tech-transfer wins, the school said last week.

As a result of VisiGen’s acquisition, UH, which held an undisclosed equity stake in the startup, will receive nearly $500,000 from the initial installment of the deal. Plans for follow-on installments were not disclosed. An indirect benefit comes from the fact that approximately half of VisiGen’s current employees are UH graduates, the school said.

Several of the scientists who founded the company will continue to research second-generation sequencing techniques in their UH laboratories. Whether this research eventually sparks additional collaborations between VisiGen and the university is now up to Life Technologies, the biotech tools giant that resulted from Invitrogen’s multi-billion-dollar merger with Applied Biosystems last month.



Wednesday, December 10, 2008

Web 2.0 and Semantic Web for Bioinformatics

Here is a hand-picked item, one of my favorites among the blog posts I came across recently. This article reflects on many of the Web 2.0 themes that I have appreciated in recent times and blogged about in the past.

Why should a (bioinformatics) scientist learn web development?

Until now, bioinformatics research with genomics datasets has worked roughly like this: you download the data from the website of a big-iron institution (NCBI, TAIR), set it up locally, BLAST it, load it into MySQL, parse it with Perl scripts, and do all sorts of other unimaginable things. Even though bioinformaticians might be unaware of the term, part of the local processing that happens with the data is a mashup. The term refers to the combination of pieces of data from different sources, akin to what has been happening on the web (see also Web 2.0 or the programmable web). This is nowhere near the myriad Web 2.0 mashups that exist out there, created using APIs offered openly by different servers. In those, different sets of data are brought together by the mashup developer, who adds value to them through their recombination (and reciprocally adds value to the providing server, by spreading its data around and offering a better view of it).
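
As a concrete illustration, here is a minimal Python sketch of such a mashup against NCBI's public E-utilities (real endpoints; the gene query is an arbitrary example), combining hits from one service with summaries from another:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def fetch_xml(tool, **params):
        """Call an E-utilities endpoint and parse the XML response."""
        url = "%s/%s.fcgi?%s" % (EUTILS, tool, urllib.parse.urlencode(params))
        with urllib.request.urlopen(url) as response:
            return ET.fromstring(response.read())

    # Source 1: search the Gene database for an arbitrary example query.
    search = fetch_xml("esearch", db="gene", term="BRCA1[sym] AND human[orgn]")
    ids = [e.text for e in search.iter("Id")]

    # Source 2: mash the hits up with summaries from a second service.
    summaries = fetch_xml("esummary", db="gene", id=",".join(ids[:3]), version="2.0")
    for docsum in summaries.iter("DocumentSummary"):
        print(docsum.get("uid"), docsum.findtext("Description"))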

While the big-iron bioinformatics institutions don’t quite live in a parallel universe from Web 2.0 (we have to credit the NCBI server for its CGI interface), they are light years away from the programmable web. That is both because of the technologies they are using (forget about Ruby on Rails and REST) and because of the small number of institutions like NCBI that offer APIs at all.

So why should a (bioinformatics) scientist learn web development? Because the situation I describe above will change. These bioinformatics institutions will adopt Web 2.0 at some point in the next few years: I will bet you now that, in maybe five years, we will have an NCBI running a nice REST API backed by Rails or Django. But it might happen even earlier, when people take things into their own hands. For that, I refer you to Amazon Web Services, where bioinformaticians could build their own NCBI running on Rails and sell it to other Web 2.0-minded scientists who understand the (added) value of an interoperable web of data.
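
To picture what such a service might look like, here is a minimal, hypothetical REST endpoint using only Python's standard library; the route and payload are invented, not any real NCBI API:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A toy in-memory "database"; a real service would sit on top of the
    # institution's actual datasets.
    GENES = {"BRCA1": {"organism": "Homo sapiens", "chromosome": "17"}}

    class GeneAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            # Hypothetical REST route: GET /genes/<symbol>
            parts = self.path.strip("/").split("/")
            record = None
            if len(parts) == 2 and parts[0] == "genes":
                record = GENES.get(parts[1])
            self.send_response(200 if record else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(record or {"error": "not found"}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), GeneAPI).serve_forever()

A client (or another mashup) would then simply GET http://localhost:8000/genes/BRCA1 and receive JSON, with no screen-scraping or FTP dumps required.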


Tuesday, December 9, 2008

BioInformatics National Certification (BINC) Examination

The University of Pune (UoP), on behalf of the Department of Biotechnology (DBT), Government of India, will conduct the BioInformatics National Certification (BINC) examination. The objective of this examination is to certify bioinformatics professionals, whether formally trained or self-trained.

Eligibility

Graduates in Science, Agriculture, Veterinary Science, Medicine, Pharmacy, or Engineering & Technology are eligible to appear for the examination. They need not have any formal training (certificate, diploma, or degree) in bioinformatics. Students in the final year of a Bachelor’s degree are also eligible to apply.

Application and syllabus

Online application begins on 1 December 2008 and continues until 16 January 2009. The examination fee is Rs 600 for the general category, Rs 450 for reserved categories, and US$100 for foreign students. Please visit the website (bioinfo.ernet.in/binc) for detailed information. The syllabus consists of four sections: biology, physical and chemical sciences, IT, and bioinformatics.

Examination

The examination is scheduled for 21–22 February 2009 and will be in three parts. Paper I will be objective type; only those who pass this paper with a minimum of 40% marks will be eligible to appear for Papers II and III. Paper II will be short-answer type, while Paper III will be a computer-based practical. Certification will be awarded to those who secure a minimum of 40% in all three papers.

Research fellowships will be awarded to 15 BINC-qualified Indian nationals to pursue a Ph.D. at Indian institutes or universities. Note that a candidate must possess a postgraduate degree and meet the criteria of the institutes/universities in order to avail of a research fellowship. In addition, a cash prize of Rs 10,000 will be awarded to the top 10 BINC qualifiers.

For details, refer to the website: bioinfo.ernet.in/binc/

Wednesday, December 3, 2008

India Fights Back - WE NEED ACTION

Let us force the international community to persuade Pakistan to declare that its nuclear arsenals are no longer under state control and may at any time fall into the hands of the terror outfits camped in POK, and also to declare the regions infested by terrorist training camps to be federally uncontrollable regions.

PAK has to pay for this... The babus are answerable!

We will not forget... WE NEED ACTION... The email campaign...

Mumbai: We Will Not be Divided. Sign the petition

'Rebuild India' Mission - Article Repository