DropDownList Inside the Custom EditTemplate        
Hello Edwin,

You can review the Grid popup editing example implementation with Entity Framework and LocalDB in the Kendo UI for ASP.NET MVC Sample Application. You can find more detailed information about how to set up and run the application locally at the link below:


The source for the sample application can be found in the installation directory of Progress Telerik UI for ASP.NET MVC by navigating to \wrappers\aspnetmvc\Examples\VS2015\Kendo.Mvc.Examples.sln.
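
In case you cannot open the sample solution right away, here is a minimal sketch of the general pattern it demonstrates: a grid with popup editing whose category column uses a DropDownList as a custom editor. The sketch uses the jQuery edition of Kendo UI rather than the MVC wrappers, and the endpoint URL, field names, and category list are illustrative placeholders, not code from the sample application.

```typescript
// Minimal sketch (jQuery edition of Kendo UI): a Grid with popup editing and a
// DropDownList as the custom editor for one column. The endpoint, field names,
// and category list below are hypothetical placeholders.
declare const $: any; // assumes jQuery and kendo.all.js are loaded on the page

// Hypothetical lookup data for the drop-down editor.
const categories = [
  { CategoryID: 1, CategoryName: "Beverages" },
  { CategoryID: 2, CategoryName: "Condiments" },
];

// Custom editor: renders a Kendo DropDownList instead of the default input.
function categoryEditor(container: any, options: any): void {
  $('<input required name="' + options.field + '"/>')
    .appendTo(container)
    .kendoDropDownList({
      dataTextField: "CategoryName",
      dataValueField: "CategoryID",
      dataSource: categories,
    });
}

$("#grid").kendoGrid({
  dataSource: {
    transport: { read: "/Products/Read" }, // placeholder endpoint
    schema: {
      model: {
        id: "ProductID",
        fields: { ProductName: { type: "string" }, CategoryID: { type: "number" } },
      },
    },
  },
  editable: "popup", // popup editing, as in the sample application
  toolbar: ["create"],
  columns: [
    { field: "ProductName", title: "Product" },
    { field: "CategoryID", title: "Category", editor: categoryEditor },
    { command: ["edit", "destroy"] },
  ],
});
```

In the MVC sample referenced above, the same idea is typically expressed with a Razor editor template and the server-side DropDownList helper, so treat this sketch only as an outline of the moving parts rather than the sample's exact code.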

Regards,
Dimitar
Progress Telerik
Try our brand new, jQuery-free Angular 2 components, built from the ground up to deliver the essential building blocks for business apps: a grid component, data visualization (charts), and form elements.

          Towards open and transparent data-driven storytelling: Notes from my Alan Turing Institute talk        

As mentioned in an earlier blog post, I've been spending some time at the Alan Turing Institute recently working on The Gamma project. The goal is to make data visualizations on the web more transparent. When you see a visualization online, you should be able to see where the data comes from, how it has been transformed and check that it is not misleading, but you should also be able to modify it and visualize other aspects of the data that interest you.

I gave a talk about my work as part of a talk series at The Alan Turing Institute, which has been recorded and is now available on YouTube. If you prefer to watch talks, this is a good 45-minute overview of what I've been working on, with great video quality (the video switches from camera view to screen capture for demos!).

If you prefer text or do not have 45 minutes to watch the talk (right now), I wrote a short summary that highlights the most important ideas from the talk. You can also check out the talk slides, although I'll include the most important ones here.


          The Gamma dataviz package now available!        

There were a lot of rumors recently about the death of facts and even the death of statistics. I believe the core of the problem is that working with facts is quite tedious and the results are often not particularly exciting. Social media made it extremely easy to share your own opinions in an engaging way, but what we are missing is a similarly easy and engaging way to share facts backed by data.

This is, in essence, the motivation for The Gamma project that I've been working on recently. After several experiments, including the visualization of Olympic medalists, I'm now happy to share the first reusable component based on the work that you can try and use in your data visualization projects. If you want to get started:

The package implements a simple scripting language that anyone can use for writing simple data aggregation and data exploration scripts. The tooling for the scripting language makes it easy to create new data analyses and modify existing ones. Editor auto-complete offers all available operations, and a spreadsheet-inspired editor lets you create scripts without writing code; yet you still get a transparent and reproducible script as the result.


          The Gamma — Visualizing Olympic Medalists        

The Olympic Games are a perfect opportunity to do a fun data visualization project - just like New Year, you can easily predict when they will happen and you can get something interesting ready in advance. I used this year's Games in Rio as a motivation to resume working on The Gamma project. If you did not read my previous article, the idea is to build tooling for open, reproducible and interactive data-driven storytelling. When you see a visualization, not only should you be able to see how it has been created (what data it uses and how), but you should also be able to modify it, without much programming experience, and look at other interesting aspects of the data.

The first version of The Gamma project tries to do some of this, using historical and current data on Olympic medals as a sample dataset. You can play with the project at The Gamma web site:

Without further ado, here are the most important links if you want to explore how it all works on your own. However, continue reading and I'll describe the most important parts!

The project is still in early stages and the code needs more documentation (and ReadMe files, I know!) However, if you would be interested in using it for something or you have some interesting data to visualize, do not hesitate to ping me at @tomaspetricek. Also, thanks to the DNI Innovation Fund for funding the project and to the Alan Turing Institute for providing a place to work on this over the coming months!


          What are the top 10 values that make us Australian?        
Ever wondered what values make us uniquely Australian – and the differences amongst various age groups or locations? Check out the results of our National Value Assessment, done by Galaxy Poll in February 2016 for the Australian Futures Project. Data visualisation courtesy of News Corp Australia. View the results on news.com.au
          Daily Must Reads, October 11, 2012        

The best stories across the web on media and technology, curated by Lily Leung. 1. Mayer to unveil new company goals for Yahoo! (AllThingsD) 2. The profile of a typical Twitter user (GigaOm) 3. How data visualizations can have agendas (CJR) 4. Do you suffer from information over-consumption? (AppNewser) 5. Tips for journalists on how to […]



          Map of the Day – The Cost of Living Everywhere        
Today’s Map of the Day (MOTD) is a clever data visualization that tells the story of the cost of living. Some interesting trends appear upon first glance at the map, including: Canada costs more than the USA, Greenland is very high, and some of the least costly places to live are found in India, Mexico, […]
          The U.S. Government Pavilion at the SelectUSA Investment Summit – Federal Investment Resources at Your Fingertips        
Photo from the U.S. Government Pavilion at the 2016 SelectUSA Investment Summit.

The following is a cross-post from Tradeology, the official blog of the International Trade Administration (ITA)

By Andrew Owusu, Intern, SelectUSA

The 2017 SelectUSA Investment Summit provides a platform to communicate economic priorities and affirm the United States as the number one destination in the world for foreign direct investment.  Looking for U.S. economic data and analysis? The U.S. Government (USG) Pavilion in the Exhibition Hall at the Summit will feature representatives from federal agencies that can help participants find and understand U.S. economic indicators. They include:

  • The Economics and Statistics Administration (ESA), which performs high-quality economic analysis, disseminates national economic indicators, and fosters the mission of the U.S. Census Bureau (Census), the Bureau of Economic Analysis (BEA), and the Office of the Chief Economist (OCE). OCE created the Assess Costs Everywhere (ACE) tool for businesses to evaluate locating in the U.S.;
  • The Bureau of Economic Analysis (BEA) is a trusted and impartial source of data and statistics on the U.S. economy. BEA data is comprehensive and available for free to all;
  • The U.S. Census Bureau is the leading source of data about the nation’s people and economy, providing a wealth of demographic and economic data that can help potential businesses make informed investment decisions.

Looking for data specifically on FDI?  SelectUSA Stats is a public, free online data visualization tool that allows you to compare data on foreign direct investment into the United States. The tool features interactive dashboards that display graphical data. The data, from sources like BEA, includes useful categories such as greenfield investment by country, stock and flow, impacts on U.S. employment, exports, research and development; FDI trends by industry, country, and state. With a choice of multiple data agencies, attendees have numerous opportunities to pick up practical tools and strategies from expert practitioners in a variety of investment-related fields.

Officials will also be standing by to provide information on workforce and training programs, U.S. exports, supply chain, research and innovation, U.S. visas, and the Global Entry program. Below is a list of all of the federal agencies, along with their booth numbers, that are participating in the USG Pavilion.

GP1: U.S. Department of State

GP2: U.S. Citizenship and Immigration Services

GP3: U.S. Customs and Border Protection

GP6: U.S. Department of Commerce Rural Team

GP7-8: SelectUSA Stats

GP9: International Trade Administration, Industry and Analysis

GP10: Export Assistance Programs

GP11: U.S. Department of Agriculture, Rural Development

GP12: U.S. Department of Agriculture, Foreign Agricultural Service

GP13: U.S. Economic Development Administration 

GP14: Minority Business Development Agency

GP15: Bureau of Industry and Security

GP16: The Office of the U.S. Trade Representative

GP17:  U.S. Environmental Protection Agency

GP18: Export-Import Bank of the United States

GP19: U.S. Department of Transportation

GP20: U.S. Small Business Administration 

GP21: UNICOR

GP22: U.S. Department of Veterans Affairs

GP23-24: U.S. Department of Labor

GP25-26: Bureau of Economic Analysis

GP27: Economics and Statistics Administration

GP28: U.S. Census Bureau

GP29: National Institute of Standards and Technology, Manufacturing Extension Partnership

GP30: National Network for Manufacturing Innovation, Manufacturing USA

GP31-32: The U.S. Patent and Trademark Office

A number of U.S. government representatives will also be participating in the matchmaking system, so interested attendees will be able to request meetings and learn more about the various resources that the agencies offer.

There is still time to register! Head over to the Summit website to apply for registration to the top FDI event in the United States. If you are unable to attend but would like information on SelectUSA and our services, please visit our website or contact us.


          The Future of AI: Redefining How We Imagine        

FICO 25 years of AI and machine learning logo

To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Sadat Nazrul, an analytic scientist at FICO, working in San Diego.

Looking at the next 20 years, I see us moving well beyond the productivity enhancements AI has brought about so far. With the advent of AI, we will be seeing a renaissance in our own personal lives as well as society as a whole.

Healthcare

Today, our gadgets have the ability to monitor the number of steps we take, our heart rate, and even the contents of our sweat. All this rich information allows a team of doctors, engineers and analysts to monitor our well-being and to maintain our peak performance. Similarly, with innovations in genomic sequencing and neural mapping passing FDA trials, we will soon be seeing huge leaps in the field of personalized medicine. AI will help us understand individual physiological needs in order to come up with customized prescriptions and improve our overall health standards.

Cognitive Abilities

People are keen to improve cognition. Who wouldn’t want to remember names and faces better, to be able more quickly to grasp difficult abstract ideas, and to be able to “see connections” better? Who would seriously object to being able to appreciate music at a deeper level?

The value of optimal cognitive functioning is so obvious that to elaborate the point may be unnecessary. Today we express ourselves through art, movies, music, blogs, and a wide range of social media applications. In the field of image recognition, AI can already “see” better than we can by observing far more than RGB. Virtual Reality allows us to feel as though we have teleported to another world. New idioms of data visualization and dimensionality reduction algorithms are always being produced for us to better experience the world around us.

We are constantly trying to enhance our 5 senses to go beyond our human limits. 10 years from now, these innovations, coupled with IoT gadgets, will act as extensions of who we are and help us experience our surroundings more profoundly.

Emotional Intelligence

Just as we enhance our 5 cognitive senses, so too do we enhance our ability to express ourselves and to understand those around us.

Many times, we don’t even know what we want. We strive to connect with those around us in a specific way or consume a particular product, just so we could feel a very unique emotion that we fail to describe. We feel much more than just happiness, sadness, anger, anxiety or fear. Our emotions are a complex combination of all of the above.

With the innovations in neural mapping, we will better understand who we are as human beings and better understand the myriad emotional states that we can attain. Our complex emotional modes will be better understood as we perform unsupervised learning on brain waves and help find innovative ways to improve our emotional intelligence. This would include both understanding our own emotions and being more sensitive towards those around us.

Perhaps we can unlock new emotions that we have never experienced before. In the right hands, AI can act as our extensions to help us form meaningful bonds with the people we value in our lives.

Experience and Imagination

The effect of AI on our experience and imagination would result from an aggregate of better cognitive abilities and emotional intelligence. The latest innovations in AI may help us unlock new cognitive experiences and emotional states.

Let’s imagine that the modes of experience we have today are represented in space X. 10 years from now, let’s say that the modes of experience are represented in space Y. The space Y will be significantly bigger than space X. This futuristic space of Y may have access to new types of emotions other than our conventional happy, sad and mad. This new space of Y can even allow us to comprehend abstract thoughts that reflect what we wish to express more accurately.

This new space of Y can actually unlock a new world of possibilities that lies beyond our current imagination. The people of the future will think, feel and experience the world to a much richer degree than we can today.

Communication

10 years ago, most of our communication was restricted to phones and emails. Today, we have access to video conferences, Virtual Reality and a wide array of applications on social media. As we enhance our cognitive abilities and emotional intelligence, we can express ourselves through idioms of far greater resolution and lower levels of abstraction.

We already have students from the University of Florida achieving control of drones using nothing but the mind. We even have access to vibrating gaming consoles that take advantage of our sense of touch for making that Mario Kart game that much more realistic. 10 years from now, the way we communicate with each other will be much deeper and more expressive than today. If we are hopeful enough, we might even catch a glimpse of the holograms of Star Wars and telepathic communications of X-Men.

The Virtual Reality of today engages only our vision and sense of hearing. In the future, Virtual Reality might actually allow us to smell, taste and touch our virtual environment. Along with access to our 5 senses, our emotional reaction to certain situations might be fine-tuned and optimized with the power of AI. This might mean sharing the fear of the main characters in Paranormal Activity, feeling the heartbreak of Emma or being excited about the adventures of Pokemon. All this can be possible as we explore the theatrical arts of smart Virtual Reality consoles.

Information Security

AI allows us to unearth more unstructured data at higher velocity to generate valuable insight. However, the risk of that very sensitive data falling into the wrong hands will also escalate.

Today, cybersecurity is a major concern on everyone’s mind. In fact, 2017 is the year the fingerprint got hacked. With the help of AI, information technology will get more sophisticated in order to protect the things we value in our lives.

It is human nature to want to go beyond our limits and become something much more. Everyone wants to live longer healthy lives, experience more vividly and feel more deeply. Technology is simply a means to achieve that end.

See other FICO posts on artificial intelligence.



          When Drupal Met CARTO        

Drupal 8 is a powerful and customizable CMS.

It provides a lot of different tools to add, store, and visualize data. However, spatial data visualization is a sophisticated and complicated topic, and Drupal hasn't always been the best option for handling it.


          KR343 Autorisiert        
DocPhil and Onkel Andi talk about the newspaper crisis, interview techniques, interview authorizations, the crisis of institutions, and vertical farming.
Show notes: David McCandless: The beauty of data visualization (Facebook graph at 6'20); Sascha Lobo on institutions in crisis; Global Soil Week 2012; Efficient City Farming; TED talk by Britta Riley: A garden in my apartment; Britta Riley on dctp.tv; "Elektrischer Reporter" on social networks in companies.
          Making the Data Visible: Ramping Up Library Reporting with LibraryTrac        
Last fall, I was desperate for a robust library attendance management system. As soon as I appealed to my friends on Twitter for help, fellow school librarian Margaux DelGuidice immediately responded and recommended we try LibraryTrac. I was immediately sold and signed up right away. Since November 2015, many of you have heard me sing […]
           Data visualisation and statistical analysis within the decision making process         
Mahoney, Jamie (2013) Data visualisation and statistical analysis within the decision making process. In: 4th International Conference on Information Visualization Theory and Applications, 21st - 24th February, 2013, Barcelona.
          What Hamilton and Data Visualizations Have in Common        

Introducing the cast of Hamilton at the Tony Awards, President Barack Obama referred to the award-winning musical not only as "a smash hit, but a civics lesson our kids can't get enough of … where rap is the language of revolution, and hip hop its urgent soundtrack."

Laura McGorman, Director of Client Solutions, Commerce Data Service

Based on a book written by Ron Chernow, Hamilton: The Musical sparked a surge in sales of books about our founding father. Following the Tony Awards, Hamilton's biography jumped to #8 on USA Today's best-seller list, its highest ranking ever. Presented with a potent dose of history-infused hip-hop, America's public experienced a renewed interest in the story of our nation's birth.

Meanwhile, more quietly, the U.S. Department of Commerce, my agency, also has been working to make important data come to life. Commerce has challenged digital innovators across the country to take our vast terabytes of public data on business, jobs, trade, weather, and demographics, and make it easier for the public to use.

Tableau and Enigma, an operational data management and intelligence provider, participated in this challenge and put their software to work on Commerce data about county business patterns, construction spending, home sales, and export laws. Together, these companies built visualizations and dashboards that job seekers, nonprofits, and small businesses can leverage to better understand their local economy, power their job search, and more strategically run their operations.

Using dashboards like the one below, people can explore this data and ask their own questions, like: How many total businesses are in the Seattle metro area? (Answer: about 99,000) And how does that compare to Los Angeles? (nearly 250,000 more). Public Commerce data, once only accessible to people with a software or analytics background, is now actionable for anyone.

This visualization of Department of Commerce data shows the footprint of businesses by metro regions.

Plato said the best way to train our children was through sharing information in ways that "amuses their mind" such that we can understand "the peculiar bent of the genius of each."

Data visualization tools and revolution-based musicals might seem worlds apart, but for me they strike a similar chord. Both transform public information via mechanisms through which audiences might learn best, whether it's hip-hop to understand history or interactive maps to explore our local communities. Both advance the goal of equitable public education, knowledge and common understanding of our nation. And by making facts not just available but engaging, both support the particular genius of each citizen. I am honored to be part of this work and look forward to seeing what's next from our diverse family of educators.

Laura McGorman is the Director of Client Solutions for the Commerce Data Service at the US Department of Commerce.


          Comment on Iouri Podladtchikov Lands Yolo flip at X Games Tignes by Task 1: Data visualisation | art, design and primary school teaching        
[…] McIntern, I. (2013, March 22). Iouri Podladtchikov Lands Yolo flip at X Games Tignes. Retrieved from https://onboardmag.com/snowboarding-events/iouri-podladtchikov-lands-yolo-flip-at-x-games-tignes.htm… […]
          Part Deux: The feedback cometh. Phil Sheperd.        
In answer to: The Evolution of the Visual Mapper (Part Deux)?


Phil Sheperd, the founding director of Gooisoft Ltd and developer of the most intriguing Thortspace, a 3D visual thinking tool, gives his knowledgeable feedback to the questions posed by Visual Mapper.

One of the refreshing things about your approach, Wallace, is that you are clearly open to true innovation and not restricted by any form of preconceived ideas.



Mind-mapping works brilliantly in all its forms and in all its iterations for the purposes it is built for, and I am sure that this is why, over the last thirty years or so, mind-mapping in general has gathered such an enthusiastic following amongst senior managers (just look at the superb Biggerplate annual survey to see who actually uses mind-mapping in the work environment).



But mind mapping is only part of the story... In the original report from 2010, Nick Duffill of Harport Consulting says: "Visual mapping includes but is not limited to mind, concept, flow and argument mapping. Of course there are more tools included; but for the sake of argument these tools adequately cover the graphical capabilities of Visual mapping. Visual mapping may be a useful term to bridge the gap and emphasize the common goal of both mind maps and other data visualization formats."

As one or two of your readers may know, my colleagues and I have been quietly building a collaborative 3D thought-processing tool specifically for problem solving, where the 'juice' is in the process rather than in a finished map. The main influences have not been mind-mapping or knowledge mapping but philosophers, modern psychologists and gaming-quality graphics-card capabilities. So, although we are avid observers of all things graphical-thinking related, we could be perceived as working outside the mapping genre, and I'm not sure we qualify to answer your questions (but I'll have a go anyway!).



Do you believe the Visual/Knowledge mapping arena has experienced a measurable advancement since the publication of the original article?


If yes: what do you believe have been the most relevant advancements?
1. Five and a half years is a very long time in the technology world so I would have expected quite dramatic advances. As Moore's Law has continued to prevail, there have been massive changes in hardware and computing power and big reductions in cost of access during this time. I don't yet see the world of Visual/Knowledge mapping having advanced at the same pace over this same period. That's not to say there haven't been any advances at all; there have been many superb incremental software improvements. Then there's been the growth of Biggerplate which, although specifically Mind Mapping oriented, represents an opportunity for promoting the genre to a wider audience.
During this particular five year period, however, leaps and bounds could have been expected - but I haven't seen them.



If not: what have been the most notable failures of advancement?
2. Perhaps there's been a general lack of breakthrough thinking about what might be possible using rapidly changing technological advances - but that applies to both users and developers.
There's a chicken and egg problem here. Developers have to take very expensive risks when investing in something highly innovative, because potential mainstream customers are very quick indeed to reach for their "too hard to learn" and "learning-curve time-investment" OFF switches almost before they get started and/or make a purchase decision.



What are the consequences for the arena based on advancements and/or failures?
3. The consequences of not investing in paradigm-shifting innovation will simply be that those who do (on both sides, developers and users) will move rapidly forward while those who don't will probably just stay where they are.
Grabbing the attention of a wider customer base is not easy, but there is a growing group of potential customers in the young, millennials, who have much more open mindsets and are hungry for something different. Failing to inspire millennials with the beneficial possibilities inherent in visual knowledge mapping would be a wasted opportunity to say the least. There is a dilemma here of course; this is not the demographic that currently uses mind mapping in large numbers (see the Biggerplate survey).



Are we there yet as a mainstream addition or alternative to established productivity tool-sets?
4. The moment when one of the really big software corporations puts a visual/knowledge mapping tool into their mainstream product portfolio will be the time when maturity will occur.
This will be wonderful for all of us because we'll have a world in which untold numbers will realise that they can more easily solve problems, brainstorm, cope better with increasingly complex lives and collaborate across divides. It will also massively increase marketing possibilities for already existing development companies.

How do you envision the future of the Visual/Knowledge mapping arena?

5. As a medium for thinking and developing ideas, planning, collaborating and easily accessing and manipulating complex data, Visual/knowledge mapping can look forward to a very rosy future indeed but only if technological change is fully grasped.  The next five years are going to bring even greater innovative change to the technology world; indeed the only constant will be change itself.
We could see a breakthrough if developers can truly embrace and build for the needs of the millennial demographic in the context of upcoming 3D VR technologies.



Above all, it will depend on the industry's ability to capture imaginations and powerfully demonstrate major advantages and benefits. How? Well, other industries have done it by collaborating and co-ordinating and investing in whole industry promotions....
          Comment on Announcing PTC Mathcad Prime 3.1 by Brent Edmonds        
Prime is fully backward compatible but not fully forward compatible – currently forward compatibility only works with maintenance releases. For example, Prime 2.0 M010 to Prime 2.0. We have to stack up customer requests such as this with requests for other Mathcad Prime functionality such as equation wrapping and better data visualization when we prioritize what we work on for each release. Our product usage information indicates that when a new version of Prime is released most of our Prime customers move to that version. Forward compatibility is on our list of enhancement requests, but we've been working on higher priority functionality to date.
          What's Box Boy Richard Gage Up to These Days?        
If you check his events page, it looks like "not much" is the answer. The most recent event shown is the 15th anniversary, where Gage shared the stage with Munchkin Barbara Honegger.

But it turns out that the founder of AE911Truth participated in the recent Nation of Islam conference. It's not like Gage to avoid publicizing such events; when he appeared there in 2012 he was certainly crowing about the opportunity to expose Louis Farrakhan's followers to 9-11 Troof.

Back then we bashed him a bit based on inside information we had stating that Boy Wonder Kevin Barrett would be appearing with him. As it turned out, our insider was wrong. Waterboy Kevin Ryan appeared instead.

And indeed, this may give us something of a clue as to why Richard Gage is not anxious to publicize his appearance at the Nation of Islam rally. This time around, not only was he appearing with Kevin "the Holocaust is a hideously destructive myth" Barrett, but also Christopher Bollyn.

Bollyn gave us quite a bit of amusement back in the early days of this blog. Once a reporter for the Holocaust-denying, white separatist rag the American Free Press, Bollyn was an early investigoogler of 9-11 nuttery, with the result that when a more respectable and scholarly-seeming man like David Ray Griffin came along, Bollyn's work was often cited. Indeed, I used to joke that Griffin seemed unable to complete a book without referring to him once or twice.

One afternoon, Bollyn was apparently drinking when he noticed a suspicious looking vehicle parked in his neighborhood. When he confronted the occupants of the vehicle, they apparently freely admitted they were local cops on a stakeout of a nearby residence that they suspected of drug-dealing.

Well, paranoid people are going to be paranoid, and Bollyn assumed that the cops were in fact spying on him. Big altercation, Bollyn assaults a cop, and he's up on charges. He was convicted and probably facing about 90 days in the big house, but he lammed instead.

So this time around, Gage shared the stage with two Holocaust deniers, one of whom may still be a fugitive from justice.

I'm sure he'd much rather we talk about his exciting new NIST whistleblower.

But it's a classic false appeal to authority. As Peter Michael Ketcham himself notes in the video, he did not work on the WTC investigation. He states that he was in the mathematical computations area, which leads me to wonder if we are in for some real deep calculations that prove inside job.

Not to worry. Ketcham's cited evidence has nothing to do with number crunching. It's the usual "symmetrical collapse into its own footprint at near free-fall acceleration." Ketcham is Charlie Sheen with less hair.

Here's Ketcham's LinkedIn page. His current occupation?

Mobile application developer currently building a data visualization application for Apple iOS devices with an emphasis on accessibility for disabled users. Other interests include data science, virtual reality environments, haptic technologies, hierarchical data formats, matrix computations, and Swift numeric data types for rational numbers, complex numbers, and quaternions.

Again, if he were questioning the numbers used by NIST he might have some credibility. But he's not, he's just parroting the Truther talking points.
          The Strange Loop 2013        

This was my second time at The Strange Loop. When I attended in 2011, I said that it was one of the best conferences I had ever attended, and I was disappointed that family plans meant I couldn't attend in 2012. That meant my expectations were high. The main hotel for the event was the beautiful DoubleTree Union Station, an historic castle-like building that was once an ornate train station. The conference itself was a short walk away at the Peabody Opera House. Alex Miller, organizer of The Strange Loop, Clojure/West, and Lambda Jam (new this year), likes to use interesting venues, to make the conferences extra special.

I'm providing a brief summary here of what sessions I attended, followed by some general commentary about the event. As I said last time, if you can only attend one conference a year, this should be the one.

  • Jenny Finkel - Machine Learning for Relevance and Serendipity. The conference kicked off with a keynote from one of Prismatic's engineering team talking about how they use machine learning to discover news and articles that you will want to read. She did a great job of explaining the concepts and outlining the machinery, along with some of the interesting problems they encountered and solved.
  • Maxime Chevalier-Boisvert - Fast and Dynamic. Maxime took us on a tour of dynamic programming languages through history and showed how many of the innovations from earlier languages are now staples of modern dynamic languages. One slide presented JavaScript's take on n + 1 for various interesting values of n, showing the stranger side of dynamic typing - a "WAT?" moment.
  • Matthias Broecheler - Graph Computing at Scale. Matthias opened his talk with an interesting exercise of asking the audience two fairly simple questions, as a way of illustrating the sort of problems we're good at solving (associative network based knowledge) and not so good at solving (a simple bit of math and history). He pointed out the hard question for us was a simple one for SQL, but the easy question for us would be a four-way join in SQL. Then he introduced graph databases and showed how associative network based questions can be easily answered and started to go deeper into how to achieve high performance at scale with such databases. His company produces Titan, a high scale, distributed graph database.
  • Over lunch, two students from Colombia told us about the Rails Girls initiative, designed to encourage more young women into the field of technology. This was the first conference they had presented at and English was not their native language so it must have been very nerve-wracking to stand up in front of 1,100 people - mostly straight white males - and get their message across. I'll have a bit more to say about this topic at the end.
  • Sarah Dutkiewicz - The History of Women in Technology. Sarah kicked off the afternoon with a keynote tour through some of the great innovations in technology, brought to us by women. She started with Ada Lovelace and her work with Charles Babbage on the difference engine, then looked at the team of women who worked on the ENIAC, several of whom went on to work on UNIVAC 1. Admiral Grace Hopper's work on Flow-Matic - part of the UNIVAC 1 project - and subsequent work on COBOL was highlighted next. Barbara Liskov (the L in SOLID) was also covered in depth, along with several others. These are good role models that we can use to encourage more diversity in our field - and to whom we all owe a debt of gratitude for going against the flow and making their mark.
  • Evan Czaplicki - Functional Reactive Programming in Elm. This talk's description had caught my eye a while before the conference, enough so that I downloaded Elm and experimented with it, building it from source on both my Mac desktop and my Windows laptop, during the prerelease cycle of what became the 0.9 and 0.9.0.2 versions. Elm grew out of Evan's desire to express graphics and animation in a purely functional style and has become an interesting language for building highly interactive browser-based applications. Elm is strongly typed and heavily inspired by Haskell, with an excellent abstraction for values that change over time (such as mouse position, keyboard input, and time itself). After a very brief background to Elm, Evan live coded the physics and interaction for a Mario platform game with a lot of humor (in just 40 lines of Elm!). He also showed how code updates could be hot-swapped into the game while it was running. A great presentation and very entertaining!
  • Keith Adams - Taking PHP Seriously. Like CFML, PHP gets a lot of flak for being a hot mess of a language. Keith showed us that, whilst the criticisms are pretty much all true, PHP can make good programmers very productive and enable some of the world's most popular web software. Modern PHP has traits (borrowed from Scala), closures, generators / yield (inspired by Python and developed by Facebook). Facebook's high performance "HipHop VM" runs all of their PHP code and is open source and available to all. Facebook have also developed a gradual type checking system for PHP, called Hack, which is about to be made available as open source. It was very interesting to hear about the pros and cons of this old warhorse of a language from the people who are pushing it the furthest on the web.
  • Chiu-Ki Chan - Bust the Android Fragmentation Myth. Chiu-Ki was formerly a mobile app developer at Google and now runs her own company building mobile apps. She walked us through numerous best practices for creating a write-once, run-anywhere Android application, with a focus on various declarative techniques for dealing with the many screen sizes, layouts and resolutions that are out there. It was interesting to see a Java + XML approach that reminded me very much of Apache Flex (formerly Adobe Flex). At the end, someone asked her whether similar techniques could be applied to iOS app development and she observed that until very recently, all iOS devices had the same aspect ratio and same screen density so, with auto-layout functionality in iOS 6, it really wasn't much of an issue over in Apple-land.
  • Alissa Pajer - Category Theory: An Abstraction for Everything. In 2011, the joke was that we got category theory for breakfast in the opening keynote. This year I took it on by choice in the late afternoon of the first day! Alissa's talk was very interesting, using Scala's type system as one of the illustrations of categories, functors, and morphisms to show how we can use abstractions to apply knowledge of one type of problem to other problems that we might not recognize as being similar, without category theory. Like monads, this stuff is hard to internalize, and it can take many, many presentations, papers, and a lot of reading around the subject, but the abstractions are very powerful and, ultimately, useful.
  • Jen Myers - Making Software Development Make Sense For Everyone. Closing out day one was a keynote by Jen Myers, primarily known as a designer and front end developer, who strives to make the software process more approachable and more understandable for people. Her talk was a call for us all to help remove some of the mysticism around our work and encourage more people to get involved - as well as to encourage people in the software industry to grow and mature in how we interact. As she pointed out, we don't really want our industry to be viewed through the lens of movies like "The Social Network", which makes developers look like assholes!
  • Martin Odersky - The Trouble with Types. The creator of Scala started day two by walking us through some of the commonly perceived pros and cons of both static typing and dynamic typing. He talked about what constitutes good design - discovered, rather than invented - and then presented his latest work on type systems: DOT and the Dotty programming language. This collapses some of the complexities of parameterized types (from functional programming) down onto a more object-oriented type system, with types as abstract members of classes. Compared to Scala (which has both functional and object-oriented types), this provides a substantial simplification without losing any of the expressiveness, and could be folded into "Scala.Next" if they can make it compatible enough. This would help remove one of the major complaints against Scala: the complexity of its type system!
  • Mridula Jayaraman - How Developers Treat Ovarian Cancer. I missed Ola Bini's talk on this topic at a previous conference so it was great to hear one of his teammates provide a case study on this fascinating project. ThoughtWorks worked with the Clearity Foundation and Annai Systems - a genomics startup - to help gather and analyze research data, and to automate the process of providing treatment recommendations for women with ovarian cancer. She went over the architecture of the system and (huge!) scale of the data, as well as many of the problems they faced with how "dirty" and unstructured the data was. They used JRuby for parsing the various input data and Clojure for their DSLs, interacting with graph databases, the recommendation engine and the back end of the web application they built.
  • Crista Lopes - Exercises in Style. Noting that art students are taught various styles of art, along with analysis of those styles, and the rules and guidelines (or constraints) of those styles, Crista observed that we have no similar framework for teaching programming styles. The Wikipedia article on programming style barely goes beyond code layout - despite referencing Kernighan's "Elements of Programming Style"! She is writing a book called "Exercises in Programming Style", due in Spring 2014 that should showcase 33 styles of programming. She then showed us a concordance program (word frequencies) in Python, written in nine different styles. The code walkthrough got a little rushed at the end but it was interesting to see the same problem solved in so many different ways. It should be a good book and it will be educational for many developers who've only been exposed to one "house" style in the company where they work.
  • Martha Girdler - The Javascript Interpreter, Interpreted. Martha walked us through the basics of variable lookups and execution contexts in JavaScript, explaining variable hoisting, scope lookup (in the absence of block scope) and the foibles of "this". It was a short and somewhat basic preso that many attendees had hoped would be much longer and more in depth. I think it was the only disappointing session I attended, and only because of the lack of more material.
  • David Pollak - Getting Pushy. David is the creator of the Lift web framework in Scala that takes a very thorough approach to security and network fallibility around browser/server communication. He covered that experience to set the scene for the work he is now doing in the Clojure community, developing a lightweight push-based web framework called Plugh that leverages several well-known Clojure libraries to provide a seamless, front-to-back solution in Clojure(Script), without callbacks (thanks to core.async). Key to his work is the way he has enabled serialization of core.async "channels" so that they can be sent over the wire between the client and the server. He also showed how he has enabled live evaluation of ClojureScript from the client - with a demo of a spreadsheet-like web app that you program in ClojureScript (which is round-tripped to the server to be compiled to JavaScript, which is then evaluated on the client!).
  • Leo Meyerovich - Thinking DSLs for Massive Visualization. I had actually planned to attend Samantha John's presentation on Hopscotch, a visual programming system used to teach children to program, but it was completely full! Leo's talk was in the main theater so there was still room in the balcony and it was an excellent talk, covering program synthesis and parallel execution of JavaScript (through a browser plugin that offloads execution of JavaScript to a specialized VM that runs on the GPU). The data visualization engine his team has built has a declarative DSL for layout, and uses program synthesis to generate parallel JS for layout, regex for data extraction, and SQL for data analysis. The performance of the system was three orders of magnitude faster than a traditional approach!
  • Chris Granger - Finding a Way Out. Some of you may have been following Chris's work on LightTable, an IDE that provides live code execution "in place" to give instant feedback as you develop software. If you're doing JavaScript, Python, or Clojure(Script), it's worth checking out. This talk was more inspirational than product-related (although he did show off a proof of concept of some of the ideas, toward the end). In thinking about "How do we make programming better?" he said there are three fundamental problems with programming today: it is unobservable, indirect, and incidentally complex. As an example, consider person.walk(), a fairly typical object-oriented construct, where it's impossible to see what is going on with data behind the scenes (what side effects does it have? which classes implement walk()?). We translate from the problem domain to symbols and add abstractions and indirections. We have to deal with infrastructure and manage the passage of time and the complexities of concurrency. He challenged us that programming is primarily about transforming data and posited a programming workflow where we can see our data and interactively transform it, capturing the process from end to end so we can replay it forwards and backwards, making it directly observable and only as complex as the transformation workflow itself. It's an interesting vision, and some people are starting to work on languages and tools that help move us in that direction - including Chris with LightTable and Evan with Elm's live code editor - but we have a long way to go to get out of the "tar pit".
  • Douglas Hofstadter, David Stutz, a brass quintet, actors, and aerialists - Strange Loops. The two-part finale to the conference began with the author of "Gödel, Escher, Bach" and "I am a Strange Loop" talking about the concepts in his books, challenging our idea of perception and self and consciousness. After a thought-provoking dose of philosophy, David Stutz and his troupe took to the stage to act out a circus-themed musical piece inspired by Hofstadter's works. In addition to the live quintet, Stutz used Emacs and Clojure to provide visual, musical, and programmatic accompaniment. It was a truly "Strange" performance but somehow very fitting for a conference that has a history of pushing the edges of our thinking!

Does anything unusual jump out at you from the above session listing? Think about the average technical conference you attend. Who are the speakers? Alex Miller and the team behind The Strange Loop made a special effort this year to reach out beyond the "straight white male" speaker community and solicit submissions from further afield. I had selected most of my schedule, based on topic descriptions, before it dawned on me just how many of the speakers were women: over half of the sessions I attended! Since I didn't recognize the vast majority of speaker names on the schedule - so many of them were from outside the specific technical community I inhabit - I wasn't really paying any attention to the names when I was reading the descriptions. The content was excellent, covering the broad spectrum I was expecting, based on my experience in 2011, with a lot of challenging and fascinating material, so the conference was a terrific success in that respect. That so many women in technology were represented on stage was an unexpected but very pleasant surprise and it should provide an inspiration to other technology conferences to reach beyond their normal pool of speakers too. I hope more conferences will follow suit and try to address the lack of diversity we seem to take for granted!

I already mentioned the great venues - both the hotel and the conference location - but I also want to call out the party organized at the St Louis City Museum for part of the overall "wonder" of the experience that was The Strange Loop 2013. The City Museum defies description. It is a work of industrial art, full of tunnels and climbing structures, with a surprise around every corner. Three local breweries provided good beer, and there was a delicious range of somewhat unusual hot snacks available (bacon-wrapped pineapple is genius - that and the mini pretzel bacon cheeseburgers were my two favorites). It was quiet enough on the upper floors to talk tech or chill out, while Moon Hooch entertained loudly downstairs, and the outdoor climbing structures provided physical entertainment for the adventurous with a head for heights (not me: my vertigo kept me on the first two stories!).

In summary then, the "must attend" conference of the year, as before! Kudos to Alex Miller and his team!


          Certified KPI Professional and Practitioner - The KPI Institute, Dubai, Muscat, Kuwait, Doha, Manama, Jeddah, Riyadh, Kuala Lumpur

Selection and data gathering are considered by practitioners all around the world to be the most challenging aspects in working with Key Performance Indicators (KPIs). A way to address these challenges is to build a sound framework to measure KPIs, starting from the moment they are selected, until results are collected to be centralized in performance reports.

The KPI Institute has developed a rigorous KPI Measurement Framework that embeds 10 years of research in the field and relies on best practices applicable in the real business environment.

This learning program is structured on two levels of certifications:

Certified KPI Professional - a three-day training course focused on developing know-how in working with KPIs. The certification can be obtained by taking a multiple-question Certification Exam on the last day of the course.

Certified KPI Practitioner - a two-day training course meant to improve practical skills in working with KPIs and developing instruments like scorecards, dashboards and KPI documentation forms. The applied exercises of this course will enable participants to complete a trial run of all the steps required to build the portfolio which is the basis of the KPI Practitioner Certification.

The exercises will reflect a complete KPI implementation case study, from project planning to KPI data visualization. The training courses can be accessed individually. However, participants are eligible to receive the KPI Practitioner Certification only if they are Certified KPI Professionals.

5 Benefits

  • Develop the project plan for a KPI implementation initiative;
  • Practice a sound framework to ensure KPIs are aligned to strategy;
  • Receive personalized feedback on developing the KPI portfolio of instruments;
  • Expand your business network by becoming a member of the international Certified KPI Professionals Community;
  • Access 15+ templates that help you implement a KPI Measurement Framework in your organization.

Learning Objectives

  • Understand KPI measurement challenges and how to address them;
  • Select KPIs for scorecards and dashboards from the organizational to the departmental and individual level;
  • Develop a KPI implementation project plan;
  • Optimize the KPI activation and data gathering process;
  • Differentiate between objectives, KPIs and initiatives;
  • Understand KPI selection in different contexts.

Evaluation 

The certification process is finalized when you complete all stages of the learning experience. You will receive a: 

  • Certificate of Completion: after completing pre-course activities, passing the Certification Exam and the Learning Assessment Quiz;
  • Certificate of Attendance for Certified KPI Professional: after participating in the 3-day on-site training course;
  • Certificate of Attendance for Certified KPI Practitioner: after participating in the 2-day on-site training course;
  • Certified KPI Professional diploma: after you have successfully completed all 3 stages of the learning experience.

Cost: Starting from USD 3,700

Discount: OFFERS AVAILABLE

Next Session:

Duration: 5 Days

Certified


          477 Skyrocketing Your Programming Career (With Ben Sullins From Teslanomics) - Simple Programmer Podcast        

I love to welcome on this channel people I admire. As you know, I tend to give special attention to entrepreneurship and entrepreneurs, because I believe it is what makes people's lives much, much better.

Today, I decided to give Ben Sullins the chance to share his AMAAAZING story with you, so we could help more and more programmers around the world.

"As a life-long data geek, Ben dedicates his time helping others use data wisely. He makes information meaningful and has fun doing it.

His background affords him a unique set of knowledge that sets him apart in the data community. During his sixteen years of industry experience, he has consulted many high-tech companies including Facebook, Microsoft, LinkedIn, Cisco, Mozilla, Pluralsight and Genentech on democratizing data in their organizations. Moreover, Ben spent three months leading the charge at Facebook to grow its data culture by demonstrating proper tool implementation and data visualization techniques using Tableau. And with this expertise, Ben aims to provide exceptional service to his customers by enriching their lives with impactful smart data."
(Source: https://bensullins.com/about/)

Stay with us and learn how to skyrocket your career.

Teslanomics YouTube Channel: https://www.youtube.com/channel/UCbEbf0-PoSuHD0TgMbxomDg


          CIRI Data Now Archived at Dodd Center        

By Suzanne Zack
University of Connecticut Libraries

Storrs, CT – Graphic stories of torture and forced disappearances may seem more prevalent in certain parts of the world than others, based on news accounts, resolutions deliberated by the United Nations, or reports issued by watchdog organizations such as Amnesty International. But, in the larger picture, what types of human rights are most and least respected by governments in the world today and why?
The CIRI Human Rights Data Project, which tracks 15 separate human rights in 195 countries from 1981 to the present, allows this larger picture to emerge. Now, UConn will host a digital archive of the CIRI project’s data, as well as the CIRI website itself (www.humanrightsdata.org). 

CIRI’s human rights data have been used by hundreds of governments and global organizations, including the United Nations, the World Bank, and USAID to make informed decisions. These data are also widely used by academics, think tanks, and financial institutions for a variety of purposes.

“The CIRI dataset provides highly-regarded quantitative indicators on the state of human rights worldwide. For well over a decade they have been a valuable input to the Worldwide Governance Indicators,” said Daniel Kaufmann, President of Revenue Watch and coauthor of the World Bank’s Worldwide Governance Indicators.

The CIRI project’s work spans three major research universities: the State University of New York at Binghamton (since 2004), the University of Connecticut (since 2010), and the University of Georgia (since 2012). The CIRI website allows users to either download the entire dataset, or create a custom dataset, choosing specific indicators, years, and countries. CIRI requires users to register in order to access the data, but the data are freely available upon registration.  To date, CIRI counts more than 13,500 registered users. 

“In this digital and data-driven age, measuring the human rights practices of governments has become an important part of the global human rights movement seeking to provide lives of dignity for all persons worldwide,” contends CIRI co-director Dr. David L. Richards, associate professor of political science and human rights at UConn. “And, having the CIRI project here at UConn helps our students make a connection between data and action in a first-hand way they would not get, otherwise. Best of all, perhaps: by taking an active role in CIRI’s work, students take a real part in world politics.” Richards notes. 

Richards is co-founder and co-director along with Dr. David L. Cingranelli, professor of political science at SUNY Binghamton. Dr. K. Chad Clay, assistant professor in the Department of International Affairs at the University of Georgia joined as a third co-director in the fall of 2012.  The CIRI project was initially designed for use by scholars seeking to test theories about the causes and consequences of human rights violations, as well as policymakers and analysts needing to estimate the human rights effects of a wide variety of institutional changes and public policies including democratization, economic aid, military aid, structural adjustment, and humanitarian intervention.

The CIRI archives constitute the first collection of data deposited in UConn’s new digital repository, a project currently underway for the campus community and the State of Connecticut by the University Libraries’ Archives & Special Collections.
“Bringing the CIRI Data Project to Archives & Special Collections will make it possible to provide long-term preservation of the data as well as the opportunity to develop new visualization tools as part of the Libraries’ support of research data management,” said Greg Colati, director of Archives & Special Collections. Using Richards’ work, the library is developing this new visualization tool in a collaborative effort between Archives & Special Collections, the Libraries’ Map and Geographic Center, and CIRI.

Richards says he is excited about working with Archives & Special Collections on the new data visualization tools and expects all of CIRI’s many types of users will make good use of this new feature, expected to be available in the fall of 2013.
The CIRI project, which is updated annually, provides measures of several types of internationally recognized human rights, including: physical integrity rights, or the right not to be tortured, extra-judicially killed, disappeared, or imprisoned for political beliefs; and civil rights and liberties, or the right to free speech, freedom of association and assembly, freedom of domestic movement, freedom of international movement, freedom of religion, and the right to participate in free and fair elections for the selection of government leaders. Also tracked are workers’ rights, such as the right to bargain collectively, and women’s rights to legal protection and equal treatment, politically and economically.

Among CIRI’s users is the Tony Blair Faith Foundation, which supports and collaborates with those who seek peace by promoting understanding and respect between the world's major religions. “The Tony Blair Faith Foundation has found CIRI data to be particularly useful in gauging Human Rights information globally,” says Parna Taylor, Director of Communications. “They are a valuable resource for the world and we are pleased to be able to use their data.”

While human rights has been taught at the collegiate level for some time, interest in the field now extends to secondary teachers of Advanced Placement (AP) Comparative Government and Politics, making CIRI a familiar resource to yet another audience. “By enabling students to look at patterns of respect and violations of human rights, CIRI’s data allow the formulation of questions about differences in respect across countries, differences in respect across time, and patterns of respect among different rights within countries,” Richards says.

“Human rights violations are frequently reported as narratives, as the stories of specific people -- which is also important for highlighting the humanity of the victims and recognizing how their rights have been violated,” observes Corinne Tagliarina, who is a Ph.D. candidate in Political Science and in the Human Rights certificate program at UConn.  “The narrative method makes it difficult to get a comprehensive look at how often specific countries violate different human rights.  CIRI offers a big picture view of human rights in the world.”

          Data Visualization with CSS: Graphs, Charts and More        
Good data presentation is an important part of the web industry because it is the key to letting visitors understand your content within seconds. The faster and more easily visitors grasp your web content, the more professional your presentation appears. Criteria for a decent data presentation should be simple…
          Freebie: Professional Business Infographic Template        
Looking for a freebie? We have here, in collaboration with our friends at Freepik, an exclusive release for HKDC readers. You’re looking at a business-themed statistical infographic template which is packed with elements that are great for data visualization purposes, for annual report presentations, or for making an awesome-looking infographic. Included in this pack are different…
          Infogram wants to help you make beautiful infographics        
Latvian startup Infogr.am has launched its suite of online tools for building beautiful -- and shareable -- infographics on the web. Can it cash in on a growing trend for easy data visualization, or not?
          FOIA2011 emails reveal secret alarmist base        
The new and better updated Climategate 2.0 scandal has led to some more shocking revelations about the nefarious crimes of the fascisto-alarmists, red and yellow and green like traffic lights. This is going to be the final nail in the coffin of the global warming scam. Look what I found among the liberated emails (#3346) after a few seconds of browsing:
Dear Phil,

Thank you very much for this, I'll have a proper read now. I appreciate your advice.

I'll perhaps try and touch base with you next week,

Best regards,


Jon
Now, what on earth is the "base" they are talking about? It can only mean one thing:


And this is from #3307:
Just to you - seems you could go a little further and be more clear as Stefan suggests.
Not a major change. Your call, though. Thanks, Peck
What do they mean by "major"? That is a military rank. Obviously, they are organizing some kind of army:

From #1858  we get the following revelation about an evil network of mind-controlled lackeys that cover the entire world:
The ultimate strategy is to get a collaborative centre at a number of
regions throughout the world and build a network of like-minded people
Finally, in #1687 we find irrefutable evidence that the global warming cabal has been colluding with Google in order to prevent the truth about so-called global warming from coming out:
As you know, I was also going to follow-up with Google in California
(and maybe NY), about the data visualization angle and their overall
interest. Not sure if this would be something to also make it into the
NYT foray, but please send over whatever you have as update from the
Exeter meeting that could possibly build upon the work that Philip,
Stefan, and I have done.
These terrible secrets we uncovered after only a couple of minutes of browsing in the FOIA2011 archives.  Who knows what other evil conspiracies and sinister plots are waiting to be brought into the light by blog science? Lots, I can tell you, lots!
          Uses and abuses of data visualizations in mass media        
Uses and abuses of data visualizations in mass media, from numeroteca. Audio (.mp3). ESS Visualisation Workshop 2016, Valencia, May 17-18, 2016. Abstract: Data visualizations are a …
          Comment on Some excellent data visualisation of the current economic crisis (especially in Europe), courtesy of Der Spiegel by JANE BURGERMEISTER REPORT: ‘ECB has put 100s of billions of euros of worthless debt on its books which tax payers will have to pay for, reports Der Spiegel’ « The Watchdog        
[...] Some excellent data visualisation of the current economic crisis (especiall… [...]
          AppLoad 168 - Dumb ways to die        

APPS :

ZAPPS :

NEWS:

  • L'iPhone pas cher pas si pas cher ?
  • La théorie de Nillay Patel sur le design d'iOS7
  • La vidéo dans Instagram : mauvaise idée ?

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 167 - Psychedélique        

APPS :

ZAPPS :

  • Office sur iPhone (iPhone / Gratuit)
  • Licra (Android / Gratuit)
  • Adobe Kuler (iPhone / Gratuit)

NEWS:

  • La vidéo arrive sur Instagram le 20 ?
  • Test et résumé de nouveautés d'iOS7

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 166 - WWDC 2013        

Replay de l'épisode spécial enregistré en live pendant la keynote de la WWDC 2013 d'Apple ! On "découvre" ensemble le nouvel iOS, mais aussi les nouveaux OSX, Macbook air, et Mac pro.

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 165 - HipstERmatic        

APPS :

ZAPPS :

NEWS:

  • Google IO: 900M, Google Play Games, Music & Eductation, S4 Nexus... Et la situation face à Apple

Lien de l'émission :

LIENS :

Et les animateurs :
- Jérôme
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 164 - Des rectangles et des carrés        

APPS :

ZAPPS :

  • MoMi (OVH Manager) (iOS & Android / Gratuit)
  • NetAtmo (WP8 / Gratuit)

NEWS:

  • Ecosya sur Windows 8

LIENS :

Et les animateurs :
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 163 - Les sons émerveillés        

APPS :

ZAPPS :

  • Paris en Flyover (après Lyon) (iOS / Gratuit)
  • Youtube 3.0 (WP8 / Gratuit)

NEWS:

  • La taxe de la mission Lescure
  • Flipboard editor (et Flipboard 2.0 sur Android !)
  • Retour sur Google Now et pebble
  • Les soldes des devices WP8
  • Les affaires winStagram et ItsTagram

LIENS :

Et les animateurs :
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 162 - La madeleine pourrie de Proust        

APPS :

ZAPPS :

NEWS:

  • Keynote WWDC le 10 juin (avec live !)

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 161 - Beau comme un figatelli        

APPS :

ZAPPS :

NEWS:

  • Mais où est Windows Phone ?!
  • Le "touch cover" pour iPad de Logitech
  • l'iPhone original obsolète le 11 juin !

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 160 - Status borg        

APPS :

ZAPPS :

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 159 - Manipulation mentale        

APPS :

ZAPPS :

NEWS:

  • Galaxy Note 8, Acer A1-810, et... Surface 7" ?
  • Office sur iOS et Android en 2014
  • L'affaire AppGratis
  • LIENS :

    Et les animateurs :
    - Jérôme
    - Korben
    - Cédric
    - Patrick

    Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 158 - Cédric et sa Pebble        

APPS :

ZAPPS :

NEWS:

  • Rumeurs Apple (iRadio, iPhone 6, iTV... iRing??)
  • Facebook Home

LIENS :

Et les animateurs :
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 157 - Le coin trolliphere        

APPS :

  • Nexus 7 et iPad Mini, impressions de Patrick
  • ProCutX (iPad / Gratuit & 21€+)

ZAPPS :

NEWS:

  • Rumeurs sur le standard gamepad Apple

LIENS :

Et les animateurs :
- Jérôme
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 156 - On lui suce la sève        

APPS :

ZAPPS :

NEWS:

  • Mini tests Nexus 4 et HTC one
  • La mise à jour de l'app podcast d'Apple

LIENS :

Et les animateurs :
- Jérôme
- Cédric

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 155 - KickAsser        

APPS :

ZAPPS :

NEWS:

  • L'annonce du Samsung Galaxy S4

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 154 - Une app toute douce        

APPS :

  • HDR (iOS / 1,79€)
  • Nokia NFC Writer (WP8 / Gratuit)
  • Algoid (Android / Gratuit)
  • Any.DO (Android & iPhone / Gratuit)

ZAPPS :

NEWS:

  • Débat sur les Google Glass... et la politesse.

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 153 - Mailbox change la vie        

APPS :

ZAPPS :

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 152 - Le bruit de la récompense        

APPS :

ZAPPS :

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 151 - Un Zapp ou une Zapp        

APPS :

ZAPPS :

NEWS:

  • Microsoft perd 2,5 Milliard $ sans Office sur iOS
  • Surface Pro
  • La montre Apple en vue ?

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 150 - Atchoum !        

APPS :

ZAPPS :

  • MyLed (iOS / ~15$ ?)
  • Tweakker (Android / Gratuit)
  • 02 minutes d'attente (iPhone / Gratuit)

LIENS :

Et les animateurs :
- Jérôme
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 149 - Les limites limitantes        

APPS :

ZAPPS :

NEWS:

  • Nokia music à 4€/mois
  • La fin du minidisc
  • RIM est mort, vive Blackberry !

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 148 - Tu siffles et ça la coupe        

APPS :

ZAPPS :

LIENS :

Et les animateurs :
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 147 - Explosion de belle mère        

APPS :

ZAPPS :

LIENS :

Et les animateurs :
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 146 - Une zénitude féerique        

APPS :

ZAPPS :

NEWS :

  • Déballage et test flash du Lumia 620

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 145 - Combien faut-il de Nowatcheurs pour visser une ampoule ?        

APPS :

ZAPPS :

NEWS :

  • Ubuntu Mobile arrive
  • Nvidia lance sa console portable : Project Shield

LIENS :

Et les animateurs :
- Jérôme
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 144 - 2012 l'année des Apps !        

APPS :

Mini débat : Vos prédictions Mobilité pour 2013 ?

LIENS :

Et les animateurs :
- Korben
- Cédric
- Jérôme
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 143 - Google retrouve le chemin de la pomme        

APPS :

ZAPPS :

  • Google Drive Spreadsheet edit ! (iOS / Gratuit)
  • Photobeamer (WP8 Nokia / Gratuit)
  • ExynosAbuse (Android / Gratuit)
  • Printer Pro (iPad / 5,99€) & (iPhone / 4,49€)

NEWS :

  • Instagram vend vos photos

LIENS :

Et les animateurs :
- Korben
- Cédric
- Jérôme
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 142 - Foldify ta life        

APPS :

NEWS :

  • The Daily disparait le 15 décembre.
  • Ca bouge (encore) dans les forfaits mobiles : Free, Virgin, Joe...

LIENS :

Et les animateurs :
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 141 - Mon Fils il part dans la forêt...        


APPS :

ZAPPS :

NEWS :

  • Internet des objets : répandre ses données, la nouvelle tendance

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 140 - Patrick is really a nice guy        


APPS :

  • Rockmate (2,69€ / iPad)
  • Bing Traduction (Gratuit / WP8)
  • Camera+ (0,89€ (x2) / iOS)

ZAPPS :

NEWS :

  • Le paradoxe du bilan d'utilisation mobile après Thanksgiving : iOS majoritaire.
  • WP7.8: des infos très bientôt... dispo dès mercredi ??
  • Nexus 4 : les expéditions vont reprendront

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          Appload 139 - Les LED à l'appel c'est moche...        

APPS :
  • Circa News (Gratuit / iPhone)
  • Porte feuille (Gratuit / WP7)
  • Mention (Gratuit & + /Android)
  • Magegauntlet (2,69€ / iOS)

ZAPPS :

  • Utiliser la LED pour signaler les appels, dans les paramètres d’accessibilité (Gratuit / iPhone)
  • NFC publisher (Gratuit / WP8)
  • Mini Test iPad mini (339€-674€)

NEWS :

  • Qeexo : une nouvelle interface de touch
  • Google et Apple à la table des négociations ?
  • Problème de Reboots et de batteries sur les Windows Phone 8 ?

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 138 - Surface, finger in the nose        

APPS:

  • Angry Birds Star Wars (surface) (3,99€ / Multi)
  • Send Anywhere (Gratuit & 2,49€ / Android)
  • Google Search, reconnaissance vocale amélioré (Gratuit / iOS & Android)

ZAPPS:

NEWS:

  • Une XBox Surface ?
  • Documents iWork bientôt éditables en ligne ?
  • Sinofsky quitte Microsoft
  • Nexus 4 vente record ou stocks minuscule ?

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Korben

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 137 - Ein gross partie news        

APPS:

ZAPPS:

(grosse partie) NEWS:

  • Nouvelles machines :

    • Windows Surface RT: premières reviews
    • Nouveautés Windows Phone 8
    • Google Nexus : 4, 7, 10
    • iPad mini, premiers tests
  • Un tour chez Apple:

    • iTunes 11 décalé au mois de Novembre
    • Les prix des apps augmentent
    • Scott Forstall "quitte" Apple

LIENS :

Et les animateurs :
- Jérôme
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 136 - Live lancement WP8        

Live du lancement de Windows Phone 8 !
Aussi disponible sur le site de NoWatch en vidéo.

LIENS :

Et les animateurs :
- Alexandre
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 135 - Star Wars, de George Luquau        

APPS:

ZAPPS:

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 134 - 0 to 100 borgueries        

APPS:

ZAPPS:

NEWS:

  • Office 2013 arrive sur mobiles en mars ?
  • iPad mini annoncé le 23 octobre ?
  • Témoignage de Jérôme sur l'anodisation
  • XBox Music est officialisé

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 133 - Paris 4D        

APPS:

ZAPPS:

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 132 - Doofenshmirtznator        

APPS:

ZAPPS:

NEWS:

  • Patrick VS Cedric
  • Les excuses de Tim Cook
  • Invitation "iPad mini" le 10 Octobre ?
  • Cyanogen Mod 10
  • Que nous cache Windows 8 ?

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 131 - Du catch de la boue et l'iPhone 5        

APPS:

ZAPPS:

  • Nokia Lecture à jour ! (Gratuit / WP7) Exclusivité Lumia
  • 1tap eraser (Gratuit / Android)

Megatest:

  • Test iPhone 5 Jérôme

NEWS:

  • Moo Card NFC
  • HTC 8X

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 130 - Quand t'es pauvre, t'es sage        

APPS:

ZAPPS:

  • YouTube (Gratuit / iPhone)
  • Torrent Player (Gratuit / iOS JB)
  • MAJ Kingdom rush (0,79€ / iPad)

NEWS:

  • iPhone 5 : 2 millions vendus en 24 heures
  • Le "ConnectorGate" ?
  • Les fonctionnalité iOS6 dispo en France
  • Galaxy S4 annoncé en février (5" OLED)
  • Les gentils trolls

LIENS :

Et les animateurs :
- Jérome
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          Appload 129 - Keynote Apple iPhone 5        

Live de la Keynote Apple iPhone 5, avec Jérôme, Cédric, Florence, Stéphane et Patrick.

LIENS :

  • La Data Visualisation des 99 premiers AppLoads (tirée du passionnant et impressionnant sujet "dataviz" de l'excellente Mentine sur le forum. A consulter !
  • Le générique d'AppLoad a été créé par Daniel Beja.

Et les animateurs :
- Jérome
- Florence
- Cédric
- Stéphane
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 128 - Des usages pour toutes les tailles        


APPS :

ZAPPS :

NEWS:

  • Amazon App Store en France
  • Nouveaux Kindles
  • Nexus 7 en France
  • Procès Apple contre Samsung
  • Et tout plein de nouveautés à l'IFA

LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 127 - Les nazis du bitume        


APPS :

ZAPPS :

NEWS:

  • Des lives en septembre ?


LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 126 - La mer, c'est assez grand        


APPS :

ZAPPS :

NEWS:

  • La manette bientôt sur vos tablettes ?
  • Le sondage de Tap!


LIENS :

Et les animateurs :
- Jérome
- Korben
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 125 - Jérôme endort les bébés        


APPS :

  • Alien Blue free (Gratuit / iOS) & HD ( 2,99€ / iPad)
  • Relax Melodies (Gratuit & 2,39€ / iOS)

ZAPPS :


LIENS :

Et les animateurs :
- Jérome
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 124 - Combinisons les émotions        


APPS :

ZAPPS :

NEWS :

  • Bye bye Nexus Q
  • Resumé A vs S dans le RDV Tech
  • Archos G10 annoncé avant fin Aout
  • Keynote Apple le 12 Sept

Et les résultats concours "Le Puits des Mémoires".


LIENS :

Et les animateurs :
- Jérome
- Cédric
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 123 - Le podcast 100% tablettes        

APPS :
  • Win8 tablette
  • ScanBizCard
ZAPPS :
  • Banque (de la caisse d'épargne)
  • Nokia Drive 3.0
  • M6 et W9


LIENS :

Et les animateurs :
- Jérome
- Will
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 122 - AppLoad à poil        

APPS :
  • Concours "Le Puits des Mémoires" (eBook / 11,99€) Kindle Podcasts - Apple
    • Pour participer au concours, téléchargez l'extrait gratuit depuis votre appareil (ou sur le web depuis le Google Play Store) et essayez de deviner quel personnage jouait Patrick. Laissez ensuite un commentaire sur l'article de l'épisode en expliquant votre choix (en une ligne seulement). Et n'oubliez pas d'utiliser une adresse mail valide, nous l'utiliserons pour vous contacter si vous gagnez !
  • Retour d'expérience sur le Samsung Galaxy SIII
  • Fieldrunner 2 (iOS / 2,99€) Napoleon - iPhone version - History - Quelle Histoire
ZAPPS :
  • Chrome (iOS / Gratuit) InstaCRT - Martin Ström
  • Klout (iPhone / Gratuit) InstaCRT - Martin Ström
  • OpenVPN (Android / Gratuit) 
NEWS :
  • Le Samsung Galaxy SIII et le Nexus 7 se vendent par camions entiers !


LIENS :

Et les animateurs :
- Jérome
- Will
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 121 - On a de la place dans le canap'        

APPS :

ZAPPS :

NEWS :

  • Un téléphone Amazon ?
  • Un iPad mini ?
  • Fin de "The Daily" ?


LIENS :

Et les animateurs :
- Jérome
- Will
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 120 - ComicCon - Le car à vannes        

Un épisode spécial, enregistré en direct et en public, à ComicCon Paris !

APPS :

ZAPPS :

NEWS :

  • Résumé des keynotes des constructeurs de mobiles de ce mois de juin (Apple, Google, Microsoft... et Microsoft).

La vidéo :


LIENS :

Et les animateurs :
- Jérome
- Cédric Bonnet
- Will
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 119 - Live Google I/O        

Un épisode spécial en live pour la keynote de la conférence Google I/O !


LIENS :

Et les animateurs :
- Jérome
- Cédric Bonnet
- Will
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 118 - Live Windows Phone Summit        

Un épisode spécial en live (encore !) pour le Windows Phone Summit.


LIENS :

Et les animateurs :
- Jérome
- Cédric Bonnet
- Cédric Mercet
- Alexandre
- Will
- Timothée
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 117 - Live WWDC Apple 2012        

Un épisode spécial en live pour la keynote de la WWDC Apple 2012 !


LIENS :

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 116 - Korben (n') est (pas) un ninja        

Au programme de cet épisode :

NEWS :

  • AppLoad live le 11 juin à 18h40 (Keynote Apple WWDC)

APPS :

ZAPPS :

LIENS :

 

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 115 - 123 les gens        

Au programme de cet épisode :

NEWS :

APPS :

ZAPPS :

  • Recollect (iOS / Gratuit)
  • Nokia Lecture (Reading) (WP7 / Gratuit)
  • Ology (Android / Gratuit)
  • iFighter 2 (iOS / Gratuit)

LIENS :

 

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 114 - Utilitas comme p...        

Au programme de cet épisode :

NEWS:

  • Prochain iPhone, le point sur les rumeurs.

APPS :

ZAPPS :


LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 113 - Y'a du Sushi à se faire        

Au programme de cet épisode :

NEWS:

  • Korben, impressions sur l'iPad

APPS:


ZAPPS:


LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 112 - Soufflez sur les pissenlits        

Au programme de cet épisode :

NEWS:


  • Le topo complet sur le Galaxy S3

APPS:


ZAPPS:


LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 111 - Ils se tirent les coudes        

Au programme de cet épisode :

NEWS:


  • Les parts de marché des tablettes Android
  • Des millions d'iPad
  • Apple, le Godzilla des Godzilla

APPS:

ZAPPS:


LIENS:

Et les animateurs :
- Jérome
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 110 - Plus Tegra, plus t'es lourd        

Au programme de cet épisode :

NEWS:

  • Le Samsung Galaxy S3 arrive !

APPS:

ZAPPS:


LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 109 - Le paradoxe rebelle        

Au programme de cet épisode :

NEWS:

  • L'invasion des smartwatch ! (Sony / Pebble)
  • Instagram, l'app à 1 milliard de dollars.

APPS:

ZAPPS:


LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 108 - La cloclonexion        

Au programme de cet épisode :

NEWS:

  • Les soucis de Free Mobile

APPS:

ZAPPS:

LIENS:

Et les animateurs :
- Jérome
- Cédric
- Korben
- Patrick

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 107 - Plus cher, mais moins pourri        

Au programme de cet épisode :

NEWS:

  • Google play, la tablette ?
  • HTC One X

APPS:

ZAPPS:


LIENS:

Et les animateurs :
- Patrick
- Jérome
- Cédric
- Korben

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 106 - CRS Hotel Club        

Au programme de cet épisode :

NEWS:

  • NFC en France
  • Microsoft et Nokia font des apps
  • Smoked by... Android

APPS:

ZAPPS:


LIENS:

Et les animateurs :
- Patrick
- Jérome
- Cédric
- Korben

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          AppLoad 105 - La réalité n'a pas de limite        

Au programme de cet épisode :

NEWS:

  • iPad 3 hands on, ventes "records", Tegra 3 vs A5X.
  • Apple: dividendes et rachats d'actions.
  • Nexus Tablet ‡ 150 ~ 200$ ?
  • Windows 8 hands on.

APPS:

          AppLoad 104 - Je te ramène un kangourou        

Au programme de cet épisode :

NEWS:

  • Petites news du Mobile World Congres

APPS:

          AppLoad 102 - Le podcast des silences gênés        

Au programme de cet épisode :

NEWS:

APPS:

ZAPPS:

LIENS:

Et les animateurs :
- Patrick
- Jérome
- Cédric

Retrouvez cette émission et beaucoup d'autres sur le site NoWatch.net.


          Stop pretending data visualisation is easy – bring distributed skills together        
I spent a great day at LocalGovCamp in Birmingham last Saturday, an unconference for anyone interested in how social media and digital technology relate to local authorities and improving public services. Toby Blume of Urban Forum and Paul Evans ran a session on data visualisation and visualising policy (more on that …
          Weekly Small Business Events In Orange County Inland Empire May 8 - 12, 2017         

Weekly Small Business Events In Orange County Inland Empire May 8 - 12, 2017

Small Business Events in Orange County Inland Empire - shared by oGoing

Small Business Events on Monday, May 8:

  • Become a Better Business Manager with Better Information (Orange, 6p-9p) 
  • "We’ll start by looking at how you can use Microsoft Excel with PowerPivot as well as Microsoft PowerBI to give your business better reporting and data visualization. This workshop is presented by Eddie Bader, Eric Klauss, Manish Bhardia and Bryan Iinuma."
  • Creating Web Pages that Sell  (Webinar, 11a-1p)
  • "Are you a business owner or marketing professional looking to gain a better understanding of How to create web pages that sell? This course is perfect for you."
  • Developing Your Public Speaking Presence  (Palm Desert, 6p-8p)
  • "In this highly insightful and interactive session, you’ll learn the secrets to effective verbal communications. You’ll discover the power behind knowing your audience; maintaining eye contact; using notes; eliminating verbal “junk food” and more."

Business Owner Workshops on Tuesday, May 9:

  • Tech & Marketing - Season 3  (Mission Viejo, 5:30p-8:30p)
  • "Session #1: Using Marketing Automation Tools to Expand Your Reach in the Digital World"
  • "Session #2: “The Purple Cow Secret of Successful Businesses” – Defining Your Unique Selling Proposition"
  • Drive New Business with Social Media + Measuring Your Marketing  (Rancho Santa Margarita, 6p-9p)
  • "You know you should be on social media for your business or non-profit. You know it can help you drive more new and repeat business for your organization. But, do you know how and why?"
  • Marketing is War!  (Newport Beach, 6p-8:45p)
  • How The Little Guy Beats The Big Guy? By Being Different and Better!
  • What You Will Learn? 1. Proven strategies on how smaller competitors outperform bigger competitors 2. How to focus on a few critical things 3. What to advertise, what to sell and what makes you money
  • QuickBook Basics  (Redlands, 9a-11a)
  • "With the most popular bookkeeping software, you can learn to do your own bookkeeping and control your finances in-house. This workshop will explain all the basics so you can handle daily tasks the easy way."
  • How to Start a Small Business  (San Bernardino, 9a-11a)
  • "If you are a new or aspiring business owner, this workshop is a must! Discuss the steps to take towards starting your first business. The “How to Start a Small Business” workshop is presented by an SBDC Business Consultant"
  • Business Idea & Feasibility Studies  (Palm Desert, 6p-8p)
  • "Want to know if your idea is “Business Viable”? Learn how to perform a feasibility study before you invest valuable time, energy and money into an idea that may not be truly viable. Discover what to investigate to determine the “business viability” of your idea."
  • Increase Revenue and Profits with Electronic Payment Technology  (Webinar, 10a-11a)
  • "Increased revenue, more customers, cost savings and strengthened security are just some of the benefits of accepting electronic payments."

Startup Business Workshops on Wednesday, May 10:

  • Are You Ready to Exit? Business Succession Planning for Small Business Owners  (Mission Viejo, 1:30p-4:30p)
    "Planning for your future starts now. Succession planning, exit strategies and maximizing your value before retirement is worth taking time out of your busy day and will add to your peace of mind!"
  • Secrets to Buying a Franchise  (La Habra, 5:30p-8:30p)
    "You will learn how to evaluate starting a franchise vs. an independent business vs. buying an existing business; the best types of franchises to open in today’s economy..."
  • Getting Started with Social Media  (Chino, 1p-3p)
    "Things you can do right now to be seen, be heard and remembered. the new or busy business owner, Social Media marketing can be challenging and even downright hard work."
  • How to Start a Small Business  (Palm Desert, 9a-11a)
    "Thinking about starting your own small business but unsure about where to begin? In this workshop, you will learn about the different forms of business ownership,..."
  • Legal Do's and Don'ts for Business  (Santa Ana, 6p-8:30p)
    "The workshop covers basic legal issues faced by small businesses in California. You will learn the most common costly legal mistakes that most small business owners make..."
  • Marketing Your Small Business  (Ontario, 9a-11a)
    "Are you looking to increase your bottom line? Attend this workshop and learn how an effective marketing plan can help you to increase your sales and profitability while efficiently managing your marketing dollar."
  • Writing a Winning Business Plan  (Riverside, 3p-5p)
    "Have the best plan for your business! A well-written business plan can help focus your planning efforts and give potential lenders or investors a positive picture of your goals."
  • Microsoft Word: Working with Multi-Page Documents  (Colton, 6p-8p)

Small Business Operations Events on Thursday, May 11:

  • Buy/Sell & Valuation of a Business  (Laguna Woods, 6p-9p)
    "What’s your business worth? You’ll learn how to develop a valuation estimate that is defensible whether you are buying, selling or starting a business."
  • Marketing & Promotion: Finding Your Niche  (Yorba Linda, 6p-8:30p)
    "Learn how to identify and attract customers to your product or service by finding their real wants and needs. Learn how to get your message to those target customers."
  • The Best Customer Service...How To...The Basics  (Fullerton, 6p-9p)
    "If the employers treat their people right - hire the right people, train them right, reward them right, hold them accountable, etc. - then their people will always go the extra distance for the customer and their employer."
  • Writing a Winning Business Plan  (Riverside, 9a-11a)
    "A "living" business plan is an important management tool of every business owner. Know where you are going! Develop a realistic business plan as your roadmap to growth and financial viability."
  • State Payroll Tax  (Riverside, 9a-1p)
    "Calculate the taxes for a quarter of payroll. You will then complete a DE 88, a DE 9 and a DE 9C As an employer, it is to your advantage to know your obligations and understand the State and Federal payroll reporting requirements."
  • Optimize Your Business & Personal Life  (Palm Desert, 6p-8p)
    "Discover the real power behind defining your life’s purpose, clarifying your values, and adhering to both during any endeavor."
  • Planning for Taxes, Healthcare, and Payroll in a Period of Uncertainty  (Webinars, 10a-11a)
    "With a new landscape in Washington, many rules are unsettled. Nonetheless your business must make budgets, determine staffing, and take other actions now."
  • Power Negotiating Techniques for Licensing Your Products and Services  (Webinar, Noon-1p)
    "Licensing is one of the most potent opportunities you may have to acquire distribution, new markets, and additional revenues for your goods and services."

Small Business Training on Friday, May 12:

  • The Right Start  (Irvine, 12:30p-4p)
    "The Myths and Paradoxes of Entrepreneurship: This initial workshop explores the folklore, stereotypes and paradoxes often associated with entrepreneurial success and entrepreneurs themselves."
  • Safeguard Your Business: Microsoft Security Tools  (Webinar, Noon-1p)
    "Businesses today face a constantly evolving set of potential threats, from data security breaches to downtime from unexpected events. Businesses are asking questions like..."
  • Microsoft Cybersecurity (Webinar, Noon-1p)
    "According to the Wall Street Journal, over 34,000 computer security incidents occur every day, and 62 percent of those incidents involve breaches of small and medium-sized business!"

Source: SBA News bulletin   https://content.govdelivery.com/accounts/USSBA/bulletins/198428a


          Senior UX Designer (Remote)        
Are you looking to transition into the VR industry? This is a great opportunity for a flexible, outside-the-box thinker who wants to come explore the wild west of virtual reality with our design-led team at Osso VR.

We're looking for someone who can independently identify design problems to solve and craft user-centered solutions, as well as conduct their own research and usability studies.

We value diversity of thought and perspective. Women, PoC, LGBTQ+ and other underrepresented groups in tech are strongly encouraged to apply.

You are:

- User-centered first and foremost, willing to diplomatically make a case for user needs and goals
- Familiar with the landscape of VR hardware/software
- Able to show us a portfolio that demonstrates your seasoned design process
- Educated, whether that means you have a Bachelor's degree or equivalent life experience (School of Hard Knocks)
- Proficient with Adobe Illustrator, Sketch, Principle, or other 2D prototyping tools
- Able to design and conduct user research at least somewhat independently, analyze the results and make recommendations (you will have help with this)
- Able to iterate quickly on tight deadlines and trade perfection for speed
- Autonomous and self-directed with empathetic communication skills
- OK with regularly looking at graphic depictions of surgery, whether it’s pictures, videos, or in VR

The products we make focus on direct hands-on interactions with physically realistic virtual environments. Our team is constantly sharing resources and talking about the latest VR design practices. We welcome you to share your skills and wisdom with the rest of the team, and learn from us in return.

Nice to haves:

- At least one VR project in your portfolio
- Familiarity with Unity and the concepts of 3D geometry, collision physics, meshes, textures etc
- Experience with VR prototyping, 3D modeling for VR, hand-tracked interactions etc
- Experience designing for 2D data visualization and/or analytics dashboards
- A background in healthcare, medical illustration, or any other relevant medical field

We want you to own the process of translating product strategy & user requirements into full-fledged user experiences. You'll be designing experiences across both 2D and 3D interfaces as user needs require it.

You will be asked to document and present your designs using whatever communicates them best: flowcharts, sketches, storyboards, gifs, video clips, prototypes, pantomiming over Skype, or maybe even inventing new ways of communicating.

You will not be expected to write code or make 3D models. (Experience in those areas is welcome, but not necessary.)

We'll provide:

- A respectable salary
- Stock options
- Health insurance (80% of your premiums are covered by us)
- Flexible work hours and location
- VR hardware and a VR-capable work computer
- Unlimited vacation/sick time
- An environment that fosters learning and growing your skills
- Career support and mentoring to write about your process and/or speak at industry events so you get credit for your hard work

We are a fully remote team distributed across three countries and five timezones. We hope that you're comfortable working remotely for now, with the option to relocate in the future once we find a city to call home.

Please include your current resume and a link to your portfolio when applying.
          Delegates from World Bank Group Visit IUPUI        
On Wednesday, December 14, delegates from the World Bank Group visited IUPUI as guests of The Polis Center and the Office of the Vice Chancellor of Research (OVCR). The group included officers and specialists from the South Asia Disaster Risk Management team and the Global Facility for Disaster Reduction and Recovery (GFDRR).

The World Bank team came to IUPUI specifically to learn more about The Polis Center’s risk analysis and disaster mitigation planning. It was especially interested in how Polis engages communities in mitigation planning, as well as in the center’s experience with innovative technology approaches and the federally sponsored Hazus-MH modeling software for natural disasters. The agenda featured various experts with the State of Indiana, Purdue University, and Indiana University discussing mitigation successes in Indiana, data resources, global food security projects, flood risk modeling, and complex data visualization.

"We are pleased to welcome this World Bank team to IUPUI," said Dr. David Bodenhamer, Polis Center executive director. "The World Bank's goals to help reduce poverty and support development with financial aid, policy advice, and analytical and technical expertise are important ones. The Polis Center is deeply committed to the development of partnerships and the practical use of advanced technologies to strengthen communities, so sharing our know-how and that of our colleagues with the World Bank is a natural extension of this effort. We would like nothing more than for our capabilities to contribute to a meaningful improvement in the quality of life for populations in developing countries."

"We are delighted to learn about The Polis Center’s wide-ranging activities on disaster risk management and mitigation planning and for the insightful discussions we had with a range of domain experts during our visit," said Deepak Singh, who led the visiting World Bank delegation. "I thank the center for making this visit possible and for the warm hospitality. We look forward to staying in touch and exploring ideas on how the latest research on risk management can be applied to developing country contexts."

The Polis Center at IUPUI, a unit of the IU School of Liberal Arts, was established in 1989 to link two types of expertise, academic and practical, for the benefit of communities in Indiana and elsewhere. It specializes in providing place-based research tools to transform data into usable information for more effective local decision-making. This includes the SAVI community information system (http://www.savi.org/), which helps nonprofits, academia, government, and health organizations assess trends and conditions, identify service gaps, and better target areas of concern based on social, economic, and other demographic realities. The Polis Center is deeply committed to the development of partnerships and the practical use of advanced technologies. Its collaborative and entrepreneurial nature results in a unique and diverse range of applied projects to strengthen communities in Indiana and beyond.

IUPUI’s Office of the Vice Chancellor for Research provides services to and works directly with faculty, research staff, and students in an effort to foster excellence in research and creative activities.
OVCR provides consultation, proposal development services, and internal seed funding programs, and establishes strategic research initiatives that address emerging local, national, and global challenges and opportunities. For more information on the World Bank, visit: http://www.worldbank.org/en/topic/disasterriskmanagement, http://www.worldbank.org/en/topic/disasterriskmanagement/x/sar, and https://www.gfdrr.org
          Dynamix: dynamic visualization by automatic selection of informative tracks from hundreds of genomic datasets        
Abstract
Motivation: Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers, and the development of new visualization methods is needed to improve the usability and user experience of genome browsers.
Results: We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks.
Availability and Implementation: Dynamix is hosted at http://furlonglab.embl.de/Dynamix
Contact: charles.girardot@embl.de
Supplementary information: Supplementary data are available at Bioinformatics online.
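
The core idea of showing only tracks that carry signal in the viewed region can be illustrated with a short sketch. This is not the plugin's code (Dynamix is a JBrowse/JavaScript plugin); it is a hypothetical Python illustration of the selection step, with made-up track names, intervals, and threshold.

```python
# Conceptual sketch (not the plugin's actual implementation): keep only tracks
# that have scored signal overlapping the region currently shown in the browser.
from typing import Dict, List, Tuple

Interval = Tuple[int, int, float]  # (start, end, score)

def tracks_with_signal(
    tracks: Dict[str, List[Interval]],
    region: Tuple[int, int],
    min_score: float = 1.0,
) -> List[str]:
    """Return names of tracks with at least one interval overlapping
    the region whose score is >= min_score."""
    start, end = region
    selected = []
    for name, intervals in tracks.items():
        if any(s < end and e > start and score >= min_score
               for s, e, score in intervals):
            selected.append(name)
    return selected

# Toy data: two tracks, only one has a peak inside the viewed region
tracks = {
    "ChIP_factorA": [(1_000, 1_500, 8.2)],
    "ChIP_factorB": [(50_000, 50_400, 6.9)],
}
print(tracks_with_signal(tracks, region=(900, 2_000)))  # ['ChIP_factorA']
```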

          Beautiful Evidence by Edward Tufte        
This week I had the pleasure of seeing Edward Tufte speak at a one day course in Los Angeles. Heralded by The New York Times as “the Leonardo da Vinci of data,” Tufte is an innovator in information design and data visualization, specializing in transparency and clarity. The emphasis is on disseminating unbiased, accurate content […]
          Oracle Delivers Next-Generation Cloud Applications         
Press Release

Oracle Delivers Next-Generation Cloud Applications

Innovations across Oracle Cloud Applications extend industry’s broadest, deepest, and fastest growing suite of cloud applications

Redwood Shores, Calif.—Aug 2, 2017


To help organizations around the world grow faster, differentiate from competitors, and better serve their customers, Oracle today announced significant new capabilities and enhancements to Oracle Cloud Applications. With the introduction of Oracle Cloud Applications Release 13, Oracle is further extending the industry’s broadest, deepest, and fastest growing suite of cloud applications. Innovations in the new release enhance the user experience and empower business users across the organization including customer experience, finance, HR, and supply chain professionals.

“We are committed to helping organizations of all sizes transform critical business functions to drive their growth and stay competitive,” said Steve Miranda, executive vice president of applications development, Oracle. “With the latest release of Oracle Cloud Applications, we are introducing hundreds of new innovations. The latest updates include major enhancements to our supply chain management suite that will help customers create intelligent, connected, and customer-centric supply chains. In addition, we are introducing a brand new solution that enriches the customer experience by bridging the gap between sales and customer service. The new release also includes further advancements to the user experience and customer-driven changes for human resources and finance.”

Oracle Cloud Applications provide a complete and fully integrated suite of applications that allow organizations to increase business agility and reduce costs. The latest release includes new capabilities and enhancements across Oracle Supply Chain Management (SCM) Cloud, Oracle Customer Experience (CX) Cloud Suite, Oracle Enterprise Resource Planning (ERP) Cloud and Oracle Human Capital Management (HCM) Cloud. In addition, Oracle has enhanced the user experience across Oracle Cloud Applications to help customers personalize their experience and further improve productivity, insight, and collaboration.

Oracle SCM Cloud

Oracle SCM Cloud delivers the end-to-end visibility, insights, and capabilities that organizations need to create intelligent supply chains. Oracle SCM Cloud Release 13 extends the most comprehensive SCM suite in the cloud with the introduction of more than 200 major features and six new products that cover Sales and Operation Planning, Demand Management, Supply Planning, Collaboration, Quality Management and Maintenance. The new innovations help organizations transform their operating models to meet rapidly changing business demands by evolving from traditional supply chain systems to connected, comprehensive, agile, and customer-oriented supply chain management capabilities.

Oracle CX Cloud Suite

Oracle CX Cloud Suite empowers organizations to take a smarter approach to customer experience management and business transformation initiatives by providing a trusted business platform that connects customer data, experiences, and outcomes. Oracle CX Cloud Suite Release 13 introduces new innovations to Oracle Sales Cloud, which include enhanced mobile and data visualization capabilities, as well as a range of new capabilities that increase sales rep productivity. In addition, Oracle has extended Oracle CX Cloud Suite with the introduction of Oracle Engagement Cloud. The new solution combines sales and service capabilities to enable organizations to increase customer satisfaction, loyalty, and up-sell opportunities.

Oracle ERP Cloud

Oracle ERP Cloud is the industry’s leading and most complete, modern, and secure financial platform delivered seamlessly through the Oracle Cloud. Oracle ERP Cloud helps organizations drive innovation and business transformation by increasing business agility, lowering costs, and reducing IT complexity. Oracle ERP Cloud Release 13 builds upon the industry’s broadest and most integrated public cloud. Extended depth and breadth across Financials, Procurement, and Project Portfolio Management (PPM) help organizations accelerate the pace of innovation via deeper domain functionality including Dynamic Discounting and Multi-Funding. In addition, industry coverage for higher education, financial services, and manufacturing, as well as expanded country localizations for India and Brazil, enable organizations of all sizes, and from different industries and geographies, to quickly and easily take advantage of the new release.

Oracle HCM Cloud

Oracle HCM Cloud provides organizations with modern HR technologies that enable collaboration, optimize talent management, provide complete workforce insights, increase operational efficiency, and make it easy for everyone to connect on any device. Oracle HCM Cloud Release 13 extends Oracle’s commitment to customer success with 80 percent of enhancements being customer driven. Release 13 enhances Oracle’s complete, end-to-end solution for all HCM processes by introducing expanded user experience personalization and branding and additional Tier 1 localization support. It also includes improved capabilities to support the needs of customers with unionized workforces, such as retail and healthcare with flexible work models.


Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

          Senior Business Analyst Consultant munkakörbe keresünk munkatársat. | Feladatok: Design and pre...        
We are looking for a colleague for a Senior Business Analyst Consultant position. | Responsibilities: Design and prepare customer reports, dashboards and analyses; create prototypes; turn models into results • Analyse, validate and visualize business data; create value-added solutions • Analyse business questions and deliver business answers; translate technical data findings for non-technical teams and vice versa • Become a data visualization expert to empower clients to understand their data • Provide insights and recommendations for clients based on the analysis • Daily contact with customers, filling the role of a consultant • Ability to bring several client needs together and recommend solutions and ideas ahead of client interest • Strong understanding of client organizations • Provide mentoring within the team and actively participate in internal knowledge-sharing forums; conduct trainings. | What we offer: Permanent job in a modern office in the company of many skilled associates • Regular training and development opportunities • Use of highly valued applications • Bilingual work environment • Competitive salary and benefits • Recreation area • Relocation opportunities • International projects. | Requirements: Bachelor's/Master's Degree in Information Technology or Economics with a strong IT attitude • Work experience in an international enterprise-size/servicing company • 5+ years of relevant professional experience in reporting, data analysis, and data manipulation techniques • Excellent communication skills, ability to work closely with clients • Fluent English is a must, both verbal and written • Self-motivated personality and excellent problem-solving skills, can-do attitude • Excellent time management skills, ability to work effectively with strict deadlines • Strong understanding of the latest BI trends • Experience and willingness to manage and mentor junior colleagues | More info and application here: www.profession.hu/allas/1050572
          Comment on How Nonprofits Can Earn News Coverage Using Data Visualization by rozliczenie pit przez internet        
I do believe all of the concepts you have presented on your post. They are very convincing and will definitely work. Still, the posts are too brief for newbies. May just you please prolong them a bit from next time? Thanks for the post.
          Senior data management expert / team leader munkakörbe keresünk munkatársat. | Feladatok: Coord...        
We are looking for a colleague for a Senior Data Management Expert / Team Leader position. | Responsibilities: Coordinate the business development of the data management practice at PwC Hungary • Develop new business opportunities and value-enhancing data management / big data solutions • Contribute to client portfolio planning and cooperate with other business development initiatives • Management of data management team staffing, training and project allocation • Create the frameworks for value-enhancing data management and data intelligence solutions for PwC clientele and internal projects • Oversee the data management activities and provide guidance translating business objectives into analytic procedures • Dissemination of data management procedures and techniques, providing training materials and holding trainings for external and internal parties • Professional review of analyses, reports, and deliverables provided by junior colleagues. | What we offer: A challenging work environment in which you will face diverse and unique problems and situations • A professional, team-oriented and dynamic workplace • Up-to-date technologies and methodologies of our international network • Professional development and training opportunities • A competitive salary and benefits package (cafeteria, laptop, mobile phone, and other benefits). | Requirements: 4-5 years of experience with data analytics and/or business intelligence • University/College degree is a must • In-depth knowledge of data visualization solutions (Tableau, PowerBI preferred) • Experience with audit-focused data analytics tools (ACL, IDEA) would be preferred • Familiarity and hands-on experience with SQL • Fluency in programming languages (preferably Python and R) would be an advantage • Strong English language skills (additional language skills would be an advantage) • Significant experience with leading ERP solutions (SAP, Dynamics AX, Oracle) • Experience in leading projects and managing group dynamics • A proactive approach to business development and building client relationships • A strong desire for continuous improvement and client-facing responsibilities • Demonstrated ability to think abstractly, solve problems and deal with ambiguity. | More info and application here: www.profession.hu/allas/1055995
          Breaking Defaults        
Visualizing data is relatively easy to do these days considering the wide variety of tools at our disposal. Visual.ly, a great data viz blog, names Excel, Photoshop, Illustrator, Tableau, Google Public Data, Many Data, and Stat Silk as just a few tools to visualize data. While some programs require a steeper learning curve to efficiently use, any user with a data set and some basic knowledge of Excel can produce a wide range of visualization types including bar graphs, pie charts, area charts, and scatterplots.

Guest author to The Why Axis, Jon Schwabish, takes a look at a Bureau of Labor Statistics (BLS) visualization, done in Excel, for job openings in November 2012. While the visualization passes for use of appropriate chart type, it fails in its details. Because the BLS utilized default settings from Excel, the true story of the data is lost. Schwabish takes us through minor changes, all done in Excel, to create a visualization that more effectively tells the story of job openings in November 2012.

Schwabish explains the things he finds appealing about the visualization: sourcing, a left-aligned title, and values measured in thousands to name a few; however, the default coloring, automatic spacing, and ordering of the bars and industries are a few things to be improved upon. His first of a series of changes is a quick sort on industry by descending values which helps to give more order to the graph. A change in colors helps to make the most recent data value stand out against the previous months while creating a more cohesive visualization. Schwabish goes on to show more suggested changes to better the storytelling of Job Openings in November 2012.

The take away here is not exclusive to Excel, but all data visualization programs. Simple design elements including descriptive text, color, font, and order are important to telling a story and, often times, the default settings for telling that story are not optimal. Read the full blog post here and make sure to examine the transition from one version of the visualization to the next.
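
The two fixes Schwabish walks through (sorting the categories by value and muting everything except the series you want noticed) translate directly to other charting tools as well. Here is a minimal matplotlib sketch of the same sort-and-highlight idea; the numbers are invented for illustration and are not the actual BLS figures:

    # A minimal sketch of the two fixes discussed above: sort the categories by
    # value, then colour only the bar you want the reader to focus on.
    # The numbers below are invented for illustration; they are not BLS data.
    import matplotlib.pyplot as plt

    openings = {             # industry -> job openings (thousands), hypothetical
        "Construction": 90,
        "Manufacturing": 210,
        "Retail trade": 380,
        "Health care": 620,
        "Government": 300,
    }

    # Fix 1: sort descending so the ranking is obvious at a glance.
    items = sorted(openings.items(), key=lambda kv: kv[1], reverse=True)
    labels, values = zip(*items)

    # Fix 2: grey out everything except the largest value to direct attention.
    colors = ["#c44e52" if v == max(values) else "#bbbbbb" for v in values]

    fig, ax = plt.subplots()
    ax.barh(labels, values, color=colors)
    ax.invert_yaxis()                      # largest bar on top
    ax.set_xlabel("Job openings (thousands)")
    ax.set_title("Job openings by industry (hypothetical data)")
    plt.tight_layout()
    plt.show()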
          BI on your terms with SQL Server 2016        

For the last few years Microsoft's strategy was all about cloud first (or cloud only?), releasing new BI products and updates to existing products in the cloud at a high pace, with almost no investment in on-premises BI. In 2015 Microsoft seems to be changing course: they now aim more at enabling hybrid scenarios, investing a lot in both cloud (Power BI/Azure) and on-premises with SQL Server 2016.
Microsoft’s message regarding BI for 2015/2016 is: “BI on your terms”.

BI on your terms means leveraging up-to-date possibilities for one or a combination (hybrid) of the following architectures:

  • Cloud with Azure and Power BI

  • On-Premises with SQL Server 2016

  • Server driven or Self-Service

To be able to offer quality hybrid architectures, Microsoft is investing a lot in the on-premises BI suite with SQL Server 2016, and they have announced that they will keep investing in it in the coming years. So no longer cloud first as we have seen in previous years, but more focus on hybrid possibilities, or, if you prefer, on-premises only.

For the first time in many years an exciting version of SQL Server is coming in terms of BI. The main topics are:

  • Hybrid BI (Cloud/On-Premises)

  • Modern Reports

  • Enhanced Analysis

  • Mobile BI


Below is an overview of the new BI related features per SQL Server 2016 service or product. As the length of this list shows, SQL Server 2016 will be a massive BI version!!

Analysis Services Tabular

  • Enhanced modeling capabilities in the semantic layer

    • Many-to-many relationships

    • Bi-directional cross filtering. This means you can not only filter on the one side of a one-to-many relationship in your tabular model, but also on the many side. For example, two connected tables, Sales → Product (a rough illustration follows at the end of this list):

      • Product: product, product category

      • Sales: sales date, connection to product table

        Now select products sold filtering on sales date (many side) while also filtering on product category (one side). This is not possible in today’s version of SSAS Tabular.

  • Time intelligence

    • Date/time columns are automatically converted to rich date/time tables starting from the column’s MIN date till the MAX date found

  • New DAX functions

    • A lot of new functions that at the moment require quite complex formulas like present time, date difference, percentile, product, geomean, median, etc.

  • Performance improvements

    • For end users

      • Query engine optimized

    • For developers

      • Metadata operations; modeling related operations are much faster

    • For data processing

      • Parallel partition processing

  • Expose on-premises tabular models in the cloud (hybrid) → Power BI feature, possible already today with SQL Server 2012.
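
To make the bi-directional cross filtering example above more concrete, here is a rough pandas sketch of filtering across a one-to-many Sales → Product relationship in both directions. This is not DAX and not how SSAS Tabular evaluates the model internally; all table and column names are invented for illustration:

    # Rough illustration of bi-directional cross filtering with two tiny tables.
    # Table/column names are invented; this only mimics the filtering behaviour,
    # it is not how SSAS Tabular implements it internally.
    import pandas as pd

    product = pd.DataFrame({
        "product_id": [1, 2, 3],
        "product": ["Bike", "Helmet", "Lock"],
        "category": ["Bikes", "Accessories", "Accessories"],
    })

    sales = pd.DataFrame({
        "sale_id": [10, 11, 12, 13],
        "product_id": [1, 2, 2, 3],
        "sale_date": pd.to_datetime(["2015-01-05", "2015-01-06",
                                     "2015-02-01", "2015-02-02"]),
    })

    # Filter on the many side (Sales): only sales from January ...
    jan_sales = sales[sales["sale_date"].dt.month == 1]

    # ... while also filtering on the one side (Product): only "Accessories".
    accessories = product[product["category"] == "Accessories"]

    # Products actually sold under both filters, i.e. the filter flows from
    # Sales back to Product as well as from Product to Sales.
    sold = accessories.merge(jan_sales, on="product_id", how="inner")
    print(sold[["product", "category", "sale_date"]])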


Analysis Services Dimensional

  • Netezza as a Data Source (Netezza Data Warehouse | IBM - NDM Technologies)

  • Performance improvements

    • Unnatural hierarchies

    • Distinct counts

    • Other performance improvements in areas where multidimensional is not performant at the moment

  • DBCC (Database Consistency Checker) support. Checks the logical and physical integrity of objects in the specified database.

  • Expose on-premises multidimensional cubes in the cloud with Power BI (hybrid)


SQL Server Database Engine

  • Integration of R analytical engine, predictive analytic capabilities via T-SQL queries

  • PolyBase available without the need for PDW; makes it possible to query both structured relational SQL data and unstructured Hadoop data through T-SQL statements

  • Data encryption for stored data and data in motion

  • Row-level security

  • Updates to the in-memory OLTP engine, for example updateable in-memory nonclustered columnstore indexes

  • Parsing and storing native JSON data (a small query sketch follows at the end of this list)

  • XEvents-based monitoring in Management Studio
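
As a small illustration of the native JSON support listed above, the sketch below queries JSON stored in an NVARCHAR column with the JSON_VALUE and OPENJSON functions introduced in SQL Server 2016, called from Python via pyodbc. The table, column and connection details are invented placeholders; adjust them to your own environment:

    # Minimal sketch of SQL Server 2016 JSON functions called from Python via
    # pyodbc. The table "dbo.Orders" with an NVARCHAR(MAX) column "payload" is a
    # made-up example; the connection string is a placeholder.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=localhost;DATABASE=DemoDb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # Extract a scalar value from the JSON payload of each row.
    cursor.execute("""
        SELECT JSON_VALUE(payload, '$.customer.name') AS customer_name
        FROM dbo.Orders
        WHERE ISJSON(payload) = 1
    """)
    for row in cursor.fetchall():
        print(row.customer_name)

    # Shred a JSON array into a relational rowset with OPENJSON.
    cursor.execute("""
        SELECT j.[key] AS item_index, j.value AS item
        FROM dbo.Orders
        CROSS APPLY OPENJSON(payload, '$.items') AS j
    """)
    print(cursor.fetchall())
    conn.close()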


Reporting Services

  • New look and feel and possibility to apply themes and branding using CSS

  • New visualizations, chart types like tree maps and sun bursts

  • Improved flexible parameter panel with support for:

    • Autocomplete

    • Search

    • Hierarchical tree display

  • Runs in all modern browsers, on both desktops and tablets (any device)

  • Integration of R analytical engine

  • Power Query as a data source

  • Pin on-premises SSRS reports to Power BI Dashboards (hybrid)


Integration Services

  • High Availability support

  • Power Query integration

  • Azure Data Factory integration (hybrid)

    • Execute on-premises SSIS packages from Azure Data Factory

    • Azure Data Factory data flow task

    • Azure storage connector

    • Azure commandlets

  • OData 4.0 support

  • Hadoop File System (HDFS) support

  • JSON support

  • New Oracle/Teradata connector (4.0)

  • Incremental deployment options

  • Custom logging levels

  • SSIS package templates to reuse ETL code


Mobile BI

  • In the cloud with Power BI

    • Power BI App for Windows Phone (coming soon) and iOS

  • On-premises with Datazen Server

    • Now available for free for SQL Enterprise Edition customers (2008 or later)

    • All major platforms: Windows Phone, Android, iOS

    • Beautiful interface and data visualizations

    • Optimizable for Phone, Tablet and Laptop


SharePoint vNext integration

  • Edit Mode of PowerPivot Excel workbooks in browser

  • Support for Excel vNext (Office 2016) DAX functions


Master Data Services

  • Improved performance for large models

  • Row-level compression per entity

  • Improved user interface

  • Configurable retention settings

  • Enhanced security possibilities for read, write, delete and create operations and support for multiple system administrators with specific permissions

  • Excel Add-in is 15 times faster and is updated to support bulk entity based staging operation


Visual Studio

  • Database and BI project types merged into one Visual Studio

  • New scripting language for tabular models. Currently tabular models are wrapped into multidimensional constructs, and when you deploy, they are reverse-engineered back into the tabular model. The new native language for tabular will be easy to understand, modify and deploy.

  • SSIS designer supports previous versions of SQL Server


Of course there is still also a lot of exciting news coming from the cloud side of Microsoft BI; for example, the Azure Data Lake has been announced, following the principles of my blogpost about the relational data lake. You can expect a post about the Azure Data Lake on this blog soon!


P.S. Don’t forget to suggest and vote for feature requests for SQL Server yourself at:  http://aka.ms/SqlBiUserVoice

 


          Digital Humanities Berkeley Summer Institute, Aug 14-18        
Don't miss out on this annual series of digital humanities discussions and workshops!

In the spirit of encouraging "thoughtful application of digital tools and methodologies to humanistic inquiry," Berkeley's very own Digital Humanities Summer Institute invites you to participate in a host of workshops and open-to-the-public conversations over the course of the week.

The schedule of events is heavily interdisciplinary, offering events that run the gamut from workshops on geospatial data visualization, to discussions on using digital tools to examine "whiteness" in American novels.

The amateur Digital Humanist need not be excluded; the summer program offers introductory workshops.

Check out the schedule of events and save your seat through the DHBSI website! http://digitalhumanities.berkeley.edu/digital-humanities-berkeley-summer-institute-2017-workshops-august-14th-18th
          130 RR Data Visualization with Aja Hammerly        
Aja Hammerly talks to the Rogues about distilling data into a graphical representation that communicates the meaning and message of your data.
          A test case for phylogenetic methods and stemmatics: the Divine Comedy        

In a previous post I gave an outline of stemmatics, and briefly touched on the adoption and advantages of phylogenetic methods for textual criticism (On stemmatics and phylogenetic methods). Here I present the results of an empirical investigation I have been conducting, in which such methods are used to study some philological dilemmas of a cornerstone work in textual criticism, Dante Alighieri's Divine Comedy. I am reproducing parts of the text and the results of a paper still under review; the NEXUS file for this research is available on GitHub.


Before describing the analysis, I discuss the work and its tradition, as well as some of the open questions concerning its textual criticism. This should not only allow the main audience of this blog to understand (and perhaps question) my work, but it is also a way to familiarize you with the kind of research conducted in stemmatics. After all, the first step is the recensio, a deep review of all information that can be gathered about a work.

The Divine Comedy

The Divine Comedy is an Italian medieval poem, and one of the most successful and influential medieval works. It is written in a rigid structure that, when compared to other works, guaranteed it a certain resistance to copy errors, as most changes would be immediately evident. It is composed of three canticas (Inferno, Purgatory, and Paradise); the first of its 100 cantos were written in 1306-07, and the work was completed not long before the death of the author in 1321. Written mostly during Dante's exile from his home city, Florence (Tuscany), like many works of the time it was published as the author wrote it, and not only upon completion. In fact, it is even possible, while not proven, that the author changed some cantos and published revisions, thus being himself the source of unresolvable differences.

No original manuscript has survived, but scholarship has traced the development of the tradition from copies and historical research. The poem is one of the most copied works of the Middle Ages, with more than 600 known complete copies, besides 200 partial and fragmentary witnesses. For comparison, there are around 80 copies of Chaucer's Canterbury Tales, which is itself a successful work by medieval standards.

Commercial enterprises soon developed to meet the market demand created by its success. In terms of geographical diffusion, quantitative data suggests that, before the Black Death that ravaged the city of Florence in 1348, scribal activity was more intense in Tuscany than in Northern Italy, where the author had died. Among the hypotheses for its textual evolution, the results of my investigation support the widespread hypothesis that Dante published his work with Florentine orthography in Northern Italy. That is, the first copies adopted Northern orthographic standards, which would then revert to Tuscan customs, with occasional misinterpretations, when the work found its way back to Florence. These essentials of the transmission must be considered when curating a critical edition, as the less numerous Northern manuscripts, albeit with an adapted orthography, can in general be assumed to be closer to the archetype (if there ever was one to speak of) than Florentine ones.

The tradition is characterized by intentional contamination, as the work soon became a focus of politics and grammar prescriptivism. Errors and contamination have already been demonstrated in the earliest securely dated manuscript, the Landiano of 1336 (cf. Shaw, 2011), and can be already identified in the first commentaries dating from the 1320s (such as in the one by Jacopo Alighieri, the author's son).

Critical studies

Here are some details about previous studies. I have included considerable stemmatic information, but I also include a biological analogy to help make it accessible to non-experts.

The first critical editions date from the 19th century, but a stemmatic approach would only be advanced at the end of that century, by Michele Barbi. Facing the problem of applying Lachmann's method to a long text with a massive tradition, in 1891 Barbi proposed his list of around 400 loci (samples of the text), inviting scholars to contribute the readings in the manuscripts they had access to. His project, which intended to establish a complete genealogy without the need for a full collatio, had disappointing results, with only a handful of responses. Mario Casella would later (1921) conduct the first formal stemmatic study on the poem, grouping some older manuscripts in two families, α and β, of unequal number of witnesses but equal value for the emendatio. His two families are not rooted at a higher level, but he observed that they share errors supporting the hypothesis of a common ancestor, likely copied by a Northern scribe.

Casella's stemma, reproduced from Shaw (2011).

Forty years later, Giorgio Petrocchi proposed to overcome the large stemma by employing only witnesses dating from before the editorial activity of Giovanni Boccaccio, as his alterations and influence were considered to be too pervasive. Petrocchi defended a cut-off date of 1355 as being necessary for a stemmatic approach that would otherwise have been impossible, given the level of contamination of later copies. The restriction in the number of witnesses was contrasted by his expansion of the collatio to the entire text, criticizing Barbi's loci as subjective selections for which there was no proof of sufficiency.

Making use of analogies with biology, we may say that Barbi proposed to establish a tree from a reduced number of "proteins" for all possible "taxa". Casella considered this to be impracticable and, selecting a few representative "fossils", built a tree from a large number of phenotypic characteristics. Finally, Petrocchi produced a network while considering the entire "genome" for all "fossils" dated from before an event that, while well-supported in theory (we could compare its effects to a profound climate change), was nonetheless arbitrary.

Petrocchi's stemma, reproduced from Shaw (2011).

Questions about Petrocchi's methodology and assumptions were soon raised, particularly regarding the proclaimed influence of Boccaccio, without quantitative proofs either that his editions were as influential as asserted or that all later witnesses were superfluous for stemmatics. Later research focused on questioning his stemma: for example, the absence of consensus about the relationship between the Ash and Ham manuscripts, the supposedly weak demonstration of the polytomy of Mad, Rb, and Urb (the "Northern manuscripts"), and the dating of Gv (likely copied fifty to a hundred years after Petrocchi's assumption). Evidence was presented that Co, a key manuscript in his stemma, could not be an ancestor of Lau (its copyist was still active in the 15th century), and that Ga contained disjunctive errors not found in its supposed descendants. Abusing the biological analogy once more, the dating of his "fossils" was in some cases plainly wrong.

Federico Sanguineti presented an alternative stemma in 2001, arguing that a rigorous application of stemmatics would evidence errors in Petrocchi. To that end, he decided to resurrect Barbi's loci and trace the first complete genealogy, without arbitrary and a priori decisions about the usefulness of the textual witnesses. Sanguineti defended the suggestion that, after this proper recensio, a small number of manuscripts (which he eventually set to seven) would be sufficient for emendation. His stemma, described as "optimistic in its elegance and minimalism" (Shaw 2011), resulted in a critical edition that heavily relied on a single manuscript, Urb, the only witness of his β family (as Rb was displaced from the proximity it had in Petrocchi's stemma, and Mad was excluded from the analysis). Keeping with the biological analogy, he proposed building a tree from an extremely reduced number of "proteins", but for all "taxa". In the end, however, the reduced number of "proteins" was considered only for seven "taxa", selected mostly due to their age.

Sanguineti's stemma, reproduced from Shaw (2011).

The edition of Sanguineti was attacked by critics, who confronted the limited number of manuscripts used in the emendatio, the position of Rb, the high value attributed to LauSC, and the unparalleled importance of Urb, all resulting in an unexpected Northern coloring to the language of a Florentine writer. Regarding his methodology, reviewers pointed out that stemmatic principles had not been followed strictly, as the elimination was not restricted to descripti, but extended to branches that were considered to be too contaminated.

The digital edition of Prue Shaw (2011) was developed as a project for phylogenetic testing of Sanguineti's assumptions. Her edition includes complete manuscript transcriptions, and the transcriptions include all of the layers of revision of each manuscript (original readings and corrections by later hands), and are complemented by high-quality reproductions of the manuscripts. After testing the validity of Sanguineti's method and stemma, Shaw concluded that his claims do not "stand up to close scrutiny", and that the entire edition is compromised, because Rb "is shown unequivocally to be a collaterale of Urb, and not a member of α as [Sanguineti] maintains".

Applying phylogenetic methods

With the goal of following and, to a large part, replicating Shaw (2011), I have analyzed signals of phylogenetic proximity for validating stemmatic hypotheses, produced both a computer-generated and a computer-assisted phylogeny (equivalent to a stemma), and evaluated the performance of such phylogenies with methods of ancestral state reconstruction.

I wanted to investigate the proximity of witnesses and the statistical support for the published stemmas. After experiments with rooted graphs, I made a decision to use NeighborNets, in which splits are indicative of observed divergences and edge lengths are proportional to the observed differences. These unrooted split networks were preferable because they facilitated visual investigation, and also provided results for the subsequent steps. These involved exploring the topology and evaluating potential contaminations, guiding the elimination of taxa whose data would be redundant for establishing prior hypotheses on genealogical relationships. Analyses were conducted using all manuscript layers and critical editions, both with and without bootstrapping, thus obtaining results supported in terms of inferred trees as well as of character data.

NeighborNet of the manuscripts and revisions from my data, generated with SplitsTree
(Huson & Bryant 2006)
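
For readers who want to play with the published NEXUS data themselves, here is a deliberately simplified sketch of the distance-based intuition behind such networks: compute pairwise Hamming distances between witnesses over the collated characters and cluster them. The readings below are invented, the clustering is a plain hierarchical one rather than the actual NeighborNet algorithm (which SplitsTree implements), and missing data and contamination are ignored:

    # Simplified sketch of the distance-based intuition behind split networks:
    # pairwise Hamming distances between witnesses, then a quick clustering.
    # Readings are invented; this is NOT the NeighborNet algorithm used above.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Each witness is a vector of coded readings at the collated loci
    # (0, 1, 2 ... = which variant reading was observed at that locus).
    witnesses = {
        "Ash":  [0, 1, 0, 2, 1, 0],
        "Ham":  [0, 1, 0, 2, 1, 1],
        "Mart": [1, 0, 0, 0, 1, 0],
        "Triv": [1, 0, 1, 0, 1, 0],
        "Rb":   [2, 2, 1, 1, 0, 2],
        "Urb":  [2, 2, 1, 1, 0, 2],
    }
    labels = list(witnesses)
    matrix = np.array([witnesses[w] for w in labels])

    # Hamming distance = share of loci where two witnesses disagree.
    dist = pdist(matrix, metric="hamming")
    print(squareform(dist).round(2))

    # Average-linkage clustering as a rough, tree-shaped stand-in for the
    # unrooted split network produced by SplitsTree.
    tree = linkage(dist, method="average")
    dendrogram(tree, labels=labels, no_plot=True)  # no_plot=False draws it (needs matplotlib)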

The analysis confirmed most of the conclusions of Shaw (2011) — there are no doubts about the proximity and distinctiveness of Ash and Ham, with Sanguineti's hypothesis (in which they are collaterals) better supported than Petrocchi's hypothesis (in which the first is an ancestor of the second). The proximity of Mart and Triv was confirmed; but the position of the ancestors postulated by Petrocchi and Sanguineti should be questioned in face of the signals they share with LauSC, perhaps because of contamination. The most important finding, in line with Shaw and in contrast with the fundamental assumption of Sanguineti, is the clear demonstration of the relationship between Rb and Urb.

The relationship analyses allowed the generation of trees for further evaluation. Despite the goal of a full Bayesian tree-inference, I discarded that option because, without a careful and demanding selection of priors, it would yield flawed results. As such, I made the decision to build trees using both stochastic inference and user design (i.e. manually). This postponed more complex topology analyses for future research, but generated the structures needed by the subsequent investigation steps; both trees are included in the datafile.

The second tree (shown below), allowing polytomies and manually constructed by myself, tries to combine the findings of Petrocchi and Sanguineti by resolving their differences with the support of the relationship analyses. Using Petrocchi's edition as a gold standard, and considering only single hypothesis reconstructions, parsimonious ancestral state reconstruction agrees with 9,016 characters (79.9%). When considering multiple hypotheses, instead, reconstructions agree with 10,226 characters (90.7%). Cases of disagreement were manually analyzed and, as expected, most resulted from readings supported by the tradition but refuted by Petrocchi on exegetic grounds.

My proposed tree for the manuscripts selected by Sanguineti,
generated with PhyD3 (Kreft et al., 2017).
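
As a toy illustration of the parsimony-based ancestral state reconstruction used above to score the trees, the sketch below runs the bottom-up pass of Fitch's algorithm for a single character on a tiny, hand-written tree. The witness readings and the topology are invented; a real analysis of course repeats this character by character over the whole collation:

    # Toy bottom-up pass of Fitch parsimony for one character on a small
    # hand-written tree. Readings and topology are invented for illustration.

    def fitch(node, readings):
        """Return (state set at node, number of changes in its subtree)."""
        if isinstance(node, str):                 # leaf: a witness name
            return {readings[node]}, 0
        left, right = node                        # internal node: (left, right)
        l_set, l_cost = fitch(left, readings)
        r_set, r_cost = fitch(right, readings)
        common = l_set & r_set
        if common:                                # intersection: no extra change
            return common, l_cost + r_cost
        return l_set | r_set, l_cost + r_cost + 1 # union: one extra change

    # Invented readings of one locus for four witnesses (a/b = variant readings).
    readings = {"Ash": "a", "Ham": "a", "Rb": "b", "Urb": "b"}

    # Invented topology: ((Ash, Ham), (Rb, Urb))
    tree = (("Ash", "Ham"), ("Rb", "Urb"))

    root_states, changes = fitch(tree, readings)
    print(root_states, changes)   # {'a', 'b'} with 1 change: an ambiguous root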

This tree suggests that, in general, Petrocchi's network is better supported than the tree by Sanguineti, as phylogenetic principles lead us to expect — the first was built considering statistical properties and using all of the available data, while the second relied on many intuitions and hypotheses that were never really tested. In particular, it supports the findings of Shaw and, as such, allows us to indicate the critical edition of Petrocchi as the best one. Even more important, however, it is further evidence of the usefulness of phylogenetic methods, when appropriately used, in stemmatics.

References

Alagherii, Dantis (2001) Comedìa. Edited by Federico Sanguineti. Firenze: Edizioni del Galluzzo.

Alighieri, Dante (1994) La Commedia Secondo L’antica Vulgata: Introduzione. Edited by Giorgio Petrocchi. Opere di Dante Alighieri v. 1. Firenze: Le Lettere.

Huson, Daniel H.; Bryant, David (2006) Application of phylogenetic networks in evolutionary studies. Molecular Biology and Evolution 23: 254–267.

Inglese, Giorgio (2007) Inferno, Revisione del testo e commento. Roma: Carocci.

Kreft, Lukasz; Botzki, Alexander; Coppens, Frederik; Vandepoele, Klaas; Van Bel, Michiel (2017) PhyD3: a Phylogenetic Tree Viewer with Extended PhyloXML Support for Functional Genomics Data Visualization. BioRxiv. Doi: 10.1101/107276.

Leonardi, Anna M.C. (1991) Introduzione. In: La Divina Commedia, by Dante Alighieri. Milano: Arnoldo Mondadori Editore.

Shaw, Prue (2011) Commedia: a Digital Edition. Birmingham: Scholarly Digital Editions.

Trovato, Paolo (2016) Metodologia editoriale per la Commedia di Dante Alighieri. Ferrara. https://www.youtube.com/watch?v=BfKUOAR9PXA. Date of access: March 19, 2017.


          Data Visualization Consultant - Neustar, Inc. - McLean, VA        
Neustar, Inc., complies with applicable state and local laws prohibiting discrimination in employment and provides reasonable accommodation to qualified...
From NeuStar, Inc. - Thu, 10 Aug 2017 16:59:15 GMT - View all McLean, VA jobs
          moar Fusion!        
Fusion Tables is neat. Google describes it as ‘an experimental data visualization web application to gather, visualize, and share larger data tables’. Last year I tried out their API which was mostly based on sending SQL statements to create and manipulate tables. Recently, I looked at Fusion Tables again as part of an imminent upgrade […]
          Journalism 360 grant winners announced        

While advances in immersive storytelling—360 video, virtual reality, augmented reality and drones—have the potential to make journalism richer and more engaging, it can be challenging for journalists to adopt and embrace these new tools. In 2016, the Google News Lab, the John S. and James L. Knight Foundation and the Online News Association created Journalism 360, a coalition of hundreds of journalists from around the world to build new skills required to tell immersive stories. Today, the coalition announced the 11 winners of its first grant challenge, which will fund projects to tackle some of the most critical challenges facing the advancement of immersive journalism: the need for better tools and trainings, the development of best practices, and new use cases.

Here’s a bit more about the winning projects:

  • Aftermath VR app: New Cave Media, led by Alexey Furman in Kyiv, Ukraine.
    An app that applies photogrammetry, which uses photography to measure and map objects, to recreate three-dimensional scenes of news events and narrate what happened through voiceover and archival footage.

  • AI-generated Anonymity in VR Journalism: University of British Columbia, led by Taylor Owen, Kate Hennessy and Steve DiPaol in Vancouver, Canada.
    Helps reporters test whether an emotional connection can be maintained in immersive storytelling formats when a character’s identity is hidden.

  • Community and Ethnic Media Journalism 360: City University of New York, led by Bob Sacha in New York. 
    Makes immersive storytelling more accessible to community media (local broadcasters, public radio and TV, etc.) and ethnic media through hands-on training and access to equipment. The team also aims to produce a “how to” guide for using immersive storytelling to cover local events such as festivals.

  • Dataverses: Information Visualization into VR Storytelling: The Outliers Collective, led by Oscar Marin Miro in Barcelona, Spain.
    Makes it easier to integrate data visualizations into immersive storytelling through a platform that includes virtual reality videos, photos and facts. For example, a user could show a map of the Earth highlighting places without water access, and link each area to a virtual reality video that explores the experience of living there.

  • Facing Bias: The Washington Post, led by Emily Yount in Washington, D.C. 
    Develops a smartphone tool that will use augmented reality to analyze a reader's facial expressions while they view images and statements that may affirm or contradict their beliefs. The aim is to give readers a better understanding of their own biases.

  • Spatial and Head-Locked Stereo Audio for 360 Journalism: NPR, led by Nicholas Michael in Washington, D.C.
    Develops best practices for immersive storytelling audio by producing two virtual reality stories with a particular focus on sound-rich scenes. The project will explore, test and share spatial audio findings from these experiments.


  • Immersive Storytelling from the Ocean Floor:  MIT Future Ocean Lab, led by Allan Adams in Cambridge, Massachusetts.
    Creates a camera and lighting system to produce immersive stories underwater and uncover the hidden experiences that lie beneath the ocean’s surface.


  • Location-Based VR Data Visualization: Arizona State University, Cronkite School of Journalism, led by Retha Hill in Tempe, Arizona.
    Helps journalists and others easily create location-based data visualizations in a virtual reality format. For example, users could explore crime statistics or education data on particular neighborhoods through data overlays on virtual reality footage of these areas.


  • Voxhop by Virtual Collaboration Research Inc.:  Ainsley Sutherland in Cambridge, Massachusetts.
    Makes it easy to craft audio-driven virtual reality stories through a tool that would allow journalists to upload, generate or construct a three-dimensional environment and narrate the scene from multiple perspectives. For example, a reporter could construct a three-dimensional crime scene and include voiceovers detailing accounts of what transpired in the space.


  • Scene VR: Northwestern University Knight Lab, led by Zach Wise in Evanston, Illinois.
    Develops a tool that would make it easier for journalists and others to create virtual reality photo experiences that include interactive navigation, using their smartphone or a camera.


  • The Wall: The Arizona Republic and USA TODAY Network, led by Nicole Carroll in Phoenix, Arizona.
    Uses virtual reality, data and aerial video, and documentary shorts to bring the story of the proposed border wall between the United States and Mexico to life.

Over the course of the next year, the project leads will share their learnings on the Journalism 360 blog. Because this is all about building community, the recipients will also gather at the Online News Association’s annual conference in Washington, D.C. this September to discuss their projects, answer questions and share their progress. In early 2018, they will present their finished projects.

To learn more about Journalism 360, follow the blog or on Twitter. You can learn more about the Google News Lab’s work in immersive journalism on our website.


          Google News Lab powers digital journalism training for Africa        

For journalists, recent advances in digital technology present compelling new opportunities to discover, tell and share stories—like this one from the Mail & Guardian that uses Google My Maps to highlight top water wasters in metro areas during the drought. But learning how to use new digital tools for reporting can be intimidating or even daunting. This is particularly true in Africa, where digital integration in news and storytelling often remains a challenge. Few journalism institutions offer training programs in digital tools, and news organizations often lack the capability to use new digital technologies in their reporting.

That’s why we’re supporting a new initiative that will offer journalists across Africa training in skills like mobile reporting, mapping, data visualization, verification, and fact checking. In partnership with the World Bank and Code For Africa, this project aims to train more than 6,000 journalists by February 2018, in 12 major African cities: Abuja, Cape Town, Casablanca, Dakar, Dar es Salaam, Durban, Freetown, Johannesburg, Kampala, Lagos, Nairobi and Yaounde. By providing the instruction and support to better use available digital tools available, we hope to empower journalists across Africa to produce cutting-edge and compelling reporting.

Training will take place in three formats:

  • Beginning June 15, we’ll hold in-person training sessions on topics ranging  from displaying data with an interactive map to effective reporting with a mobile device. In each city, we’ll conduct trainings in three newsrooms and hold trainings twice a month for the duration of the initiative.
  • In August, a massive open online course (MOOC) will be made freely available online, covering a range of web concepts and practices for digital journalists.
  • We will also hold monthly study groups in collaboration with Hacks/Hackers (a global meetup organization) to provide more focused, in-person instruction. These monthly meetings will take place in Cameroon, Kenya, Morocco, Nigeria, Senegal, Sierra Leone, South Africa, Tanzania, and Uganda.

In 2016, we announced our commitment to train 1 million African youth on digital skills during the year to help them create and find jobs. We hope this new initiative also helps contribute to the continued growth of Africa’s digital economy.

Please visit www.academy.codeforafrica.org to learn more and to register.


          Make your own data gifs with our new tool        

Data visualizations are an essential storytelling tool in journalism, and though they are often intricate, they don’t have to be complex. In fact, with the growth of mobile devices as a primary method of consuming news, data visualizations can be simple images formatted for the device they appear on.

Enter data gifs.

trends_BatmanSuperman.gif

These animations can be used for a variety of sophisticated storytelling approaches among data journalists: one example is Lena Groeger, who has become *the* expert in working with data gifs.

Today we are releasing Data Gif Maker, a tool to help journalists make these visuals, which show share of search interest for two competing topics.

trends_PBJ.gif

Data Gif Maker works like this:

1. Enter two data points

Trend_Step4.png

We typically use the tool to represent competing search interest, but it can show whatever you want it to—polling numbers, sales figures, movie ratings, etc. If you want to show search interest, you can compare two terms in the Google Trends explore tool, which will give you an average number (of search interest over time) for each term. Then input those two numbers in Data Gif Maker.
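
If you prefer to pull those two averages programmatically rather than reading them off the Trends explore page, the community-maintained pytrends library (an unofficial client, not a Google product) can do it; a rough sketch with example terms:

    # Rough sketch: fetch average search interest for two terms with the
    # unofficial pytrends library, then feed the two numbers into Data Gif Maker.
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=360)
    terms = ["peanut butter", "jelly"]          # example terms
    pytrends.build_payload(terms, timeframe="today 12-m")

    interest = pytrends.interest_over_time()    # weekly interest, 0-100 scale
    averages = interest[terms].mean().round()
    print(averages)   # the two numbers to type into Data Gif Maker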

2. Add your text

Trend_Step3.png

3. Choose your colors

Trend_Step2.png

4. Choose your explanatory text

Trend_Step1.png

5. Hit “Launch Comparisons” and “Download as Gif”

Trend_Step5.png

And there you go—you’ve made your first animated data gif. Pro-tip #1: the high resolution download takes longer but it’s better quality for social sharing. Pro-tip #2: Leave the window open on your desktop while it’s creating the gifs as it will do so quicker.

If you want the visual, but not the gif, hit “Launch Comparisons” and it will open in your browser window. Just hit space to advance through the views (it’s set up to show five pieces of data, one after the other).

Find the tool useful? We’d love to see what you do with it. Email us at newslabtrends@google.com.


          Data Journalism Awards 2017: Call for submissions        

With trust in journalism under attack, data journalism has never been more vital. And this year, for the sixth consecutive year, we’re proud to support the 2017 Data Journalism Awards.

But you need to get your skates on: The deadline is fast approaching for the only global awards recognizing work that brings together data, visualization and storytelling to produce some of the most innovative journalism out in the world today.

It’s a part of our commitment to supporting innovative journalism both in Europe and around the world.
Data Journalism Awards SS

Past winners of the $1,801 prizes include the New York Times, Buzzfeed, FiveThirtyEight, Quartz and IndiaSpend. 2017 hopefuls don’t have long: the deadline for this year’s awards is April 7, 2017 at midnight GMT.

And if you’re wondering why the prize is $1,801? That’s because in 1801 William Playfair invented the pie chart.

Aimed at newsrooms and journalists in organizations of all sizes—big and small—the #DJA2016 awards will recognize the best work in key categories, including:

  • Data visualisation of the year

  • Investigation of the year

  • News data app of the year

  • Data journalism website of the year

  • The Chartbeat award for the best use of data in a breaking news story, within first 36 hours

  • Open data award

  • Small newsrooms (one or more winners)

  • Student and young data journalist of the year

  • Best individual portfolio

The competition is organized by the Global Editors Network: a cross-platform community of editors-in-chief and media innovators committed to high-quality journalism, with the support of Google and the Knight Foundation. For Google, the Data Journalism Awards offer another way to foster innovation through partnership with the news industry, in addition to our efforts through the Digital News Initiative and the work of the Google News Lab teams around the world.

Data journalists, editors and publishers are encouraged to submit their work for consideration by joining the GEN community via this form by April 7 at midnight GMT. A jury of peers from the publishing community, including new jury members Esra Doğramacı from Deutsche Welle and Data Journalism China’s Yolanda Ma will choose the winners, which will be announced during a gala dinner at the Global Editors Network Summit in Vienna on June 22.

Good luck!

Simon Rogers is Data Editor at Google’s News Lab and Director of the Data Journalism Awards


          Data Visualization Made Easy with Chartio and Treasure Data        
As the old saying goes, a picture is worth a thousand words. What if you could create a clear, coherent picture with your data? That is the goal of many companies, yet it sometimes proves difficult to achieve. Chartio helps companies quickly generate stunning visualizations and beautiful, easy-to-understand charts. Treasure Data’s Customer Data Platform now […]
          Infinite Skills' "Learning to Visualize Data with D3.js Tutorial" Teaches Essentials of Using Web's Major Graphic Driver        

Software training firm Infinite Skills Inc. releases its "Learning to Visualize Data with D3.js Tutorial," a 3.75 hour course designed for new and experienced developers, particularly those with a basic understanding of JavaScript, that teaches the essentials of using the D3.js library to create interactive data visualizations.

(PRWeb November 04, 2014)

Read the full story at http://www.prweb.com/releases/2014/11/prweb12298877.htm


          Winners of the Big Data Visualization Contest Are Announced at NJIT‘s Innovation Day        

NJIT's first annual Big Data Visualization Contest – a competition that immersed undergraduates in the world of mergers and acquisitions (M&A) and challenged them to use S&P Capital IQ's cutting-edge research, analytics, and data visualization tools to make hypothetical pitches for high-stakes acquisition deals – concluded in a photo finish at Innovation Day this week with the winning team narrowly edging out close competitors.

Tagged: college of computing sciences, school of management, ccs, som, uri, michael ehrlich, marek rusinkiewicz, entrepreneurship, undergraduate research and innovation, innovation day, big data, s&p capital iq, big data visualization contest, lou eccleston, robert coppola, mcgraw hill financial



          S&P Capital IQ and NJIT to Announce Winners of Big Data Visualization Contest on April 11        

S&P Capital IQ, a business unit of McGraw Hill Financial, Inc. (NYSE:MHFI), and New Jersey Institute of Technology (NJIT) are co-sponsoring the first annual Big Data Visualization Contest -- a competition that immerses undergraduates in the high-stakes world of mergers and acquisitions (M&A) by challenging them to pitch hypothetical acquisition targets using cutting-edge research, analytics, and data visualization tools available on S&P Capital IQ's desktop platform.

Tagged: college of computing sciences, school of management, ccs, som, uri, michael ehrlich, marek rusinkiewicz, undergraduate research and innovation, innovation day, entrepreneurship, lou eccleston, big data visualization contest, s&p capital iq, robert coppola



          American panorama: A nation of overlapping diasporas [data visualization]        
[0] Discussion by nimh on 11/05/16 6:17 PM Replies: 0 Views: 518
Tags: History, Immigration, United States, Culture, Migration
Last Post by nimh on 11/05/16 6:17 PM
          Barry M. Lasker Data Science Fellowship        

The Space Telescope Science Institute (STScI) in Baltimore, Maryland, announces the initiation of the Barry M. Lasker Data Science Postdoctoral Fellowship. The Lasker Fellowship is a STScI-funded program designed to provide up to three years of support for outstanding postdoctoral researchers conducting innovative astronomical studies that involve the use or creation of one or more of the following: large astronomical databases, massive data processing, data visualization and discovery tools, or machine-learning algorithms. The first recipient of the fellowship is Dr. Gail Zasowski of the Johns Hopkins University (JHU) in Baltimore, Maryland. The fellowship is named in honor of STScI astronomer Barry M. Lasker (1939-1999).


          Netflix To Show How To Make Data-Driven Movies        

The Data Visualization Summit is bringing some of the biggest names in data and visualization to the Westin St.Francis, discussing the latest trends and developments in the field.

(PRWeb March 16, 2017)

Read the full story at http://www.prweb.com/releases/2017/03/prweb14154148.htm


          Going to Data Visualization School        

With the never-ending expansion of online possibilities for quality learning, I got to the can’t-see-the-woods-for-trees phase quite a while ago and tend to…

The post Going to Data Visualization School appeared first on René Clausen Nielsen.


          (USA-UNITED STATES) Clinical Data Manager II        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables. Takes a lead role with internal and external partners and represents the company at meetings with clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned trial(s). May provide input into CDM standards and process developments. Assumes one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) + Central Monitor (CM) + Or developing expertise and capabilities under supervision as: + Project Data Manager (PDM), as an associate PDM or supporting a local submission (e.g. in Japan or China) + Risk Based Quality Management (RBQM) Business Partner (BP), e.g. as an associate RBQM BP or for a trial of low complexity As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for clinical trials led in-house or using business process outsourcing (BPO) + Key liaison / Data Management lead to establish, align and confirm data management expectations for assigned trial(s), this requires regular interaction with other internal and external partners e.g. TCM, TSTAT, TPROG, TMCP, BPO partners. + Responsible for CDM trial level oversight. Builds effective relationships with CROs/ vendor partners. Review protocols and identifies requirements for proper data capture including electronic Case Report Form design and processing of clinical data ensuring accuracy, consistency and completeness + Oversee the design, creation and UAT Plan and testing of clinical study databases along with development of edit check specifications and manual data listings as required. + Define or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database. 
+ Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder either performs or leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA at investigator meetings. + Ensures real-time inspection readiness of all CDM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + In the role of a Trial Data Manager (TDM) for fully outsourced trials, supervises and instructs the CRO in performing the above TDM tasks and leads trial level oversight, including planned timelines and fulfillment of quality expectations. + Sets expectations for and defines specifications for data transmission with the CRO. Integrates the data from the CRO into the BI clinical trial database. Ensures that SDTM compliant data is available for analyses together with the responsible Project Data Manager (PDM). + In the role of a Central Monitor (CM) for clinical trials + Executes and manages the Risk Based Quality Management (RBQM) processes as described in the monitoring framework, this requires regular interaction with other internal and external functions e.g. clinical monitors, CRAs/site monitors, data managers, biostatistics, site personnel. + Conduct root cause analysis on the risk signals of aggregated site and trial data (pulled from various sources) using risk reports. Identifies and investigates potential risks and trends with subject protection and reliability of trial results and compliance with the investigational plan for impact on site/country/trial activities. + Provides direction to site monitors for additional remote and on-site monitoring activities for risk sites, within the scope of the trial monitoring plan. + Oversees potential issues and findings requiring further review and follow-up and ensures appropriate actions are taken by the trial team members to investigate, resolve and document potential risks identified, including adequate documentation of resolution. + Provides a regular and efficient mechanism of trial communication for the trial team including documentation and leads oversight meetings. 
+ Ensures real-time inspection readiness of responsible RBQM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary, in conjunction with the RBQM BP. + Identifies and communicates lessons learned and best practices at the trial level and with other CMs. Identifies and participates in CM related process, system, and tool improvement initiatives within CDM/BDS. Performs user acceptance testing and supports the development and maintenance of RBQM tools. + In the role of a Project Data Manager (PDM), the function holder performs (selected) PDM tasks under the supervision of an experienced PDM, e. g. as an associate PDM or for a project of low complexity where existing standards, material and documentation can be re-used and built upon. + Accountabilities include the definition, leadership and oversight of data management processes and deliverables for clinical projects (with one project comprising multiple trials in a substance in one indication) such as establishing expectations for CRF-based/external dataset content and structure, definition of project standards (e.g. SDTM, CRF, specifications such as for MQR, data cleaning, data transmission), review and acceptance of project level database elements, programming and validation of the project database (PDB), preparation and creation of CDM deliverables for regulatory submission and support of safety updates. + Alternatively, the function holder may be responsible for the specific CDM deliverables and support for a local regulatory submission (e.g. in Japan or China). + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs (selected) RBQM BP tasks under the supervision of an experienced RBQM BP, e.g. as an associate RBQM BP or for a trial of low complexity. + Takes a leadership role with the project / trial team to establish, align and confirm RBQM expectations for assigned trial(s). The function holder performs (selected) RBQM BP tasks in the definition, leadership and oversight of Risk Based Quality Management (RBQM) processes and deliverables for one or multiple clinical trials such as guiding the project and trial team through the process of identifying and assessing risks at the beginning of a trial, initiating and facilitating RBQM risk review and assessment meetings, facilitating the implementation of required RBQM documentation and tools, authoring the quality report and assisting with any risk related questions that arise. **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=3 years. + No leadership experience required. + **Technical / Analysis / Medicine** : + Any of the following skills: data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics, identify and communicate trends. + Experiences with Electronic Data Capture (EDC) processes + Knowledge in and experience with any of the following: Data review in JReview, Risk Management Tools, Statistical Analysis Software (SAS) programming + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures. 
+ **Planning / Organization:** + Excellent organizational skills, problem solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow up activities with local and global trial/project teams. Ensures work is completed effectively. + **Communication** : + Strong communication skills with the ability to simply summarize complex information. Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to pro-actively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the clinical trial database and the data collected within a clinical trial and/or for the identification, detection and assessment of risks in a clinical trial. + Knowledge and experience in and continuing education of clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations. + Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. Internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. + Must be willing to take a drug test and post-offer physical (if required) + Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is a different kind of pharmaceutical company, a privately held company with the ability to have an innovative and long term view. Our focus is on scientific discoveries that improve patients' lives and we equate success as a pharmaceutical company with the steady introduction of truly innovative medicines. Boehringer Ingelheim is the largest privately held pharmaceutical corporation in the world and ranks among the world's 20 leading pharmaceutical corporations. At Boehringer Ingelheim, we are committed to delivering value through innovation. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Vetmedica Inc. and Boehringer Ingelheim Fremont, Inc. 
is an equal opportunity employer - Minority/Female/Protected Veteran/Person with a Disability Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings **Organization:** _US-BI Pharma/BI USA_ **Title:** _Clinical Data Manager II_ **Location:** _Americas-United States_ **Requisition ID:** _179190_
          (USA-UNITED STATES) Principal Clinical Data Manager        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables at project / trial level. Takes a lead role with internal and external partners and represents the company at meetings with regulatory authorities, clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned project /trial(s). Provides input into CDM standards and process developments. Assumes primary responsibilities in one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) for complex trials or as subject matter expert of CDM responsibilities and processes e.g. TMCP process expert + Project Data Manager (PDM) + Risk Based Quality Management (RBQM) Business Partner (BP) As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for complex trials or as subject matter expert of TDM responsibilities and processes for clinical trials led in-house or using business process outsourcing (BPO) + Takes a leadership role as subject matter expert for CDM responsibilities and processes in global projects/working groups and provides mentoring for less experienced Data Managers (DMs). + Takes a lead role in the specific setting of special trials, like mega trials or complex TMCP trials. Existing SOPs, guidelines and WIs do not cover these and the trial CDM has to make sure that the processes are developed according to the trial’s needs but adheres to the principles of GCP and other regulations like FDA guidance / regulations and documented in SOP variations as necessary. + Takes a lead role with internal and external partners to establish, align and confirm data management expectations for assigned trial(s). + Responsible for CDM trial level oversight. + Builds effective relationships with vendor partners. + Review protocols and identifies requirements for proper data capture. + In the role of a Trial Data Manager (TDM): Continued… + Oversee the design, creation and UAT Plan and testing of clinical study databases along with development of edit check specifications and manual data listings as required. 
+ Defines or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database. + Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA and investigator meetings. + Ensures real-time inspection readiness of all Clinical Data Management deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level and within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + Leads trial data managers in support of their trial in aspects of the data management work. + In the role of a Project Data Manager (PDM), the function holder performs PDM tasks for multiple early-stage projects, e.g. PDM TMCP, or for an international development project that has gone beyond the stage of Proof of Clinical Principle (PoCP) and involves complex and large international phase III trials. + Builds effective relationships with CROs/vendor partners utilized within the project. + Gives input to the core clinical trial protocol (CTP). + Defines, reviews and approves key Clinical Data Management Project level deliverables, including: Core Case Report Form (CRF) design, instructions for CRF completion, Project data management plan (e.g. database specifications including derivations, edit check specifications, data cleaning plan), and Electronic data transmission agreements, in accordance with the core protocol, BI and TA level data standards, and Project needs. + Initiates and compiles PDMAP documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a project together with other members of the trial team as appropriate.
Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Trial Data Manager (TDM) and the SDTM programmer at the CRO (in the context of the BPO). + The PDM sets up, maintains and validates the project database consistent with the latest industry, BI and project standards. Ensures that the SDTM project database is compliant with the requirements from the project statistical analyses plan and collaborates with the PSTAT, PPROG on a regular basis. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the project database via data analytics throughout the conduct of a project. + Compiles and ensures compliance of all elements of the electronic submission deliverables, e.g. datasets, trial-level SDTM Reviewer's Guide and define.xml. + As part of inspection readiness, the PDM ensures that the TDMAP documentation of the pivotal trials is complete and consistent and communicates with trial data managers during conduct of these trials to set expectations. + Identifies and communicates lessons learned and best practices at the project level and within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM. + Leads / mentors project/trial data managers that support the project in aspects of the data management work. + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs RBQM BP tasks for one or multiple clinical trials. + Leads the project or trial team through the process of identifying and assessing risks at the beginning of a trial. + Initiates and facilitates the RBQM risk review and assessment meetings. + Develops and maintains trial-specific Risk Based Quality Management (RBQM) documentation and assists with any risk related questions that arise. + Authors the quality statement / Quality Report at the conclusion of the trial for the Clinical Trial Report. + RBQM mentor / trainer for Central Monitors (CM), new RBQM BPs and other trial team members, e.g. CRAs, CMLs. + RBQM BP may also perform CM tasks as needed. + Supports the development and maintenance of RBQM tools. **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=6 years. + International exposure in daily business: more than 50% of international business/customers/staff over more than four (4) years. + **Technical / Analysis / Medicine:** + Technical expertise including: industry data structure knowledge (e.g. CDASH/CDISC); EDC use and database specification experience. + Experience with data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics, identify and communicate trends. + Experience using Statistical Analysis Software (SAS) programming and Risk Management Tools, including data review in JReview. + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures.
+ **Planning / Organization:** + Excellent organizational skills, problem-solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow-up activities with local and global trial/project teams. Ensures work is completed effectively. + **Communication:** + Strong communication skills with the ability to summarize complex information simply. + Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to proactively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the project collection standards, database and submission deliverables within a substance/project. + Knowledge and experience in, and continuing education on, clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations, Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. + Strong project management skills and internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. + Must be willing to take a drug test and post-offer physical (if required). + Must be 18 years of age or older. **Our Culture:** Boehringer Ingelheim is a different kind of pharmaceutical company, a privately held company with the ability to have an innovative and long-term view. Our focus is on scientific discoveries that improve patients' lives and we equate success as a pharmaceutical company with the steady introduction of truly innovative medicines. Boehringer Ingelheim is the largest privately held pharmaceutical corporation in the world and ranks among the world's 20 leading pharmaceutical corporations. At Boehringer Ingelheim, we are committed to delivering value through innovation. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allow us to maintain one of the highest levels of excellence in our industry. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Vetmedica Inc. and Boehringer Ingelheim Fremont, Inc.
is an equal opportunity employer - Minority/Female/Protected Veteran/Person with a Disability. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-BI Pharma/BI USA_ **Title:** _Principal Clinical Data Manager_ **Location:** _Americas-United States_ **Requisition ID:** _179066_
          (USA-UNITED STATES) Clinical Data Manager II        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables. Takes a lead role with internal and external partners and represents the company at meetings with clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned trial(s). May provide input into CDM standards and process developments. Assumes one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) + Central Monitor (CM) + Or developing expertise and capabilities under supervision as: + Project Data Manager (PDM), as an associate PDM or supporting a local submission (e.g. in Japan or China) + Risk Based Quality Management (RBQM) Business Partner (BP), e.g. as an associate RBQM BP or for a trial of low complexity. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the company's success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for clinical trials led in-house or using business process outsourcing (BPO) + Key liaison / Data Management lead to establish, align and confirm data management expectations for assigned trial(s); this requires regular interaction with other internal and external partners, e.g. TCM, TSTAT, TPROG, TMCP, BPO partners. + Responsible for CDM trial level oversight. Builds effective relationships with CROs/vendor partners. Reviews protocols and identifies requirements for proper data capture, including electronic Case Report Form design and processing of clinical data, ensuring accuracy, consistency and completeness. + Oversees the design, creation, UAT planning and testing of clinical study databases along with development of edit check specifications and manual data listings as required. + Defines or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database.
+ Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder either performs or leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA and investigator meetings. + Ensures real-time inspection readiness of all CDM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + In the role of a Trial Data Manager (TDM) for fully outsourced trials, supervises and instructs the CRO in performing the above TDM tasks and leads trial level oversight, including planned timelines and fulfillment of quality expectations. + Sets expectations for and defines specifications for data transmission with the CRO. Integrates the data from the CRO into the BI clinical trial database. Ensures that SDTM compliant data is available for analyses together with the responsible Project Data Manager (PDM). + In the role of a Central Monitor (CM) for clinical trials + Executes and manages the Risk Based Quality Management (RBQM) processes as described in the monitoring framework; this requires regular interaction with other internal and external functions, e.g. clinical monitors, CRAs/site monitors, data managers, biostatistics, site personnel. + Conducts root cause analysis on the risk signals of aggregated site and trial data (pulled from various sources) using risk reports. Identifies and investigates potential risks and trends with subject protection and reliability of trial results and compliance with the investigational plan for impact on site/country/trial activities. + Provides direction to site monitors for additional remote and on-site monitoring activities for risk sites, within the scope of the trial monitoring plan. + Oversees potential issues and findings requiring further review and follow-up and ensures appropriate actions are taken by the trial team members to investigate, resolve and document potential risks identified, including adequate documentation of resolution. + Provides a regular and efficient mechanism of trial communication for the trial team including documentation and leads oversight meetings.
+ Ensures real-time inspection readiness of responsible RBQM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary, in conjunction with the RBQM BP. + Identifies and communicates lessons learned and best practices at the trial level and with other CMs. Identifies and participates in CM related process, system, and tool improvement initiatives within CDM/BDS. Performs user acceptance testing and supports the development and maintenance of RBQM tools. + In the role of a Project Data Manager (PDM), the function holder performs (selected) PDM tasks under the supervision of an experienced PDM, e.g. as an associate PDM or for a project of low complexity where existing standards, material and documentation can be re-used and built upon. + Accountabilities include the definition, leadership and oversight of data management processes and deliverables for clinical projects (with one project comprising multiple trials in a substance in one indication) such as establishing expectations for CRF-based/external dataset content and structure, definition of project standards (e.g. SDTM, CRF, specifications such as for MQR, data cleaning, data transmission), review and acceptance of project level database elements, programming and validation of the project database (PDB), preparation and creation of CDM deliverables for regulatory submission and support of safety updates. + Alternatively, the function holder may be responsible for the specific CDM deliverables and support for a local regulatory submission (e.g. in Japan or China). + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs (selected) RBQM BP tasks under the supervision of an experienced RBQM BP, e.g. as an associate RBQM BP or for a trial of low complexity. + Takes a leadership role with the project / trial team to establish, align and confirm RBQM expectations for assigned trial(s). The function holder performs (selected) RBQM BP tasks in the definition, leadership and oversight of Risk Based Quality Management (RBQM) processes and deliverables for one or multiple clinical trials such as guiding the project and trial team through the process of identifying and assessing risks at the beginning of a trial, initiating and facilitating RBQM risk review and assessment meetings, facilitating the implementation of required RBQM documentation and tools, authoring the quality report and assisting with any risk related questions that arise. **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=3 years. + No leadership experience required. + **Technical / Analysis / Medicine:** + Any of the following skills: data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics, identify and communicate trends. + Experience with Electronic Data Capture (EDC) processes. + Knowledge in and experience with any of the following: Data review in JReview, Risk Management Tools, Statistical Analysis Software (SAS) programming. + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures.
+ **Planning / Organization:** + Excellent organizational skills, problem-solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow-up activities with local and global trial/project teams. Ensures work is completed effectively. + **Communication:** + Strong communication skills with the ability to summarize complex information simply. Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to pro-actively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the clinical trial database and the data collected within a clinical trial and/or for the identification, detection and assessment of risks in a clinical trial. + Knowledge and experience in, and continuing education on, clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations, Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. Internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** + Must be legally authorized to work in the United States without restriction. + Must be willing to take a drug test and post-offer physical (if required). + Must be 18 years of age or older. **Our Culture:** Boehringer Ingelheim is a different kind of pharmaceutical company, a privately held company with the ability to have an innovative and long-term view. Our focus is on scientific discoveries that improve patients' lives and we equate success as a pharmaceutical company with the steady introduction of truly innovative medicines. Boehringer Ingelheim is the largest privately held pharmaceutical corporation in the world and ranks among the world's 20 leading pharmaceutical corporations. At Boehringer Ingelheim, we are committed to delivering value through innovation. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allow us to maintain one of the highest levels of excellence in our industry. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Vetmedica Inc. and Boehringer Ingelheim Fremont, Inc.
is an equal opportunity employer - Minority/Female/Protected Veteran/Person with a Disability. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-BI Pharma/BI USA_ **Title:** _Clinical Data Manager II_ **Location:** _Americas-United States_ **Requisition ID:** _179187_
          (USA-CT-RIDGEFIELD) Clinical Data Manager II        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables. Takes a lead role with internal and external partners and represents the company at meetings with clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned trial(s). May provide input into CDM standards and process developments. Assumes one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) + Central Monitor (CM) + Or developing expertise and capabilities under supervision as: + Project Data Manager (PDM), as an associate PDM or supporting a local submission (e.g. in Japan or China) + Risk Based Quality Management (RBQM) Business Partner (BP), e.g. as an associate RBQM BP or for a trial of low complexity As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for clinical trials led in-house or using business process outsourcing (BPO) + Key liaison / Data Management lead to establish, align and confirm data management expectations for assigned trial(s), this requires regular interaction with other internal and external partners e.g. TCM, TSTAT, TPROG, TMCP, BPO partners. + Responsible for CDM trial level oversight. Builds effective relationships with CROs/ vendor partners. Review protocols and identifies requirements for proper data capture including electronic Case Report Form design and processing of clinical data ensuring accuracy, consistency and completeness + Oversee the design, creation and UAT Plan and testing of clinical study databases along with development of edit check specifications and manual data listings as required. + Define or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database. 
+ Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder either performs or leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA at investigator meetings. + Ensures real-time inspection readiness of all CDM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + In the role of a Trial Data Manager (TDM) for fully outsourced trials, supervises and instructs the CRO in performing the above TDM tasks and leads trial level oversight, including planned timelines and fulfillment of quality expectations. + Sets expectations for and defines specifications for data transmission with the CRO. Integrates the data from the CRO into the BI clinical trial database. Ensures that SDTM compliant data is available for analyses together with the responsible Project Data Manager (PDM). + In the role of a Central Monitor (CM) for clinical trials + Executes and manages the Risk Based Quality Management (RBQM) processes as described in the monitoring framework, this requires regular interaction with other internal and external functions e.g. clinical monitors, CRAs/site monitors, data managers, biostatistics, site personnel. + Conduct root cause analysis on the risk signals of aggregated site and trial data (pulled from various sources) using risk reports. Identifies and investigates potential risks and trends with subject protection and reliability of trial results and compliance with the investigational plan for impact on site/country/trial activities. + Provides direction to site monitors for additional remote and on-site monitoring activities for risk sites, within the scope of the trial monitoring plan. + Oversees potential issues and findings requiring further review and follow-up and ensures appropriate actions are taken by the trial team members to investigate, resolve and document potential risks identified, including adequate documentation of resolution. + Provides a regular and efficient mechanism of trial communication for the trial team including documentation and leads oversight meetings. 
+ Ensures real-time inspection readiness of responsible RBQM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary, in conjunction with the RBQM BP. + Identifies and communicates lessons learned and best practices at the trial level and with other CMs. Identifies and participates in CM related process, system, and tool improvement initiatives within CDM/BDS. Performs user acceptance testing and supports the development and maintenance of RBQM tools. + In the role of a Project Data Manager (PDM), the function holder performs (selected) PDM tasks under the supervision of an experienced PDM, e. g. as an associate PDM or for a project of low complexity where existing standards, material and documentation can be re-used and built upon. + Accountabilities include the definition, leadership and oversight of data management processes and deliverables for clinical projects (with one project comprising multiple trials in a substance in one indication) such as establishing expectations for CRF-based/external dataset content and structure, definition of project standards (e.g. SDTM, CRF, specifications such as for MQR, data cleaning, data transmission), review and acceptance of project level database elements, programming and validation of the project database (PDB), preparation and creation of CDM deliverables for regulatory submission and support of safety updates. + Alternatively, the function holder may be responsible for the specific CDM deliverables and support for a local regulatory submission (e.g. in Japan or China). + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs (selected) RBQM BP tasks under the supervision of an experienced RBQM BP, e.g. as an associate RBQM BP or for a trial of low complexity. + Takes a leadership role with the project / trial team to establish, align and confirm RBQM expectations for assigned trial(s). The function holder performs (selected) RBQM BP tasks in the definition, leadership and oversight of Risk Based Quality Management (RBQM) processes and deliverables for one or multiple clinical trials such as guiding the project and trial team through the process of identifying and assessing risks at the beginning of a trial, initiating and facilitating RBQM risk review and assessment meetings, facilitating the implementation of required RBQM documentation and tools, authoring the quality report and assisting with any risk related questions that arise. **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=3 years. + No leadership experience required. + **Technical / Analysis / Medicine** : + Any of the following skills: data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics, identify and communicate trends. + Experiences with Electronic Data Capture (EDC) processes + Knowledge in and experience with any of the following: Data review in JReview, Risk Management Tools, Statistical Analysis Software (SAS) programming + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures. 
+ **Planning / Organization:** + Excellent organizational skills, problem solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow up activities with local and global trial/project teams. Ensures work is completed effectively. + **Communication** : + Strong communication skills with the ability to simply summarize complex information. Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to pro-actively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the clinical trial database and the data collected within a clinical trial and/or for the identification, detection and assessment of risks in a clinical trial. + Knowledge and experience in and continuing education of clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations. + Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. Internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** Must be legally authorized to work in the United States without restriction. Must be willing to take a drug test and post-offer physical (if required) Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighborhoods where we live and work. 
Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-BI Pharma/BI USA_ **Title:** _Clinical Data Manager II_ **Location:** _Americas-United States-CT-Ridgefield_ **Requisition ID:** _175443_
          (USA-CT-RIDGEFIELD) Principal Clinical Data Manager        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables at project / trial level. Takes a lead role with internal and external partners and represents the company at meetings with regulatory authorities, clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned project /trial(s). Provides input into CDM standards and process developments. Assumes primary responsibilities in one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) for complex trials or as subject matter expert of CDM responsibilities and processes e.g. TMCP process expert + Project Data Manager (PDM) + Risk Based Quality Management (RBQM) Business Partner (BP) As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for complex trials or as subject matter expert of TDM responsibilities and processes for clinical trials led in-house or using business process outsourcing (BPO) + Takes a leadership role as subject matter expert for CDM responsibilities and processes in global projects/working groups and provides mentoring for less experienced Data Managers (DMs). + Takes a lead role in the specific setting of special trials, like mega trials or complex TMCP trials. Existing SOPs, guidelines and WIs do not cover these and the trial CDM has to make sure that the processes are developed according to the trial’s needs but adheres to the principles of GCP and other regulations like FDA guidance / regulations and documented in SOP variations as necessary. + Takes a lead role with internal and external partners to establish, align and confirm data management expectations for assigned trial(s). + Responsible for CDM trial level oversight. + Builds effective relationships with vendor partners. + Review protocols and identifies requirements for proper data capture. + In the role of a Trial Data Manager (TDM): Continued… + Oversee the design, creation and UAT Plan and testing of clinical study databases along with development of edit check specifications and manual data listings as required. 
+ Define or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database. + Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA and investigator meetings. + Ensures real-time inspection readiness of all Clinical Data Management deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level and within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + Leads trial data managers in support of their trial in aspects of the data management work. + In the role of a Project Data Manager (PDM), the function holder performs PDM tasks for multiple early stage project e.g. PDM TMCP or for an international development project that has gone beyond the stage of Proof of Clinical Principal (PoCP) and involves complex and large international phase III trials. + Builds effective relationships with CROs/ vendor partners utilized within the project. + Gives input to the core clinical trial protocol (CTP). + Defines, reviews and approves key Clinical Data Management Project level deliverables, including: Core Case Report Form (CRF) design, instructions for CRF completion, Project data management plan; e.g. database specifications including derivations, edit check specifications, data cleaning plan, Electronic data transmission agreements in accordance with the core protocol , BI, TA Level data standards and Project needs. + Initiates and compiles PDMAP documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a project together with other members of the trial team as appropriate. 
Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Trial Data Manager (TDM) and the SDTM programmer at the CRO (in the context of the BPO). + The PDM sets up, maintains and validates the project database consistent with the latest industry, BI and project standards. Ensures that the SDTM project database is compliant with the requirements from the project statistical analyses plan and collaborates with the PSTAT, PPROG on a regular basis. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the project database via data analytics throughout the conduct of a project. + Compiles and ensures compliance of all elements of the electronic submission deliverables: (e.g.): datasets, trial level SDTM Reviewers Guide and define.xml. + As part of inspection readiness, the PDM ensures that the TDMAP documentation of the pivotal trials is complete and consistent and communicates with trial data managers during conduct of these trials to set expectations. + Identifies and communicates lessons learned and best practices at the project level and within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM. + Leads / mentors project/trial data managers that support the project in aspects of the data management work. + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs RBQM BP tasks for one or multiple clinical trials. + Leads the project or trial team through the process of identifying and assessing risks at the beginning of a trial. + Initiate and facilitates the RBQM risk review and assessment meetings, + Develop and maintain trial specific Risk Based Quality Management (RBQM) documentation and assists with any risk related questions that arise. + Authors the quality statement / Quality Report at the conclusion of the trial - for the Clinical Trial Report. + RBQM mentor / trainer for Central Monitors (CM), new RBQM BPs and other trial team members e.g. CRAs, CMLs + RBQM BP may also perform CM tasks as needed. + Supports the development, and maintenance of RBQM tools **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=6 years + International exposure in daily business: more than 50% of international business/customers/staff over more than four (4) years. **Technical / Analysis / Medicine** : + Technical expertise including: industry data structure knowledge (e.g. CDASH/CDISC); EDC use and database specification experience. + Experience with data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics identify and communicate trends. + Experience using Statistical Analysis Software (SAS) programming and Risk Management Tools, including data review in JReview + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures. 
**Planning / Organization:** + Excellent organizational skills, problem solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow up activities with local and global trial/project teams. Ensures work is completed effectively. **Communication** : + Strong communication skills with the ability to simply summarize complex information. + Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to proactively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the project collection standards, database and submission deliverables within a substance/project. + Knowledge and experience in and continuing education of clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations. + Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. + Strong project management skills and internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** Must be legally authorized to work in the United States without restriction. Must be willing to take a drug test and post-offer physical (if required) Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighborhoods where we live and work. 
Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-BI Pharma/BI USA_ **Title:** _Principal Clinical Data Manager_ **Location:** _Americas-United States-CT-Ridgefield_ **Requisition ID:** _175445_
          (USA-CT-RIDGEFIELD) Clinical Data Manager II        
Boehringer Ingelheim is an equal opportunity global employer who takes pride in maintaining a diverse and inclusive culture. We embrace diversity of perspectives and strive for an inclusive environment which benefits our employees, patients and communities. **Description:** Contributes to the development process for new substances and development and promotion of drugs on the market by providing expertise, expectations, direction and oversight for Clinical Data Management (CDM) deliverables. Takes a lead role with internal and external partners and represents the company at meetings with clinical investigators and in the interaction with Contract Research Organizations (CROs) and external vendors in all aspects of data management for assigned trial(s). May provide input into CDM standards and process developments. Assumes one or more of the following roles demonstrating the required expertise and capabilities as: + Trial Data Manager (TDM) + Central Monitor (CM) + Or developing expertise and capabilities under supervision as: + Project Data Manager (PDM), as an associate PDM or supporting a local submission (e.g. in Japan or China) + Risk Based Quality Management (RBQM) Business Partner (BP), e.g. as an associate RBQM BP or for a trial of low complexity As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the companies' success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees **Duties & Responsibilities:** + In the role of a Trial Data Manager (TDM) for clinical trials led in-house or using business process outsourcing (BPO) + Key liaison / Data Management lead to establish, align and confirm data management expectations for assigned trial(s), this requires regular interaction with other internal and external partners e.g. TCM, TSTAT, TPROG, TMCP, BPO partners. + Responsible for CDM trial level oversight. Builds effective relationships with CROs/ vendor partners. Review protocols and identifies requirements for proper data capture including electronic Case Report Form design and processing of clinical data ensuring accuracy, consistency and completeness + Oversee the design, creation and UAT Plan and testing of clinical study databases along with development of edit check specifications and manual data listings as required. + Define or reviews creation and maintenance of all essential data management documentation including CRF specifications, eCRFs, annotated eCRF, eCRF completion guidelines, Data Management Plans (detailing complete data management processes throughout clinical studies), Data Transfer specifications and Data Review Guidelines, in accordance with the protocol, BI and project data standards. + Integrates external data (non-CRF data) from vendors or other internal departments into the clinical trial database. 
+ Initiates and compiles Trial Master File (TMF) relevant documentation containing the necessary CDM / Biostatistics & Data Sciences (BDS) documentation for a trial together with other members of the trial team as appropriate. Therein ensures appropriate quality, scientific content, organization, clarity, accuracy, format, consistency and compliance with regulatory guidelines. + Establishes conventions and quality expectations for clinical data and plans and tracks the content, format, completeness, quality and timing of the trial data collection process and other CDM deliverables via data analytics throughout the conduct of a trial. + Throughout the trial, the function holder either performs or leads the respective trial level activities in the context of business process outsourcing (BPO) in CDM. + Collaborates with the trial team to ensure that the database can be locked according to the planned timelines and quality. Responsible for the database lock and accountable for the integrity of the database. + Ensures that SDTM (Study Data Tabulation Model) compliant data is available for analyses together with the Project Data Manager (PDM) and the SDTM programmer at the CRO (in the context of the BPO). + Leads and facilitates the Medical and Quality Review (MQR) process and other trial team meetings. Presents and trains at trial team, CRA at investigator meetings. + Ensures real-time inspection readiness of all CDM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary. + Identifies and communicates lessons learned and best practices at the trial level within CDM. Identifies and participates in DM related process, system, and tool improvement initiatives within CDM/BDS. + In the role of a Trial Data Manager (TDM) for fully outsourced trials, supervises and instructs the CRO in performing the above TDM tasks and leads trial level oversight, including planned timelines and fulfillment of quality expectations. + Sets expectations for and defines specifications for data transmission with the CRO. Integrates the data from the CRO into the BI clinical trial database. Ensures that SDTM compliant data is available for analyses together with the responsible Project Data Manager (PDM). + In the role of a Central Monitor (CM) for clinical trials + Executes and manages the Risk Based Quality Management (RBQM) processes as described in the monitoring framework, this requires regular interaction with other internal and external functions e.g. clinical monitors, CRAs/site monitors, data managers, biostatistics, site personnel. + Conduct root cause analysis on the risk signals of aggregated site and trial data (pulled from various sources) using risk reports. Identifies and investigates potential risks and trends with subject protection and reliability of trial results and compliance with the investigational plan for impact on site/country/trial activities. + Provides direction to site monitors for additional remote and on-site monitoring activities for risk sites, within the scope of the trial monitoring plan. + Oversees potential issues and findings requiring further review and follow-up and ensures appropriate actions are taken by the trial team members to investigate, resolve and document potential risks identified, including adequate documentation of resolution. + Provides a regular and efficient mechanism of trial communication for the trial team including documentation and leads oversight meetings. 
+ Ensures real-time inspection readiness of responsible RBQM deliverables for a trial and participates in regulatory agency and BI internal audits as necessary, in conjunction with the RBQM BP. + Identifies and communicates lessons learned and best practices at the trial level and with other CMs. Identifies and participates in CM related process, system, and tool improvement initiatives within CDM/BDS. Performs user acceptance testing and supports the development and maintenance of RBQM tools. + In the role of a Project Data Manager (PDM), the function holder performs (selected) PDM tasks under the supervision of an experienced PDM, e. g. as an associate PDM or for a project of low complexity where existing standards, material and documentation can be re-used and built upon. + Accountabilities include the definition, leadership and oversight of data management processes and deliverables for clinical projects (with one project comprising multiple trials in a substance in one indication) such as establishing expectations for CRF-based/external dataset content and structure, definition of project standards (e.g. SDTM, CRF, specifications such as for MQR, data cleaning, data transmission), review and acceptance of project level database elements, programming and validation of the project database (PDB), preparation and creation of CDM deliverables for regulatory submission and support of safety updates. + Alternatively, the function holder may be responsible for the specific CDM deliverables and support for a local regulatory submission (e.g. in Japan or China). + In the role of a Risk Based Quality Management (RBQM) Business Partner (BP), the function holder performs (selected) RBQM BP tasks under the supervision of an experienced RBQM BP, e.g. as an associate RBQM BP or for a trial of low complexity. + Takes a leadership role with the project / trial team to establish, align and confirm RBQM expectations for assigned trial(s). The function holder performs (selected) RBQM BP tasks in the definition, leadership and oversight of Risk Based Quality Management (RBQM) processes and deliverables for one or multiple clinical trials such as guiding the project and trial team through the process of identifying and assessing risks at the beginning of a trial, initiating and facilitating RBQM risk review and assessment meetings, facilitating the implementation of required RBQM documentation and tools, authoring the quality report and assisting with any risk related questions that arise. **Requirements:** + Bachelor’s degree or Master’s degree from an accredited institution (e.g. MBA, MSc) with major/focus in Life Sciences, Computer Science, Statistics, or similar preferred. + Experience in clinical research including data management and/or clinical trial management required. Initial experience within the pharmaceutical industry, CROs or academic sites: >=3 years. + No leadership experience required. + **Technical / Analysis / Medicine** : + Any of the following skills: data visualization/reporting, analytics; i.e. able to interpret integrated data displays and metrics, identify and communicate trends. + Experiences with Electronic Data Capture (EDC) processes + Knowledge in and experience with any of the following: Data review in JReview, Risk Management Tools, Statistical Analysis Software (SAS) programming + Ability to adapt to new technologies. + Critical thinker and able to discern risks. Must be precise and able to detect subtle inconsistencies in data / structures. 
+ **Planning / Organization:** + Excellent organizational skills, problem solving abilities, negotiation skills, time management skills and initiative. + Must be able to work independently as well as part of a team. + Able to effectively manage multiple assignments and adapt flexibly to changing priorities. + Able to produce robust timelines and action plans, regularly review and follow up on progress and take decisive action in terms of follow up activities with local and global trial/project teams. Ensures work is completed effectively. + **Communication** : + Strong communication skills with the ability to simply summarize complex information. Ability to use a wide range of communication techniques and media (written and verbal). Confident and persuasive communicator to ensure that the message is clear and well understood. + Ability to work collaboratively on multi-disciplinary project teams and to pro-actively manage relationships with external vendors. + Mindful of local, global, internal and external cultures to ensure that messages are received positively and effectively. + Good written and oral communication skills in the English language. + Ability to lead and facilitate meetings. + Ability to develop and deliver (technical) training. + Responsible for the clinical trial database and the data collected within a clinical trial and/or for the identification, detection and assessment of risks in a clinical trial. + Knowledge and experience in and continuing education of clinical trial designs, data standards, clinical trial conduct and methodology (International Conference on Harmonization (ICH) regulations. + Good Clinical Practice (GCP), major regulatory authorities and relevant directives/regulations) are required. Internal and external negotiation skills are required. + Ensures all tasks are carried out in accordance with respective applicable BI Standard Operating Procedures (SOPs), BI and regulatory guidelines and BI working instructions. + Ensures that all interactions and engagements are carried out with the highest ethical and professional standards and that all work is accomplished with quality and in accordance with BI values. **Eligibility Requirements:** Must be legally authorized to work in the United States without restriction. Must be willing to take a drug test and post-offer physical (if required) Must be 18 years of age or older **Our Culture:** Boehringer Ingelheim is one of the world’s top 20 pharmaceutical companies and operates globally with approximately 50,000 employees. Since our founding in 1885, the company has remained family-owned and today we are committed to creating value through innovation in three business areas including human pharmaceuticals, animal health and biopharmaceutical contract manufacturing. Since we are privately held, we have the ability to take an innovative, long-term view. Our focus is on scientific discoveries and the introduction of truly novel medicines that improve lives and provide valuable services and support to patients and their families. Employees are challenged to take initiative and achieve outstanding results. Ultimately, our culture and drive allows us to maintain one of the highest levels of excellence in our industry. We are also deeply committed to our communities and our employees create and engage in programs that strengthen the neighborhoods where we live and work. 
Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA, Inc., Merial Barceloneta, LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; nationality; marital, domestic partnership or civil union status; sex, gender identity or expression; affectional or sexual orientation; disability; veteran or military status, including protected veteran status; domestic violence victim status; atypical cellular or blood trait; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by law. Boehringer Ingelheim is firmly committed to ensuring a safe, healthy, productive and efficient work environment for our employees, partners and customers. As part of that commitment, Boehringer Ingelheim conducts pre-employment verifications and drug screenings. **Organization:** _US-BI Pharma/BI USA_ **Title:** _Clinical Data Manager II_ **Location:** _Americas-United States-CT-Ridgefield_ **Requisition ID:** _175441_
          eVision Software Releases a New Product: Smart Vision; Improved Decision Making Across the Business        

eVision released a new software system called Smart Vision. The system improves decision making across the business by supplying clients with interactive dashboards and powerful data visualizations that provide real-time insights on risk and process for every management level.

(PRWeb November 27, 2013)

Read the full story at http://www.prweb.com/releases/2013/11/prweb11305147.htm


          Episode 28: Interview with Meli Lewis        
This week Christie chats with Meli Lewis (@iff_or), data editor for The Oregonian, about data visualization, ham radio, emergency preparedness, and much more. Enjoy! Show notes
          Everybody has different things they loved about Thanksgiving        
by SodaHead. Learn about data visualization tools.
          Visual analysis of AdWords data: a primer        

Columnist David Fothergill shows how data visualization can unlock insights about AdWords performance you may not have gleaned otherwise. The post Visual analysis of AdWords data: a primer appeared first on Search Engine Land.



          Data Analyst - Tableau Expert        
CA-Newport Beach, RESPONSIBILITIES: Kforce has a client that is searching for a Data Analyst - Tableau Expert in Newport Beach, California (CA). REQUIREMENTS: Hands-on expert in Tableau development who can build and deploy complex dashboards and visualizations; experience consulting with development groups to build and enhance data visualization or analytical applications; able to define solutions that are consis
          Graphic Designer / Data Visualisation Expert - Cronos International - Brussels        
SAVVYTAS (a Cronos company) is looking for a talented Graphic Designer / Data Visualisation Expert to join its European Institution team based in Brussels. TASKS: creation of accurate designs to fit the client's needs, respecting the existing visual identity of the Institutions or a specific style guide; creation of new, strong brand identities; creation of static and/or dynamic infographics, data visualisations and motion graphics (motion graphics is a nice-to-have); adaptation of...
          So Many Choices, but Only One of Me! PASS Summit 2011 Picks        

It’s that time of year again when the anticipation of the upcoming PASS Summit begins to build. My first attempt at using the Summit Schedule Builder by selecting the various topics of interest to me resulted in extreme overbooking. Sigh… Now I really have to do some serious thinking about which sessions I want to see and how that fits around the various obligations that I have to participate in other events besides the sessions that I’m presenting:

Preconference: MDX, DAX, and DMX: An Introduction to the Languages of BI (Tuesday, Oct 11, 8:30 am – 4:30 pm)

Half-day session: So How Does the BI Workload Impact the Database Engine? (Thursday, Oct 13, 1:30 pm – 4:30 pm). Denny Cherry and I are co-presenting this session and provided a sneak preview for 24 Hours of PASS Fall 2011, which you can view here (see Session 9).

Panel session: Are You a Linchpin? Career management lessons to help you become indispensable (Friday, Oct 14, 2:30 pm – 3:45 pm)

Of course, I’d love for you to come to all of my sessions, but there are plenty of other speakers to see, too! If you’re a business intelligence beginner, I recommend you try to see these sessions:

If you’ve already got some BI experience, the following sessions are must-see:

Then, of course, the big news at PASS this year is the upcoming Denali release. There are plenty of Denali-focused BI sessions to see, but I’ll leave it to you to find the sessions that are right for you!

Overlap? Unfortunately, yes – so we’re all going to have to make some hard choices. But if your budget permits, then the best way out of this conundrum is to buy the conference DVD! Then you have the benefit of reviewing presentations later and don’t have to worry about trying to remember everything that was discussed during Summit, because I guarantee it’s going to be an information overload kind of week!


          Ready for the Rally? SQLRally 2011, Here I Come!        

When I was a kid, living in New Jersey at the time, my dad was very involved in sports car racing and motorcycles. Our garage was never used to actually park a car that we rode around in. Instead, it was strewn with all kinds of half-built engines and body parts. Car body parts, that is. And much to our neighbors' chagrin, I'm sure, we often had cars in the backyard that my dad would use to snag some part that he needed for his current racing machine (TR5 at one point, later Formula V). Many weekends were spent traveling to the latest Sports Car Club of America (SCCA) race somewhere in New England (I don't remember ever hearing about NASCAR in those days) or we were off to see the professionals in the Grand Prix at Watkins Glen. Frankly, being a spectator at these events wasn't particularly fun as a kid, nor was it particularly memorable except for the time that I got to see Paul Newman at a race when he was just getting started in the sport. What always seemed much more fun to me were the rally events - I guess because there was some sort of puzzle involved or some type of navigation required that let me participate with my dad, even if I wasn't much help. Of all the things from those experiences that persist to this day in my life, I am very good at using maps! The rest didn't rub off so much on me - at least not the car-related stuff. I did inherit half of my computer-savvy genes from him!

So for me, the word rally has very positive connotations and fond memories of being with my dad. And I get to add to those warm and fuzzies by attending the very first PASS SQLRally 2011 in Orlando this May! I feel very honored to be part of the speaker line-up because the SQL Server community decided who the speakers would be, and there was quite an exceptional field of contenders. The session I'll be presenting is Data Visualizations in SSRS 2008 R2. While using data visualizations in reports can be an effective way to communicate information, there are good ways to do that and bad ways to do that which I'll be sharing with attendees. My focus will be on Reporting Services 2008 R2, but even attendees who haven't migrated to the latest version will learn some useful tips in this session.

Now I won't just be presenting my session and hiding out in my hotel room the rest of the time. I'll be checking out other speakers' sessions, hanging out with people that I usually only get to see at PASS, and meeting new people, too! The "people" part of conferences is just as much fun and invaluable as the learning opportunity.

I'm going to rev up my SQL Rally experience by attending a pre-conference session by Grant Fritchey (blog | twitter). I'm a business intelligence kinda gal so normally I throw data into a cube to get really good performance, but sometimes that's not an option. And I didn't get into business intelligence by starting out as a DBA, so my relational performance tuning skills for SQL Server are pretty basic. I'm looking forward to adding some new skills to my repertoire.

Although I'd like to be able to sit in on everyone's session - I'm also a learning kinda gal - not everything is applicable to what I do, and some things I already know how to do. But there are still a few sessions that I really want to see:

I'm really looking forward to this event. If you're going to be there too, please be sure to look for me and say hi!


          Synthesizing Data Visualization and User Experience        

Do you ever find yourself making a data visualization but you’re not sure which graph or chart to use? Do you look at a visualization and wonder what you’re supposed to be looking for? What...

The post Synthesizing Data Visualization and User Experience appeared first on GroupVisual.io.


          R Programming Tool For Data Science - Simplilearn Americas Inc. , Online         
The Data Science with R training course has been designed to impart an in-depth knowledge of the various data analytics techniques which can be performed using R. The course is packed with real-life projects, case studies, and includes R CloudLabs for practice.

Mastering the R language: The course provides an in-depth understanding of the R language, RStudio, and R packages. You will learn the various types of apply functions as well as the dplyr package, gain an understanding of data structures in R, and perform data visualizations using the various graphics available in R.

Mastering advanced statistical concepts: The course also covers statistical concepts like linear and logistic regression, cluster analysis, and forecasting. You will also learn hypothesis testing.
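
The course teaches these concepts in R. Purely as a language-neutral illustration of what a basic hypothesis test looks like (this is not course material, and the sample data below is invented), a two-sample t-test can be run in a few lines:

```python
# Minimal sketch of a two-sample t-test. Illustrative only, not course material.
from scipy import stats
import numpy as np

group_a = np.array([12.1, 11.8, 12.5, 13.0, 12.2])  # hypothetical measurements
group_b = np.array([11.2, 11.5, 10.9, 11.8, 11.1])

# Null hypothesis: both groups have the same mean.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 5% level.")
else:
    print("Fail to reject the null hypothesis.")
```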

As a part of the course, you will be required to execute real-life projects using CloudLab. The compulsory projects are spread over four case studies in the domains of healthcare, retail, and the Internet. R CloudLab has been provided to ensure a practical and hands-on experience. Additionally, we have four more projects for further practice.

Cost:

Certified


          HR Analytics - Manipal Executive Education , United Arab Emirates, Dubai         
Topics Covered
  • Introduction
  • Configure the right kind of metrics to measure results across the HR value chain (Understanding the HR Domain)
  • Build a strong foundation to champion and implement analytics (Introduction to HR Analytics)
  • Foundation for analytical thinking in situations of uncertainty (Core Statistical Learning)
  • Introduction to Hypothesis Testing(Core Statistical Learning)
  • Apply critical thinking to frame hypothesis and deploy data visualizations  (Data Analysis)
  • Uncover and communicate insights to provide tangible benefits (Driving Insights)
  • Introduction to Predictions(Core statistical learning)
  • Build a data driven HR organization by embedding analytics  (Implementing Analytics)
  • Panel Discussions
  • Assessments

 

Cost:

Certified


          Microsoft MCSE: Business Intelligence - My Training Academy , Online         

Overview

Microsoft MCSE: Business Intelligence Solution Expert certification proves to employers that you have the skills and techniques needed to design, build and deploy solutions that deliver more data to more people across the organisation. Earning an MCSE Business Intelligence certification will qualify you for a position as a BI and reporting engineer.

Course Description

The MCSE (Microsoft Certified Solutions Expert): Business Intelligence course is perfect for business intelligence professionals who need to deliver large quantities of data to more people across their organisation.

You will learn how to create SQL Server 2012 reports, create databases in a multidimensional format, perform business intelligence analysis and create data visualisations. The course will impart the skills and techniques required to design, create and deploy effective SQL Server business intelligence solutions, as well as how to manage, maintain and troubleshoot them successfully.

The great thing about our MCSE Business Intelligence Training Course Bundle is that you are in charge of your start and finish date, with no deadline pressures!

We train our students to the very best standards, offering expert instructor-led training via our state of the art eLearning platform.

By achieving the certifications within the MCSE Business Intelligence training course bundle you will create more career opportunities and be better positioned when applying for work.

Enrol today and prepare for a future you deserve.

The course has been designed to give you real-world knowledge that you can put to use from day one. It's highly flexible, so you can set your own timetable and study at your own pace.


Who Is This Course For?

Ideal for IT professionals looking to earn the MCSE Business Intelligence certification.

Requirements

Ideal for IT professionals looking to earn the MCSE Business Intelligence certification.

Career Path

We've put together a list of relevant job titles you can apply for after achieving the MCSE Business Intelligence certification (note that some careers may require further study, training and/or work experience):
  • Business Intelligence and Reporting Engineer
  • Business Intelligence Analyst
  • Business Intelligence Developer
  • Senior SQL Database Administrator
  • IT Support Analyst

Cost:


          Market Researcher - Contractor - San Jose        
TiVo Inc., the leading digital video recording company, has an immediate opening for a key member of the growing Research team. He/she will be responsible for conducting and commissioning primary research, reviewing secondary research, analyzing data, creating presentations and working in cross-functional environments with Sales, Product Management, Marketing, User Interface, Business Development and Engineering. Here is what you will get to do: Designing and conducting market research on market size, messaging, consumer profile, advertising, and concept testing new feature ideas. Writing discussion guides and surveys, as well as analyzing and presenting data. Programming surveys using an internal survey tool. Presenting findings and making concrete recommendations to clients and internal teams both verbally and in written form. Key point of contact for client research professionals in design, delivery, and interpretation of research projects. Reviewing and analyzing secondary research and presenting it internally in an actionable format. What you will need to be successful: Expertise in statistics, research design, and quantitative methodologies. 3+ years' experience with conducting quantitative research and performing data analysis using SPSS. Thorough working knowledge of market research methodologies (qualitative and quantitative) and skills (e.g. statistical analysis, interpretation of results, data visualization). 3+ years' experience managing client-driven research projects. Experience working in a related industry (software, consumer electronics, advertising) preferred. Ability to work independently, handle multiple tasks simultaneously and meet deadlines. Interest in TV and in improving the TV experience. A bachelor's degree in a related field (marketing, market research, social science, communications, etc.). Advanced degree preferred. TiVo is unique. We're successful because of diversity of thought, skill sets, and experience and pure talent. San Jose, CA Job Type: Contractor Requisition Number: 869
          Tableau 10 Desktop, Online, Server Online Training - Excelr Solutions , Online         

ExcelR offers an in-depth understanding of Tableau Desktop 10 Associate Certification training for Tableau developers and complete Tableau Server training for Tableau administrators. 

Training includes 30 hours of hands-on exposure to ensure that you are left with a feeling of being an expert at using the Tableau tool. We have considered industry requirements & devised the course to ensure that you have the practical exposure required to swim through interviews with ease. The case studies explained towards the end reinforce the practical learning and prepare you to face the real-world projects & problems which are solved using Tableau. The datasets chosen ensure that you learn every option completely. With a lot of industry connections, you get to know about job opportunities which you would not hear of otherwise. Mock interview questions & the final project help you establish yourself as an expert in the space of data visualization. Learning the leading data visualization principles will ensure that you always work with a combo of "Data Visualisation Dos & Don'ts + Tableau Tool".

  • Tableau is in the leaders quadrant of data visualization according to Gartner's magic quadrant. Key differentiators of Tableau over other business intelligence tools are:
  • Tableau connects to a lot of other native databases & servers
  • Tableau has a lot of analytics capability
  • Tableau connects with most of the leading Big Data tools
  • Tableau is designed for end users so that customers directly make changes as required
  • Tableau has varied licensing cost for different uses of different customers
  • Tableau Server for managing security & report sharing
  • Tableau Desktop for developers to develop reports, dashboards & story maps
  • Tableau Online for customers who want to view visualizations from anywhere
  • Tableau Mobile for users on a tablet (iPad, notepad, etc.)
  • Tableau Public for basic users trying to connect to an Excel workbook
  • Tableau Reader for users who want to read Tableau-developed visualizations

Who Should do Tableau Certification Training
  • Professionals who should pursue Tableau Certification Training include:
  • Business intelligence professionals
  • Data Reporting professionals
  • Content Management Professionals
  • Senior management who provide reports to leadership teams
  • Leadership team who presents reports to customers
  • Media folks who create visualisations for leading magazines
  • Freshers who want to kick start their careers in IT/Software industry
  • Database administrators who always manage data
  • Data scientists who work on data to build prediction models
Though there are umpteen data visualisation tools which follow data visualisation principles, Tableau holds the No. 1 position in the leaders quadrant for data visualisation. Hence the data visualisation training offered by ExcelR is exclusively on Tableau Desktop. 


Cost:

Certified


          Business Analytics Online Course - Excelr Solutions , Online         

What is the No. 1 profession of the 21st century? Which profession has been termed the sexiest of the 21st century? Which profession provides salaries like never seen before? Which profession are most (if not all) companies hunting for at full throttle? Which profession ensures that your salary grows exponentially with experience?

The answer to all the above questions is the word "DATA SCIENTIST", which is also termed Data Analytics or Business Analytics. All it takes to become a successful data scientist is a working knowledge of 5 core concepts - Statistical Analysis, Forecasting, Data Mining, Data Visualisation & Text Mining. ExcelR provides 50 hours of hands-on training using live case studies being implemented in industry. In addition, participants are provided with assignments, mini-projects, quizzes, case studies & a final capstone project to ensure that you are ready to crack any interview immediately after the last day of training.

As part of the Statistical Analysis training we start from the very basics & move on to discuss advanced concepts including Linear, Logistic, Poisson, Binomial, Negative Binomial and Zero-Inflated regression techniques, imputation, etc. These core concepts give you an edge over other aspirants who are trained elsewhere. Aspirants can also opt for only the statistical analysis training & thereby obtain a statistical analysis certification. This will give employers more confidence.

As part of the Forecasting training you will learn about the various time series techniques, which include Auto Regression (AR), Moving Average (MA), Exponential Smoothing (ES), ARMA, ARIMA, ARCH & GARCH.
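
For readers who want a concrete sense of what fitting one of these models involves, here is a minimal sketch (not course material; it uses Python's statsmodels purely for illustration, and the series is synthetic):

```python
# Minimal ARIMA sketch: fit an ARIMA(1,1,1) to a synthetic monthly series
# and forecast 12 periods ahead. Illustrative only, not ExcelR course material.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

dates = pd.date_range("2015-01-01", periods=60, freq="MS")
series = pd.Series(np.cumsum(np.random.normal(1.0, 2.0, 60)), index=dates)

model = ARIMA(series, order=(1, 1, 1))   # (AR order p, differencing d, MA order q)
fitted = model.fit()
print(fitted.summary())
print(fitted.forecast(steps=12))         # point forecasts for the next 12 months
```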

Data Mining training includes two streams - Unsupervised data mining & Supervised data mining. As part of this machine learning training you will be exposed to various techniques within Unsupervised learning which will help you perform clustering, build recommender systems, perform network analysis, etc.
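
As a small illustration of the unsupervised side (again, not course material; scikit-learn in Python is used here only as an example, and the customer features are made up):

```python
# Illustrative sketch: cluster customers by two features using k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: [annual_spend, visits_per_month]
X = np.array([[500, 2], [520, 3], [80, 1], [90, 1], [2500, 10], [2400, 12]])

X_scaled = StandardScaler().fit_transform(X)      # scale features before clustering
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

print("Cluster labels:", kmeans.labels_)
print("Cluster centres (scaled):", kmeans.cluster_centers_)
```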

Data Visualisation training is a must-have for any data analyst. You will be exposed to Tableau, which is arguably the number one data visualisation tool with a lot of analytical capabilities. 

Text Mining is the most sought-after skill in a data scientist. The reason: roughly 80% of unstructured data is textual. Data is generated on social media in the form of tweets, posts, etc., and on e-commerce websites in the form of review comments.
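
A typical first step in text mining is turning such review comments into numeric features. The sketch below shows one common way to do that with TF-IDF (illustrative only, using scikit-learn; the reviews are invented):

```python
# Illustrative sketch: vectorize a handful of review comments with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [                     # hypothetical e-commerce review comments
    "Great product, fast delivery",
    "Terrible quality, would not buy again",
    "Fast shipping and great quality",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reviews)           # sparse document-term matrix

print(vectorizer.get_feature_names_out())           # learned vocabulary
print(tfidf.toarray().round(2))                     # TF-IDF weights per review
```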


Cost:

Certified


          Big Data        


Squeaking this update in on the last day of summer. Things ramped up quickly this September, I have some exciting news I am waiting to announce on a national poster campaign. More to come on that in the coming months! I started teaching again this semester at Sheridan, I work with third and fourth year students on courses related to information-based illustration. I have asked my fourth year students to consider where art and science intersect, and the workshop has been dubbed 'big data'. We are surrounded by big data and live in an age of information. Understanding and being able to communicate knowledge and ideas visually is critical in business and media. This workshop explores illustration, science, observation, and data visualization through a series of research-driven exercises. Students are encouraged to foster creativity and curiosity, defined accurately and effectively through different media. I took it upon myself to illustrate some technology-driven collages. How do we relate to our digital devices? How does it serve us and/or control us? All of our likes, searches, purchases and comments are tracked and quantified, packaged and traded. At what point does social media reward us for our patience and commitment? More and more, the fruits of our labour feels tainted. The new iPhone 7 was launched while I was working on these. Not much of a wave, or movement, or ripple. Just another costly upgrade. Here are some images from the series.





Here's a little doodle that sums things up nicely: Concept Engine




          Data Visualization Talks Online        
Lately, I have been collecting links to videos of talks related to Data Visualization. I found multiple talks for some people and so have categorized them accordingly. I have also tried to provide some context to the individual/group. I think the first TED talk by Hans Rosling (@hansrosling) got a lot of media attention and […]
          Certificate in Accounting by Excel - Exceligent Academy , Cairo, Giza         
Certificate Objective:
This Certificate provides participants with the knowledge and skills to work with Microsoft Excel, and to practice the new features and advanced functions in Microsoft Excel to enhance their business productivity skills.
It also enhances participants' Excel accounting skills so they can design and develop a full accounting system, starting with entries and ending with the financial statements.

Who Should Attend? 
This Certificate is ideal for those who would like to be able to work comfortably with Microsoft® Excel. It starts from zero knowledge, begins with an introduction to the Excel environment, and ends with using various advanced features in Excel to design an accounting system, from the general journal through to the income statement and balance sheet.

Contents
Module 1: Getting start with Excel 2016.
Module 2: Managing the workbook environment
Module 3: Working with cells and Formatting cells
Module 4: Working with formulas and functions
Module 5: Summarizing data visually by charts, images and Sparkline
Module 6: Analyzing and organizing data
Module 7: Building and Preparing an Accounting Module in Excel: designing the General Journal, General Ledger, Trial Balance, Income Statement and Balance Sheet (a short sketch of the underlying aggregation follows below).
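
The course builds this module in Excel. Purely to illustrate the aggregation that links a general journal to a trial balance (the account names and amounts below are hypothetical, and Python/pandas is used only for illustration, not as course material):

```python
# Illustrative sketch: derive a trial balance from general-journal entries.
# Accounts and amounts are hypothetical; the course itself builds this in Excel.
import pandas as pd

journal = pd.DataFrame([
    {"account": "Cash",      "debit": 1000.0, "credit": 0.0},
    {"account": "Capital",   "debit": 0.0,    "credit": 1000.0},
    {"account": "Inventory", "debit": 400.0,  "credit": 0.0},
    {"account": "Cash",      "debit": 0.0,    "credit": 400.0},
])

# General ledger: total debits and credits per account.
ledger = journal.groupby("account")[["debit", "credit"]].sum()

# Trial balance: net balance per account; total debits must equal total credits.
ledger["balance"] = ledger["debit"] - ledger["credit"]
print(ledger)
assert ledger["debit"].sum() == ledger["credit"].sum()
```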
 
Key Benefits of Attending
- Learn Excel by real business case studies to help you apply your new skills directly to daily tasks as Expert Excel User.
- Learn and Practice Advanced Skills in Excel 2016 for Daily Accounting & Business Tasks
- Design and Develop Accounting Module Using Excel 2016
- Boost your Data Analysis Skills to support Financial decision making
- Enhance your Business Productivity Skills Using Excel 2016

Training Quality Guarantees:
-  International Curriculum.
-  Certified Instructors with 10+ years of experience in Accounting, Information technology and Training.
 
Certification
  • Certificate Accredited by Exceligent Solutions - Microsoft Learning Partner and Microsoft Certified Trainer & Learning Consultant
  • Certificate of Achievement will only be awarded to those delegates who attend, complete the course and pass this certificate's project.

 

Cost:

Certified


          Professional Diploma in Microsoft Office - Exceligent Academy , Cairo, Giza         
Diploma Objective:
This Diploma provides participants with the knowledge and skills to work with Microsoft Office applications such as Excel, Word, PowerPoint and Outlook. In this Diploma, we deliver the Microsoft curriculum in a practical way to enhance job candidates' skills and maximize productivity and efficiency for anyone who uses Microsoft Office as a vital part of their job functions.
 
 
Key Benefits:
  • Learn how to master Excel 2016 for daily business tasks.
  • Learn how to master business writing skills using Word 2016.
  • Learn how to enhance presentation skills using PowerPoint 2016.
  • Learn how to master email communication & manage tasks effectively.
 
Diploma Contents:
Microsoft Excel 
  • Getting start with Microsoft Excel
  • Working with cells and Formatting cells
  • Working with formulas and functions
  • Summarizing data visually by charts, images and Sparkline
  • Analyzing and Organizing data
  • Printing worksheets and charts
 
Microsoft Word 
  • Getting start with Microsoft Word
  • Enter, edit, and proofread text
  • Modify the structure and appearance of text
  • Insert and modify diagrams ,charts and visual elements
  • Work with mail merge
  • Preview, print, and distribute documents
 
Microsoft PowerPoint
  • Getting Start with Microsoft PowerPoint
  • Create Presentations
  • Work with slides and Slide Text
  • Add animations, audio, and videos & visual enhancements
  • Review and deliver presentations
  • Share and Review presentations
  • Create custom Presentation Elements
           
Microsoft Outlook
  • Get started with Microsoft Outlook
  • Send and Receive Email Messages
  • Organize your Inbox
  • Manage Your Calendar & scheduling
  • Create & Track Tasks
  • Customize Outlook & Manage email settings
         
Certification :
Certificate Accredited by Exceligent Solutions - Microsoft Learning Partner and Microsoft Certified Trainer & Learning Consultant.
Certificate of Achievement will only be awarded to those participants who attend, complete the course and pass this diploma's project.
 
Duration : 42 Hours
Language : English & Arabic

Cost:

Certified


          Bookmarks for 2 ott 2013 through 22 ott 2013        
These are my links for 2 ott 2013 through 22 ott 2013: Digital Attack Map – Digital Attack Map is a live data visualization of DDoS attacks around the globe, built through a collaboration between Google Ideas and Arbor Networks. The tool surfaces anonymous attack traffic data to let users explore historic trends and find … Continue reading "Bookmarks for 2 ott 2013 through 22 ott 2013"
          Expand Your Content Marketing Strategy Using Keyword Data        
This week I had the opportunity to give a webinar for competitive marketing data tool SEMrush titled Using Keyword Data To Crush Your Competition. The webinar focused on some key parts of a content marketing strategy, including: top-level competitor analysis, content gap analysis, data visualisation reasoning, content creation techniques and basic advice on researching key demographics. […]
          Dataviz of the Week: Map It Yourself        
FortiusOne, a Washington-area digital mapping company, has released something called Maker!. It’s a roll-your-own data visualization tool that allows you to mash up maps with a database and produce something that makes Google Maps look like they were produced by a computer running Windows386. Below is a Flash-based map that takes a database of funding […]
          Solve Your NAV Reporting Challenges at NAVUG Focus        

We know that one of the biggest challenges for many companies using Microsoft Dynamics NAV is Reporting.

To help you learn how to work smarter, not harder, attend our NAVUG Focus conference, held May 10-11, 2017 in St. Louis, MO, for in-person training delivered by industry experts who have 'been there, solved that'.

NAVUG Focus grew from the desire of Dynamics users to dig deeper into the topics most pertinent for success in their role/industry. The objective of NAVUG Focus is for attendees to learn how to use Microsoft Dynamics NAV at a strategic and operational level to drive their business forward. Attendees can expect advanced discussions about business processes and use of the application.

Here is a preview of some of the sessions that cover Reporting topics:

  • How I Learned to Love our Data
  • A 12-Step Program for your Financial Reporting
  • It's a New World of Reporting in Dynamics NAV

Click here to browse the NAVUG Focus agenda, comprised of 90-minute, deep-dive sessions concentrating on three specific tracks: CFO/Controller, Developer, and NAV Administrator.

Let the Dynamics NAV User Group (NAVUG) community solve your challenges and save your sanity. Don't forget to register before the Early Bird pricing deadline on March 22!

 

 

Want access to Power BI Deep-Dive Sessions?

The Power BI tool helps organize data visually to create dashboards, spot trends & produce interactive reports. For access to Power BI education, attend PBIUG Focus sessions, held concurrently with NAVUG Focus.

Originally posted on NAVUG.com. 


          Data Visualisation Summit        
Start Date: Thu, 02 Nov 2017
End Date: Fri, 03 Nov 2017
City: #
Description:

The Data Visualisation Summit is the perfect opportunity to share insights and best practices with leaders and experts from across various industries in an interactive environment.

With an impressive line-up confirmed, this event will provide the ideal platform for a deep analysis of the need for organisations to not only invest in, but also understand, data visualisation: from breaking down big data to how it is represented.

Topics to be covered include:

  • Visualising Open Data
  • Storytelling with Data
  • Charting and Visual Tools
  • The Impact of Data Visualisation on Decision Making
For More information please contact Roy Asterley

By Email: rasterley@theiegroup.com 

By Phone: +44 203-868-0033


          Big Data & Analytics Innovation Summit        
Start Date: Wed, 25 Apr 2018
End Date: Thu, 26 Apr 2018
City: #
Description:

The summit brings together business leaders and innovators from the industry for an event acclaimed for its interactive format, combining keynote presentations, interactive breakout sessions and open discussion. 

Make sure to check back regularly for schedule additions and changes. Click the box on your right to view the full agenda.

Topics covered include:

  • Data Automation
  • Data Visualisation
  • Data Analytics & Customer Centric, Data Centric 
  • Algorithmic Risk Assessment 
  • Big Data & eCommerce 
  • Logical Data Warehousing
  • Real Time Analytics
  • Artificial Intelligence & Machine Learning 
  • IoT & Cloud, Modelling
  • Enterprise Architecture, Fin Tech
  • Cyber Security 


          Open Data Innovation Summit        
Start Date: Mon, 12 Jun 2017
End Date: Tue, 13 Jun 2017
City: #
Description:

The Open Data Innovation Summit agenda will cover the following areas:

  • Finding and utilising open-source information
  • Open data visualisation and public engagement
  • Data privacy and security
  • Open data for decision making
  • Shared insight between public and private sector organisations for mutual benefit.

With an impressive line-up confirmed, this is the perfect opportunity to share ideas and best practices with leaders and experts from across various industries in an interactive environment.

Agenda to be released early March - for more information, please contact Jordan Charalampous at jc@theiegroup.com or +44 203 868 0306



          DigiTech Festival        
Start Date: Mon, 12 Jun 2017
End Date: Tue, 13 Jun 2017
City: #
Description:




          Big Data & Analytics Innovation Summit        
Start Date: Wed, 26 Apr 2017
End Date: Thu, 27 Apr 2017
City: #
Description:

The summit brings together business leaders and innovators from the industry for an event acclaimed for its interactive format, combining keynote presentations, interactive breakout sessions and open discussion. 

Make sure to check back regularly for schedule additions and changes. Click the box on your right to view the full agenda.

Topics covered include:

  • Data Automation
  • Data Visualisation
  • Data Analytics & Customer Centric, Data Centric 
  • Algorithmic Risk Assessment 
  • Big Data & eCommerce 
  • Logical Data Warehousing
  • Real Time Analytics
  • Artificial Intelligence & Machine Learning 
  • IoT & Cloud, Modelling
  • Enterprise Architecture, Fin Tech
  • Cyber Security 


          Data Visualisation Summit        
Start Date: Wed, 16 Nov 2016
End Date: Thu, 17 Nov 2016
City: #
Description:

The Data Visualisation Summit is the perfect opportunity to share insights and best practices with leaders and experts from across various industries in an interactive environment.

With an impressive line-up confirmed, this event will provide the ideal platform for a deep analysis of the need for organisations to not only invest in, but also understand, data visualisation: from breaking down big data to how it is represented.

Topics to be covered include:

  • Visualising Open Data
  • Storytelling with Data
  • Charting and Visual Tools
  • The Impact of Data Visualisation on Decision Making
For More information please contact Roy Asterley

By Email: rasterley@theiegroup.com 

By Phone: +44 203-868-0033


          Data Visualisation Summit        
Start Date: Wed, 11 Nov 2015
End Date: Thu, 12 Nov 2015
City: #
Description:

The Data Visualisation Summit is the perfect opportunity to share insights and best practices with leaders and experts from across the industry in an interactive environment.

With an impressive line-up confirmed, this event will provide the ideal platform for a deep analysis of the need for organisations to not only invest in, but also understand, data visualisation: from breaking down big data and how it is represented, to the power and danger of data visualisation.

Topics to be covered include:

  • Visualizing Open Data
  • Storytelling with Data
  • Charting Connections & Trends in Social Media
  • The impact of Data Viz on Decision Making


          Big Data Innovation        
Start Date: Thu, 25 Sep 2014
End Date: Fri, 26 Sep 2014
City: #
Description:

The Big Data Innovation Summit is the largest gathering of Fortune 500 business executives leading Big Data initiatives.

We are currently accepting speaker submissions for the 2014 event; if you have something to share, you can submit a speaker submission here.

The summit will comprise multiple tracks, covering the most current topics in Big Data today:



          Data Visualisation Summit        
Start Date: Wed, 14 May 2014
End Date: Thu, 15 May 2014
City: #
Description:

The Data Visualisation Summit is the perfect opportunity to share insights and best practices with leaders and experts from across the industry in an interactive environment.

With an impressive line-up confirmed, this event will provide the ideal platform for a deep analysis of the need for organisations to not only invest in, but also understand, data visualisation: from breaking down big data and how it is represented, to the power and danger of data visualisation. 



          Big Data Innovation Summit        
Start Date: Thu, 19 Sep 2013
End Date: Fri, 20 Sep 2013
City: #
Description:

The Big Data Innovation Summit is the largest gathering of senior business executives leading Big Data initiatives.

The Summit includes six tracks; please click below for more information:



          Data Visualization Summit        
Start Date: Thu, 12 Sep 2013
End Date: Fri, 13 Sep 2013
City: #
Description:

The Data Visualization Summit is the perfect opportunity to share insights and best practices with leaders and experts from across the industry in an interactive environment.

Topics Include:

  • Open Data
  • Impact of Data Visualization on Business
  • Data Sketching & Process
  • Storytelling with Data Visualizations
  • User Interaction
  • Creating Visualizations for Mobile


          Data Visualization Summit        
Start Date: Thu, 11 Apr 2013
End Date: Fri, 12 Apr 2013
City: #
Description:

The Data Visualization Summit is the perfect opportunity to share insights and best practices with leaders and experts from across the industry in an interactive environment.

With an impressive line-up confirmed, this event will provide the ideal platform for a deep analysis of the need for organizations to not only invest in, but also understand, data visualization: from breaking down big data and how it is represented, to the power and danger of data visualization. 



              Spoiler Alert        
    Spoiler Alert is actively looking for a handful of interns to join its team. Positions are available in Software Engineering, Business Development, Marketing, and Data Visualization and range from full-time (summer) to part-time opportunities for both undergraduate and graduate students. To learn more about the roles and to apply, visit www.spoileralert.com/careers. Spoiler Alert is a...
              AWS re:Invent 2015 Video & Slide Presentation Links with Easy Index        
    As with last year, here is my quick index of all re:Invent sessions.  Please wait for a few days and I'll keep running the tool to fill in the index.  It usually takes Amazon a few weeks to fully upload all the videos and slideshares.

    See below for how I created the index (with code):


    WRK307 - A Well-Architected Workshop: Working with the AWS Well-Architected Framework
    This workshop describes the AWS Well-Architected Framework, which enables customers to assess and improve their cloud architectures and better understand the business impact of their design decisions. It addresses general design principles, best practices, and guidance in four pillars of the Well-Architected Framework. We will work in teams, assisted by AWS Solutions Architects, to review an example architecture, identifying issues and how to improve the system. You will need to have architecture experience to get the most from this workshop. After attending this workshop you will be able to review an architecture and identify potential issues across the four pillars of Well-Architected: security, performance efficiency, reliability, and cost optimization. Prerequisites: Architecture experience. Optional - review the AWS Well-Architected Framework whitepaper. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    WRK306 - AWS Professional Services Architecting Workshop
    The AWS Professional Services team will be facilitating an architecture workshop exercise for certified AWS architects, with a class size limited to 40. In this highly interactive architecture design exercise, the class will be randomly divided into teams and given a business case for which to design an effective AWS solution. Flipcharts will be provided, and students are encouraged to bring their laptops to document their designs. Each team will be expected to present their solution to the class. Prerequisites: Participants should be certified AWS Architects. Bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 40 attendees. The session will be offered twice on October 7 and twice on October 8, using the same case study for each to allow for scheduling flexibility. Attendance is on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    ARC403 - From One to Many: Evolving VPC Design
    As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to S3, managing multitenant VPCs, integrating existing customer networks through AWS Direct Connect and building a full VPC mesh network across global regions.
    ARC402 - Double Redundancy with AWS Direct Connect
    AWS Direct Connect provides low latency and high performance connectivity to the AWS cloud by allowing the provision of physical fiber from the customer's location or data center into AWS Direct Connect points of presence. This session covers design considerations around AWS Direct Connect solutions. We will discuss how to design and configure physical and logical redundancy using both physically redundant fibers and logical VPN connectivity, and the session includes a live demo showing both the configuration and the failure of a doubly redundant connectivity solution. This session is for network engineers/architects, technical professionals, and infrastructure managers who have a working knowledge of Amazon VPC, Amazon EC2, general networking, and routing protocols.
    ARC401 - Cloud First: New Architecture for New Infrastructure
    What do companies with internal platforms have to change to succeed in the cloud? The five pillars at the heart of IT solutions in the cloud are automation, fault tolerance, horizontal scalability, security, and cost-effectiveness. This talk discusses tools that facilitate the development and automate the deployment of secure, highly available microservices. The tools were developed using AWS CloudFormation, AWS SDKs, AWS CLI, Amazon RDS, and various open-source software such as Docker. The talk provides concrete examples of how these tools can help developers and architects move from beginning/intermediate AWS practitioners to cloud deployment experts.
    ARC348 - Seagull: How Yelp Built a Highly Fault-tolerant Distributed System for Concurrent Task Execution
    Efficiently parallelizing mutually exclusive tasks can be a challenging problem when done at scale. Yelp's recent in-house product, Seagull, demonstrates how an intelligent scheduling system can use several open-source products to provide a highly scalable and fault-tolerant distributed system. Learn how Yelp built Seagull with a variety of Amazon Web Services to concurrently execute thousands of tasks that can greatly improve performance. Seagull combines open-source software like ElasticSearch, Mesos, Docker, and Jenkins with Amazon Web Services (AWS) to parallelize Yelp's testing suite. Our current use case of Seagull involves distributively running Yelp's test suite that has over 55,000 test cases. Using our smart scheduling, we can run one of our largest test suites to process 42 hours of serial work in less than 10 minutes using 200 r3.8xlarge instances from Amazon Elastic Compute Cloud (Amazon EC2). Seagull consumes and produces data at very high rates. On a typical day, Seagull writes 60 GBs of data and consumes 20 TBs of data. Although we are currently using Seagull to parallelize test execution, it can efficiently parallelize other types of independent tasks.
    ARC346-APAC - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS (APAC track)
    What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
    ARC346 - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS
    What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS.
    ARC344 - How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
    Intuit has an "all in" strategy for adopting the AWS cloud. We have already moved some large workloads supporting our flagship products (TurboTax, Mint) and expect to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as to isolate individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization's migration strategy. We share design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS and facing the difficult decisions and tradeoffs in structuring the deployment.
    ARC342 - Closing the Loop: Designing and Building an End-to-End Email Solution Using AWS
    Email continues to be a critical medium for communications between businesses and customers and remains an important channel for building automation around sending and receiving messages. Email automation enables use cases like updating a ticketing system or a forum via email, logging and auditing an email conversation, subscribing and unsubscribing from email lists via email, transferring small files via email, and updating email contents before delivery. This session implements and presents live code that covers a use case supported by Amazon.com's seller business: how to protect your customers' privacy by anonymizing email for third-party business-to-business communication on your platform. With Amazon SES and the help of Amazon S3, AWS Lambda, and Amazon DynamoDB, we cover architecture, walk through code as we build an application live, and present a demonstration of the final implementation. View Less
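    The session above builds its anonymizing relay live; purely as a rough illustration of the moving parts it names (SES receipt, S3 storage, a DynamoDB lookup, and re-sending via SES), here is a minimal Lambda sketch. The bucket, table, and domain names are invented for the example, and this is not the presenters' actual code.

```python
# Hedged sketch of an SES "anonymizing relay" Lambda, in the spirit of the
# session description. Bucket, table, and domain names are hypothetical.
import boto3
import email

s3 = boto3.client("s3")
ses = boto3.client("ses")
aliases = boto3.resource("dynamodb").Table("email-aliases")  # hypothetical table

RAW_BUCKET = "inbound-mail-raw"  # hypothetical: an SES receipt rule stores raw mail here

def handler(event, context):
    # An SES receipt rule can invoke Lambda with the message ID; the raw MIME
    # content is assumed to have been written to S3 by an S3 action first.
    message_id = event["Records"][0]["ses"]["mail"]["messageId"]
    raw = s3.get_object(Bucket=RAW_BUCKET, Key=message_id)["Body"].read()
    msg = email.message_from_bytes(raw)

    # Look up the real recipient behind the anonymous alias address.
    alias = msg["To"].split("@")[0]
    item = aliases.get_item(Key={"alias": alias}).get("Item")
    if not item:
        return {"status": "dropped"}  # unknown alias

    # Rewrite headers so neither party sees the other's real address.
    del msg["To"]
    del msg["From"]
    msg["To"] = item["real_address"]
    msg["From"] = alias + "@relay.example.com"

    ses.send_raw_email(Source=msg["From"],
                       Destinations=[msg["To"]],
                       RawMessage={"Data": msg.as_bytes()})
    return {"status": "forwarded"}
```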
    ARC340 - Multi-tenant Application Deployment Models
    Shared pools of resources? Microservices in containers? Isolated application stacks? You have many architectural models and AWS services to consider when you deploy applications on AWS. This session focuses on several common models and helps you choose the right path or paths to fit your application needs. Architects and operations managers should consider this session to help them choose the optimal path for their application deployment needs for their current and future architectures. This session covers services such as Amazon Elastic Compute Cloud (Amazon EC2), EC2 Container Services, AWS Lambda, and AWS CodeDeploy. View Less
    ARC313 - Future Banks Live in the Cloud: Building a Usable Cloud with Uncompromising Security
    Running today's largest consumer bitcoin startup comes with a target on your back and requires an uncompromising approach to security. This talk explores how Coinbase is learning from the past and pulling out all the stops to build a secure infrastructure behind an irreversibly transferrable digital good for millions of users. This session will cover cloud architecture, account and network isolation in the AWS cloud, disaster recovery, self-service consensus-based deployment, real-time streaming insight, and how Coinbase is leveraging practical DevOps to build the bank of the future. View Less
    ARC311 - Decoding the Genetic Blueprint of Life on a Cloud Connected Ecosystem
    Thermo Fisher Scientific, a world leader in biotechnology, has built a new polymerase chain reaction (PCR) system for DNA sequencing. Designed for low- to mid-level throughput laboratories that conduct real-time PCR experiments, the system runs on individual QuantStudio devices. These devices are connected to Thermo Fisher's cloud computing platform, which is built on AWS using Amazon EC2, Amazon DynamoDB, and Amazon S3. With this single platform, applied and clinical researchers can learn, analyze, share, collaborate, and obtain support. Researchers worldwide can now collaborate online in real time and access their data wherever and whenever necessary. Laboratories can also share experimental conditions and results with their partners while providing a uniform experience for every user and helping to minimize training and errors. The net result is increased collaboration, faster time to market, fewer errors, and lower cost. We have architected a solution that uses Amazon EMR, DynamoDB, Amazon ElastiCache, and S3. In this presentation, we share our architecture, lessons learned, best design patterns for NoSQL, strategies for leveraging EMR with DynamoDB, and a flexible solution that our scientists use. We also share our next step in architecture evolution.
    ARC310-APAC - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis (APAC track)
    The Amazon.com product catalog receives millions of updates each hour across billions of products, and many of the updates involve comparatively few products. In this session, hear how Amazon.com has used Amazon Kinesis to build a pipeline orchestrator that provides sequencing, optimistic batching, and duplicate suppression while at the same time significantly lowering costs.
    ARC310 - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis
    The Amazon.com product catalog receives millions of updates an hour across billions of products with many of the updates concentrated on comparatively few products. In this session, hear how Amazon.com has used Amazon Kinesis to build a pipeline orchestrator that provides sequencing, optimistic batching, and duplicate suppression whilst at the same time significantly lowering costs. This session covers the architecture of that solution and draws out the key enabling features that Amazon Kinesis provides. This talk is intended for those who are interested in learning more about the power of the distributed log and understanding its importance for enabling OLTP just as DHT is for storage. View Less
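    The orchestrator described above is internal to Amazon.com; the sketch below only illustrates the generic Kinesis primitives the abstract mentions (optimistic batching with PutRecords, partition keys for per-product sequencing, and sequence-number-based duplicate suppression). The stream and field names are assumptions.

```python
# Minimal sketch of batching and duplicate suppression on Amazon Kinesis.
# This illustrates the generic primitives only; it is not Amazon's orchestrator.
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM = "catalog-updates"  # hypothetical stream name

def put_batch(updates):
    """Optimistically batch many small updates into one PutRecords call."""
    records = [{"Data": json.dumps(u).encode("utf-8"),
                "PartitionKey": u["product_id"]}  # keeps a product's updates in order
               for u in updates]
    resp = kinesis.put_records(StreamName=STREAM, Records=records)
    # Failed records can be retried individually.
    return resp["FailedRecordCount"]

def consume(shard_id, seen):
    """Read a shard, suppressing records that were already processed."""
    it = kinesis.get_shard_iterator(StreamName=STREAM, ShardId=shard_id,
                                    ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
    out = kinesis.get_records(ShardIterator=it, Limit=100)
    for rec in out["Records"]:
        if rec["SequenceNumber"] in seen:      # duplicate suppression
            continue
        seen.add(rec["SequenceNumber"])
        handle(json.loads(rec["Data"]))

def handle(update):
    pass  # apply the catalog update downstream
```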
    ARC309 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
    Gilt, a billion dollar e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Derek Chiles, AWS solutions architect, will review best practices and recommended architectures for deploying microservices on AWS. Adrian Trenaman, SVP of engineering at Gilt, will share Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud. View Less
    ARC308-APAC - The Serverless Company with AWS Lambda: Streamlining Architecture with AWS (APAC track)
    In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Find out how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system by using AWS Lambda and Amazon S3. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean. View Less
    ARC308 - The Serverless Company Using AWS Lambda: Streamlining Architecture with AWS
    In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Come see how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system using Lambda and Amazon S3.
    ARC307 - Infrastructure as Code
    While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a "programmable infrastructure" that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session walks through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
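    As a small, hedged illustration of the "infrastructure as code" idea, the snippet below drives AWS CloudFormation from Python with boto3, using a deliberately tiny inline template. The stack name and template are placeholders, not a recommended layout.

```python
# Illustrative only: launching an application-specific stack from code with
# AWS CloudFormation via boto3. The template body and names are made up.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

def deploy(stack_name="demo-app-infra"):
    cfn.create_stack(StackName=stack_name, TemplateBody=TEMPLATE)
    # Block until the stack reaches CREATE_COMPLETE (or fails).
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)

if __name__ == "__main__":
    deploy()
```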
    ARC305 - Self-service Cloud Services: How J&J Is Managing AWS at Scale for Enterprise Workloads
    Johnson & Johnson is a global health care leader with 270 operating companies in 60 countries. Operating at this scale requires a decentralized model that supports the autonomy of the different companies under the J&J umbrella, while still allowing knowledge and infrastructure frameworks to be shared across the different businesses. To address this problem, J&J created an Amazon VPC, which provides simplified architecture patterns that J&J's application teams leveraged throughout the company using a self-service model while adhering to critical internal controls. Hear how J&J leveraged Amazon S3, Amazon Redshift, Amazon RDS, Amazon DynamoDB, and Amazon Kinesis to develop these architecture patterns for various use cases, allowing J&J's businesses to use AWS for its agility while still adhering to all internal policies automatically. Learn how J&J uses this model to build advanced analytic platforms to ingest large streams of structured and unstructured data, which minimizes the time to insight in a variety of areas, including physician compliance, bioinformatics, and supply chain management. View Less
    ARC304 - Designing for SaaS: Next-Generation Software Delivery Models on AWS
    SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors from security to cost optimization. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models are tuned for customer specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation. View Less
    ARC303 - Pure Play Video OTT: A Microservices Architecture in the Cloud
    An end-to-end, over-the-top (OTT) video system is built of many interdependent architectural tiers, ranging from content preparation, content delivery, and subscriber and entitlement management, to analytics and recommendations. This talk provides a detailed exploration of how to architect a media platform that allows for growth, scalability, security, and business changes at each tier, based on real-world experiences delivering over 100 Gbps of concurrent video traffic with 24/7/365 linear TV requirements. Finally, learn how Verizon uses AWS, including Amazon Redshift and Amazon Elastic MapReduce, to power its recently launched mobile video application Go90. Using a mixture of AWS services and native applications, we address the following scaling challenges:
    - Content ingest, preparation, and distribution
    - Operation of a 24x7x365 linear OTT playout platform
    - Common pitfalls with transcode and content preparation
    - Multi-DRM and packaging to allow cross-platform playback
    - Efficient delivery and multi-CDN methodology to allow for a perfect experience globally
    - Kinesis as a dual-purpose system for both analytics and concurrency access management
    - Integration with machine learning for an adaptive recommendation system, with real-time integration between content history and advertising data
    - User, entitlement, and content management
    - General best practices for cloud architectures and their integration with Amazon Web Services: infrastructure as code, disposable and immutable infrastructure, code deployment and release management, DevOps, and microservices architectures
    This session is great for architects, engineers, and CTOs within media and entertainment, or others simply interested in decoupled architectures.
    ARC302 - Running Lean Architectures: How to Optimize for Cost Efficiency
    Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We'll cover how you can effectively combine EC2 On-Demand, Reserved, and Spot instances to handle different use cases, leveraging auto scaling to match capacity to workload, choosing the most optimal instance type through load testing, taking advantage of multi-AZ support, and using CloudWatch to monitor usage and automatically shut off resources when not in use. We'll discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely, by leveraging AWS high-level services. We will also showcase simple tools to help track and manage costs, including the AWS Cost Explorer, Billing Alerts, and Trusted Advisor. This session will be your pocket guide for running cost effectively in the Amazon cloud. View Less
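    One of the tactics mentioned above, using Amazon CloudWatch to shut off resources when not in use, can be sketched in a few lines of boto3. The auto-stop tag and the 5% CPU threshold are arbitrary assumptions for illustration.

```python
# Rough sketch: stop EC2 instances whose average CPU has been low for an hour.
# The tag filter and the 5% threshold are arbitrary assumptions.
import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

def idle_instances(threshold=5.0):
    ids = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:auto-stop", "Values": ["true"]},   # hypothetical tag
                 {"Name": "instance-state-name", "Values": ["running"]}])["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            stats = cw.get_metric_statistics(
                Namespace="AWS/EC2", MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=datetime.utcnow() - timedelta(hours=1),
                EndTime=datetime.utcnow(), Period=3600, Statistics=["Average"])
            points = stats["Datapoints"]
            if points and points[0]["Average"] < threshold:
                ids.append(inst["InstanceId"])
    return ids

if __name__ == "__main__":
    idle = idle_instances()
    if idle:
        ec2.stop_instances(InstanceIds=idle)
```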
    ARC301 - Scaling Up to Your First 10 Million Users
    Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud. View Less
    ARC201 - Microservices Architecture for Digital Platforms with AWS Lambda, Amazon CloudFront and Amazon DynamoDB
    Digital platforms are by nature resource intensive, expensive to build, and difficult to manage at scale. What if we can change this perception and help AWS customers architect a digital platform that is low cost and low maintenance? This session describes the underlying architecture behind dam.deep.mg, the Digital Asset Management system built by Mitoc Group and powered by AWS abstracted services like AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. Eugene Istrati, the CTO of Mitoc Group, will dive deep into their approach to microservices architecture on serverless environments and demonstrate how anyone can architect AWS abstracted services to achieve high scalability, high availability, and high performance without huge efforts or expensive resources allocation. View Less
    WRK304 - Build a Recommendation Engine and Use Amazon Machine Learning in Real Time
    Build an exciting machine learning model for recommending top restaurants for a customer in real time, based on past orders and viewing history. In this guided session you will get hands-on with data cleansing, building an Amazon Machine Learning model, and doing real-time predictions. A dataset will be provided. Prerequisites: Participants should have an AWS account established and available for use during the workshop. Participants should bring their own laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
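    For a sense of what the real-time prediction step might look like, here is a hedged boto3 call against the Amazon Machine Learning real-time Predict API. The model ID, endpoint, and feature names are placeholders; the workshop's own dataset and model will differ.

```python
# Hedged sketch of a real-time prediction call against Amazon Machine Learning.
# The model ID, endpoint URL, and feature names are placeholders.
import boto3

ml = boto3.client("machinelearning")

def recommend_score(customer_id, restaurant_id):
    resp = ml.predict(
        MLModelId="ml-XXXXXXXXXXXX",                      # placeholder model ID
        Record={"customer_id": customer_id,               # feature values are strings
                "restaurant_id": restaurant_id,
                "hour_of_day": "19"},
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com")
    # For a regression-style model the numeric score is in predictedValue.
    return resp["Prediction"]["predictedValue"]
```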
    WRK303 - Real-World Data Warehousing with Amazon Redshift and Big Data Solutions from AWS Marketplace
    In this workshop, you will work with other attendees as a small team to build an end-to-end data warehouse using Amazon Redshift and by leveraging key AWS Marketplace partners. Your team will learn how to build a data pipeline using an ETL partner from the AWS Marketplace to perform common validation and aggregation tasks in a data ingestion pipeline. Your team will then learn how to build dashboards and reports using a data visualization partner from AWS Marketplace, for interactive analysis of large datasets in Amazon Redshift. In less than 2 hours your team will build a fully functional solution to discover meaningful insights from raw datasets. The session also showcases how you can extend this solution further to create a near real-time solution by leveraging Amazon Kinesis and other AWS Big Data services. Prerequisites: Hands-on experience with AWS. Some prior experience with databases, SQL, and familiarity with data-warehousing concepts. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
    WRK301 - Implementing Twitter Analytics Using Spark Streaming, Scala, and Amazon EMR
    Over the course of this workshop, we will launch a Spark cluster and deploy a Spark Streaming application written in Scala that analyzes popular tags flowing out of Twitter. Along the way we will learn about Amazon EMR, Spark, Spark Streaming, Scala, and how to deploy applications into Spark clusters on Amazon EMR. Prerequisites: Participants are expected to be familiar with building modest-size applications in Scala. Participants should have an AWS account established and available for use during the workshop. Please bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees. Attendance is based on a first come, first served basis once onsite. Scheduling tools in the session catalog are for planning purposes only.
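    The workshop itself is in Scala with the Spark Streaming Twitter connector; purely as a sketch of the same hashtag-counting shape, here is a rough PySpark equivalent that reads from a placeholder socket source instead of Twitter.

```python
# Rough PySpark equivalent of the hashtag-counting idea (the workshop itself
# uses Scala and the spark-streaming-twitter connector). The socket source
# stands in for the Twitter stream.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="popular-tags")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)   # placeholder text source
tags = (lines.flatMap(lambda line: line.split())
             .filter(lambda word: word.startswith("#"))
             .map(lambda tag: (tag.lower(), 1))
             .reduceByKey(lambda a, b: a + b))

# Print the top tags of each batch to the driver log.
tags.foreachRDD(lambda rdd: print(rdd.takeOrdered(10, key=lambda kv: -kv[1])))

ssc.start()
ssc.awaitTermination()
```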
    BDT404 - Building and Managing Large-Scale ETL Data Flows with AWS Data Pipeline and Dataduct
    As data volumes grow, managing and scaling data pipelines for ETL and batch processing can be daunting. With more than 13.5 million learners worldwide, hundreds of courses, and thousands of instructors, Coursera manages over a hundred data pipelines for ETL, batch processing, and new product development. In this session, we dive deep into AWS Data Pipeline and Dataduct, an open source framework built at Coursera to manage pipelines and create reusable patterns to expedite developer productivity. We share the lessons learned during our journey: from basic ETL processes, such as loading data from Amazon RDS to Amazon Redshift, to more sophisticated pipelines to power recommendation engines and search services. Attendees learn:
    - Do's and don'ts of Data Pipeline
    - Using Dataduct to streamline your data pipelines
    - How to use Data Pipeline to power other data products, such as recommendation systems
    - What's next for Dataduct
    BDT403 - Best Practices for Building Real-time Streaming Applications with Amazon Kinesis
    Amazon Kinesis is a fully managed, cloud-based service for real-time data processing over large, distributed data streams. Customers who use Amazon Kinesis can continuously capture and process real-time data such as website clickstreams, financial transactions, social media feeds, IT logs, location-tracking events, and more. In this session, we first focus on building a scalable, durable streaming data ingest workflow, from data producers like mobile devices, servers, or even a web browser, using the right tool for the right job. Then, we cover code design that minimizes duplicates and achieves exactly-once processing semantics in your elastic stream-processing application, built with the Kinesis Client Library. Attend this session to learn best practices for building a real-time streaming data architecture with Amazon Kinesis, and get answers to technical questions frequently asked by those starting to process streaming events. View Less
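    The Kinesis Client Library is a Java library, so rather than reproduce it here, the sketch below illustrates one common way to get the "exactly-once processing semantics" the abstract mentions: make the sink idempotent with a DynamoDB conditional write keyed by a unique event ID. The table name and event shape are assumptions.

```python
# Hedged illustration of the "effectively once" idea: make the sink idempotent
# with a conditional write keyed by a unique event ID, so redelivered records
# become harmless no-ops. Table name and event shape are assumptions.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-events")  # hypothetical table

def process_once(event):
    try:
        # The write succeeds only the first time this event ID is seen.
        table.put_item(Item={"event_id": event["id"], "payload": event["data"]},
                       ConditionExpression="attribute_not_exists(event_id)")
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # real failure; duplicates are silently skipped
```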
    BDT402 - Delivering Business Agility Using AWS
    Wipro is one of India's largest publicly traded companies and the seventh largest IT services firm in the world. In this session, we showcase the structured methods that Wipro has used in enabling enterprises to take advantage of the cloud. These cover identifying workloads and application profiles that could benefit, re-structuring enterprise application and infrastructure components for migration, rapid and thorough verification and validation, and modifying component monitoring and management. Several of these methods can be tailored to the individual client or functional context, so specific client examples are presented. We also discuss the enterprise experience of enabling many non-IT functions to benefit from the cloud, such as sales and training. More functions included in the cloud increase the benefit drawn from a cloud-enabled IT landscape. Session sponsored by Wipro. View Less
    BDT401 - Amazon Redshift Deep Dive: Tuning and Best Practices
    Get a look under the covers: Learn tuning best practices for taking advantage of Amazon Redshift's columnar technology and parallel processing capabilities to improve your delivery of queries and improve overall database performance. This session explains how to migrate from existing data warehouses, create an optimized schema, efficiently load data, use work load management, tune your queries, and use Amazon Redshift's interleaved sorting features. Finally, learn how TripAdvisor uses these best practices to give their entire organization access to analytic insights at scale.  View Less
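    As a hedged example of the kind of schema and load choices the session covers, the snippet below creates a table with a distribution key and a compound sort key and then runs a parallel COPY from S3, using psycopg2 against a Redshift cluster. The cluster endpoint, IAM role, and table layout are placeholders.

```python
# Hedged example of common Redshift schema and load choices: a distribution key,
# a compound sort key, and a parallel COPY from S3. Endpoint, credentials, role,
# and table layout are placeholders.
import psycopg2

conn = psycopg2.connect(host="my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com",
                        port=5439, dbname="analytics", user="admin", password="...")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        view_date   DATE     NOT NULL,
        user_id     BIGINT   NOT NULL,
        url         VARCHAR(2048)
    )
    DISTKEY (user_id)                -- co-locate a user's rows on one slice
    COMPOUND SORTKEY (view_date);    -- range-restricted scans on date
""")

cur.execute("""
    COPY page_views
    FROM 's3://my-bucket/page-views/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV GZIP;
""")
conn.commit()
```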
    BDT324 - Big Data Optimized for the AWS Cloud
    Apache Hadoop is now a foundational platform for big data processing and discovery that drives next-generation analytics. While Hadoop was designed when cloud models were in their infancy, the open source platform works remarkably well in production environments in the cloud. This talk will cover use cases for running big data in the cloud and share examples of organizations that have experienced real-world success on AWS. We will also look at new software and hardware innovations that are helping companies get more value from their data. Session sponsored by Intel. View Less
    BDT323 - Amazon EBS and Cassandra: 1 Million Writes Per Second on 60 Nodes
    With the introduction of Amazon Elastic Block Store (EBS) GP2 and recent stability improvements, EBS has gained credibility in the Cassandra world for high-performance workloads. By running Cassandra on Amazon EBS, you can run denser, cheaper Cassandra clusters with just as much availability as ephemeral storage instances. This talk walks through a highly detailed use case and configuration guide for a multi-petabyte, million-write-per-second cluster that needs to be high-performing and cost-efficient. We explore the instance type choices, configuration, and low-level tuning that allowed us to hit 1.3 million writes per second with a replication factor of 3 on just 60 nodes.
    BDT322 - How Redfin and Twitter Leverage Amazon S3 to Build Their Big Data Platforms
    Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms. View Less
    BDT320 - NEW LAUNCH! Streaming Data Flows with Amazon Kinesis Firehose and Amazon Kinesis Analytics
    Amazon Kinesis Firehose is a fully-managed, elastic service to deliver real-time data streams to Amazon S3, Amazon Redshift, and other destinations. In this session, we start with overviews of Amazon Kinesis Firehose and Amazon Kinesis Analytics. We then discuss how Amazon Kinesis Firehose makes it even easier to get started with streaming data, without writing a stream processing application or provisioning a single resource. You learn about the key features of Amazon Kinesis Firehose, including its companion agent that makes emitting data from data producers even easier. We walk through capture and delivery with an end-to-end demo, and discuss key metrics that will help developers and architects understand their streaming data flow. Finally, we look at some patterns for data consumption as the data streams into S3. We show two examples: using AWS Lambda, and how you can use Apache Spark running within Amazon EMR to query data directly in Amazon S3 through EMRFS. View Less
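    A minimal producer sketch for the Firehose side, assuming a delivery stream already configured to land in S3; the stream name and event shape are placeholders.

```python
# Minimal producer sketch for Amazon Kinesis Firehose: push JSON records into a
# delivery stream and let the service batch and deliver them to S3.
# The delivery stream name is a placeholder.
import json
import boto3

firehose = boto3.client("firehose")

def emit(events, stream="clickstream-to-s3"):
    # Newline-delimited JSON keeps the delivered S3 objects easy to query later.
    records = [{"Data": (json.dumps(e) + "\n").encode("utf-8")} for e in events]
    resp = firehose.put_record_batch(DeliveryStreamName=stream, Records=records)
    return resp["FailedPutCount"]

if __name__ == "__main__":
    emit([{"user": "u1", "page": "/home"}, {"user": "u2", "page": "/cart"}])
```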
    BDT319 - NEW LAUNCH! Amazon QuickSight: Very Fast, Easy-to-Use, Cloud-native Business Intelligence
    Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. In this session, we demonstrate how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We also introduce SPICE -  a new Super-fast, Parallel, In-memory, Calculation Engine in Amazon QuickSight, which performs advanced calculations and render visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users and petabytes of data. Lastly, you will see how Amazon QuickSight provides you with smart visualizations and graphs that are optimized for your different data types, to ensure the most suitable and appropriate visualization to conduct your analysis, and how to share these visualization stories using the built-in collaboration tools. View Less
    BDT318 - Netflix Keystone: How Netflix Handles Data Streams Up to 8 Million Events Per Second
    In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenges of zero loss while processing over 400 billion events daily. The session covers in detail how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events & 17 GB per second during peak. View Less
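    Keystone itself is Netflix-internal, so the snippet below is only a generic illustration of the ingest side it describes: a kafka-python producer emitting JSON events to a topic. The broker address, topic, and acks setting are assumptions.

```python
# Generic illustration only (Keystone is Netflix-internal): emitting JSON events
# to a Kafka topic with the kafka-python client. Broker and topic are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks=1)  # trade a little durability for throughput on the fronting tier

def emit(event):
    producer.send("events", value=event)

emit({"type": "play", "title_id": 42, "ts": 1449000000})
producer.flush()
```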
    BDT317 - Building a Data Lake on AWS
    Conceptually, a data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created "on demand", providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we introduce key concepts for a data lake and present aspects related to its implementation. We discuss critical success factors and pitfalls to avoid, as well as operational aspects such as security, governance, search, indexing, and metadata management. We also provide insight on how AWS enables a data lake architecture. Attendees get practical tips and recommendations to get started with their data lake implementations on AWS.
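    One commonly used building block for such a data lake is date-partitioned raw storage on Amazon S3 with lightweight object metadata; the sketch below shows that pattern with boto3. The bucket name and key layout are assumptions, not a prescription from the session.

```python
# Illustrative building block for a data lake on S3: land raw events under a
# date-partitioned prefix, attaching lightweight metadata for later discovery.
# Bucket name and layout are assumptions.
import json
import gzip
from datetime import datetime
import boto3

s3 = boto3.client("s3")
BUCKET = "acme-data-lake-raw"  # hypothetical bucket

def land(source, events):
    now = datetime.utcnow()
    key = (f"raw/{source}/year={now:%Y}/month={now:%m}/day={now:%d}/"
           f"{now:%H%M%S}.json.gz")
    body = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))
    s3.put_object(Bucket=BUCKET, Key=key, Body=body,
                  Metadata={"source": source, "record-count": str(len(events))})
    return key
```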
    BDT316 - Offloading ETL to Amazon Elastic MapReduce
    Amgen discovers, develops, manufactures, and delivers innovative human therapeutics, helping millions of people in the fight against serious illnesses. In 2014, Amgen implemented a solution to offload ETL data across a diverse data set (U.S. pharmaceutical prescriptions and claims) using Amazon EMR. The solution has transformed the way Amgen delivers insights and reports to its sales force. To support Amgen's entry into a much larger market, the ETL process had to scale to eight times its existing data volume. We used Amazon EC2, Amazon S3, Amazon EMR, and Amazon Redshift to generate weekly sales reporting metrics. This session discusses highlights in Amgen's journey to leverage big data technologies and lay the foundation for future growth: benefits of ETL offloading in Amazon EMR as an entry point for big data technologies; benefits and challenges of using Amazon EMR vs. expanding on-premises ETL and reporting technologies; and how to architect an ETL offload solution using Amazon S3, Amazon EMR, and Impala. View Less
    BDT314 - Running a Big Data and Analytics Application on Amazon EMR and Amazon Redshift with a Focus on Security
    No matter the industry, leading organizations need to closely integrate, deploy, secure, and scale diverse technologies to support workloads while containing costs. Nasdaq, Inc., a leading provider of trading, clearing, and exchange technology, is no exception. After migrating more than 1,100 tables from a legacy data warehouse into Amazon Redshift, Nasdaq, Inc. is now implementing a fully integrated big data architecture that also includes Amazon S3, Amazon EMR, and Presto to securely analyze large historical data sets in a highly regulated environment. Drawing from this experience, Nasdaq, Inc. shares lessons learned and best practices for deploying a highly secure, unified big data architecture on AWS. Attendees learn:
    - Architectural recommendations to extend an Amazon Redshift data warehouse with Amazon EMR and Presto
    - Tips to migrate historical data from an on-premises solution and Amazon Redshift to Amazon S3, making it consumable
    - Best practices for securing critical data and applications leveraging encryption, SELinux, and VPC
    BDT313 - Amazon DynamoDB for Big Data
    NoSQL is an important part of many big data strategies. Attend this session to learn how Amazon DynamoDB helps you create fast-ingest and fast-response data sets. We demonstrate how to use DynamoDB for batch-based query processing and ETL operations (using a SQL-like language) through integration with Amazon EMR and Hive. Then, we show you how to reduce costs and achieve scalability by connecting data to Amazon ElastiCache for handling massive read volumes. We'll also discuss how to add indexes on DynamoDB data for free-text searching by integrating with Elasticsearch using AWS Lambda and DynamoDB Streams. Finally, you'll find out how you can take your high-velocity, high-volume data (such as IoT data) in DynamoDB and connect it to a data warehouse (Amazon Redshift) to enable BI analysis.
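    The free-text search integration mentioned above (DynamoDB Streams triggering AWS Lambda to index into Elasticsearch) might look roughly like the handler below. The Elasticsearch endpoint, index name, and key attribute are placeholders, and a production setup would also sign its requests.

```python
# Hedged sketch of the DynamoDB Streams -> AWS Lambda -> Elasticsearch path:
# index new or updated items for free-text search. Endpoint, index name, and
# the "id" key attribute are placeholders; requests are unsigned here.
import json
import urllib.request

ES_ENDPOINT = "https://search-demo.us-east-1.es.amazonaws.com"  # placeholder

def handler(event, context):
    for record in event["Records"]:            # one entry per stream record
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"]["NewImage"]
        doc_id = new_image["id"]["S"]           # assumes a string key named "id"
        # Strip DynamoDB type annotations ({"S": ...}) into a flat document.
        doc = {k: list(v.values())[0] for k, v in new_image.items()}
        req = urllib.request.Request(
            url=f"{ES_ENDPOINT}/items/_doc/{doc_id}",
            data=json.dumps(doc).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="PUT")
        urllib.request.urlopen(req)
```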
    BDT312 - Application Monitoring in a Post-Server World: Why Data Context Is Critical
    The move towards microservices in Docker, EC2 and Lambda points to a shift towards shorter lived resources. These new application architectures are driving new agility and efficiency. But they, while providing developers with inherent scalability, elasticity, and flexibility, also present new challenges for application monitoring. The days of static server monitoring with a single health and status check are over. These days you need to know how your entire ecosystem of AWS EC2 instances are performing, especially since many of them are short lived and may only exist for a few minutes. With such ephemeral resources, there is no server to monitor; you need to understand performance along the lines of computation intent. And for this, you need the context in which these resources are performing. Join Kevin McGuire, Director of Engineering at New Relic, as he discusses trends in computing that we've gleaned from monitoring Docker and how they'v
              Squadata to attend DMExco 2017

    Squadata at DMExco 2017: The data marketing solutions vendor Squadata will attend the DMExco trade show on 13 and 14 September 2017 in Cologne, to continue its European expansion and to present EasyDMP, its mid-market Data Management Platform. A trade show in line with Squadata's strategy: DMExco brings together the players […]

    This article, Squadata to attend DMExco 2017, first appeared on Ratecard.


              MicroStrategy Widgets 9.2.x Library        
    Overview
    MicroStrategy Widgets enhance understanding through advanced visualizations that highlight patterns and trends in each set of data. Built on the open and flexible architecture of MicroStrategy Report Services, widgets use the power of Adobe Flash to present data in an aesthetically pleasing and domain-specific manner. That means you can choose a widget that best suits your data and amplifies the impact of your dashboard.

    This library will continue to grow over time. Documentation is also provided with each widget to guide you in deploying and using the Widget.

    Widgets are available to any MicroStrategy Report Services customer as a part of their standard maintenance. The latest version of the widgets listed on this page is available only with the installation of MicroStrategy 9, Release 2 (9.0.1). However updates will be distributed via this web page when available. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    In addition, customers of the MicroStrategy SDK can learn how to build and deploy their own widgets by referring to the MicroStrategy Developer Library (MSDL) on the MicroStrategy Developer Zone (MSDZ). Access the latest release of the MSDL, and navigate to Visualization SDK->Building widgets and visualizations.

    The MicroStrategy Widgets development team welcomes your feedback. Please feel free to log enhancement requests and other suggestions for new or existing widgets.

     
    Table of contents
    Interactive Bubble Graph (improved)
    Waterfall (improved)
    Heatmap
    Data Cloud
    Funnel
    Fish Eye
    Bubble Grid
    Graph Matrix
    Microcharts
    Media
    RSS Reader
    Weighted List Viewer
    Gauge
    Time Series Slider
    Date Selection
    Interactive Stacked Graph




    Interactive Bubble Graph Widget, 9 Release 2  improved (Shipped with MicroStrategy Web 9, Release 2)
    The Interactive Bubble Graph widget is a bubble plot that allows you to visualize the trends of three different metrics for a set of attribute elements. Unlike a conventional bubble plot, this widget allows the visualization of the data through time by animating the graph over a time dimension. You can also drill into the components of a bubble by expanding it into its child elements; for example, you can expand a bubble that represents a region into its corresponding cities.

    What's new
    • Multi-select or zoom into a region of bubbles by creating a marquee around the elements of interest
    • Change the metrics that drive the axes and the bubble size on the fly
    • Render the graph in scatter mode, i.e. all bubbles have the same size
    • Embed the widget in a DHTML dashboard or a custom built Flash application
    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

    Waterfall Widget, 9 Release 2  improved (Shipped with MicroStrategy Web 9, Release 2)
    The Waterfall widget enables the user to see the contribution of a collection of metrics in a bar chart. Analysts can use the widget to perform what-if analyses on different aspects of the business by directly modifying the contribution amount of each measure. For example, end users can deploy the Waterfall widget to display financial income or cash flow statements in a graphical fashion and analyze the impact of the various components of the statement to the bottom line of their business.

    What's new
    • Additional series can be shown in the tooltip or rendered as bars
    • Final bar can be driven by a metric
    • When configured as a selector, a highlight appears around the bar that is clicked
    • Embed the widget in a DHTML dashboard or a custom built Flash application
    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Heatmap Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    A Heat Map widget is a combination of colored rectangles, each representing an attribute element, that allow you to quickly grasp the state and impact of a large number of variables at once. Heat Maps are often used in the financial services industry to review the status of a portfolio. The rectangles contain a wide variety and shadings of colors, which emphasize the weight of the various components.

    The widget’s rich collection of interactive controls enables you to discover trends, outliers, and other important information about your business.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

    Data Cloud Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Data Cloud widget displays attribute elements in various sizes to depict the differences in metric values between the elements. The widget is similar to a heat map in that it allows an analyst to quickly identify the most significant, positive, or negative contributions.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Funnel Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Funnel widget can be used for a wide variety of business purposes, including application management, click management, pipeline analyses for sales forecasts, and sales process analysis. The widget is a variation of a stacked percent bar chart that displays data that adds up to 100%. Therefore, it can allow analysts to visualize the percent contribution of sales data or the stages in a sales process to reveal the amount of potential revenue for each stage.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Fish Eye Selector, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Fish Eye Selector is an interactive style of selector that magnifies an item when you hover the cursor over it. This style of selector is useful because it allows you to choose from a large list of attribute elements, metrics, or images without having to see all of the elements, metrics, or images displayed at one time. Any item that you hover over or select remains magnified, while the remaining items are minimized. This selector can display text or images that represent the different attribute elements, metrics or panel names.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Bubble Grid Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Bubble Grid widget conveys information in such a way that an analyst can, at a glance, identify important trends or anomalies in data, relative to the total contribution of accompanying data. In the widget, metric values are plotted as bubbles of different colors and sizes; the colors and sizes of the bubbles represent the values of two distinct metrics on the widget template. Each bubble is displayed at the intersection of two different attribute elements.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Graph Matrix Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Graph Matrix widget allows you to quickly analyze various trends across several metric dimensions. You can use the widget to assess questions such as "How do actual sales compare to forecasted sales, by time and region?".

    The Graph Matrix widget consists of several area graphs that display actual values. Each area graph also has a line graph above it to show forecasted values. One graph is displayed for every combination of elements from the attributes on the widget template’s rows and columns. You can click any graph to zoom in for a more detailed view.
    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Microcharts Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Microcharts widget consists of one or more microcharts, which are compact representations of data that allow analysts to quickly visualize trends.

    Sparkline and Bar microcharts help you visualize trends by displaying a metric's current and historical values with respect to time. Bullet microcharts compare the value of one metric against another metric that typically represents a target value. One common example is comparing the year-to-date value of a metric to the annual target for that metric.

    The Microcharts Widget can display the information in various formats such as a grid, a scrolling ticker or a list of key performance indicators (KPI).

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Media Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Media widget allows you to present a variety of media, such as video, audio, images, or website content, on your dashboard. You can include media in the widget to provide background information about data or instructions on how to use the dashboard. You can also use the Media widget to simply enhance the look and feel of a dashboard.

    By using dynamic URLs driven by a selector in the dashboard, you can configure the Media widget to play a media file based on any attribute in your dashboard.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    RSS Reader Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The RSS Reader widget helps provide a 360-degree view of your business by allowing you to compare and contrast data in your dashboard with information from external news feed sources. The widget retrieves news from an RSS news feed and displays it alongside the other components of your dashboard. You can use these widgets on a dashboard to view and update your favorite RSS news feeds as you analyze grids, graphs, and other objects in the dashboard.

    By using dynamic URLs driven by a selector in the dashboard, you can configure the RSS Reader widget to automatically display news related to any attribute in your dashboard.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Weighted List Viewer Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    The Weighted List Viewer combines the data visualization techniques of thresholds and graphical weighting into a single visualization that displays how the items within a portfolio are performing. Much like a "heat map," it is intended to show, at a glance, the weighted performance of a portfolio. This could be a portfolio of stocks or the entries in a sales pipeline report.

    Thresholds in the widget highlight rows based on the value of a metric. The rows are also ordered automatically so that metrics that are performing well are at the top and metrics that are performing poorly are at the bottom. In addition, a stacked bar chart is included next to the grid; it indicates the relative contribution, or weight, of each row.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation. Other versions of the widgets that are compatible with previous versions of MicroStrategy are accessible using the links at the bottom of this page.

    Gauge Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    A Gauge widget is a simple status indicator that displays a needle that moves within a range of numbers displayed on its outside edges. This widget is designed to display the value of one or more metrics which can be represented by needles or markers on the face of the gauge. The Gauge widget is most useful when combined with a selector because this allows users to choose specific metric values to display in the gauge.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

    Time Series Slider Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    A Time Series Slider widget is an area or line graph that allows a document analyst to choose which section of the graph to view at a time. The widget consists of two related graphs, the controller and the primary graph. You use the slider on the controller to select a portion of the time series, which determines the range of data visible in the primary graph.

    Time series datasets are often long and require analysis from both a macro and micro view. Therefore, the time series slider widget displays the micro level view on the primary graph while keeping the context of the macro level view on the controller graph at the top.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

    Date Selection Widget, 9 Release 2   (Shipped with MicroStrategy Web 9, Release 2)
    A Date Selection widget is a calendar selector that allows you to select which dates you want to see data about in a document. You are able to see all of the dates of each month in the widget, which allows you to be able to select dates more easily. You can use the Date Selection widget to view data corresponding to a single day or to a range of days. Dates that have no data within the dataset will appear disabled on the calendar.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

    Interactive Stacked Graph Widget, 9 Release 2  (Shipped with MicroStrategy Web 9, Release 2)
    An Interactive Stacked Graph widget presents a combination of a stacked area graph and an interactive legend. The graph allows you to see the contribution of various metric series to the change in value of a larger set of data. You can use the interactive legend to determine which elements make up the total graph and which ones are currently highlighted as stacked areas.

    This widget allows you to visualize total metric values as one large area graph, and the individual pieces of that total as smaller stacked areas within the large area graph. You can quickly analyze how the individual parts make up the whole, which is useful when making percent-to-total comparisons.

    Note: This widget requires MicroStrategy 9, Release 2. This widget and its documentation are only available with the MicroStrategy Web 9, Release 2 installation.

              The Lowdown On The Business Of Personal Data        
    In 2012, the Boston-area bicycle sharing program Hubway launched a contest in which they released anonymous data about when and from and to where users rented bicycles. They invited people to come up with interesting ways to use that information in the Hubway Data Visualization Challenge. My colleague, Harvard professor Latanya [...]
              Eyeo Festival Day 2        
    Day two of the Eyeo Festival in Minneapolis. To recap, this is an event that focuses on the intersection between art, technology and design, with a strong emphasis on data. See also: Workshop | Day 1 | Day 3. Kim Rees – The Data Future. Kim Rees, periscopic.com, @krees. Kim works for Periscopic, a data visualization agency
              IoT Devices and Big Data Offer Path to Better Population Health        

    New data streams from IoT medical devices and wearables may help patients and doctors in the fight against chronic disease, including diabetes.

    Population health management is turning out to be a hotspot where data collected from IoT sensors has the potential to bring broad rewards. With a bevy of IoT-based fitness and certified medical devices available in 2017, data streams from wearables and health monitoring devices can be used alongside clinical data to provide doctors with a clear, near real-time picture of a patient’s health.  

    Once this type of data is integrated into the clinical workflow and dashboard for physicians, it will have a major impact on disease management and treatments, especially for chronic diseases such as diabetes.

    Seeing Diabetes More Clearly

    Diabetes management is one of the more expensive healthcare challenges for patients, doctors and healthcare networks. About 347 million people worldwide have diabetes, according to WHO, and the cost of treating diabetes is estimated to be $500 billion worldwide.

    IoT devices that help with diabetes management and patient monitoring are hitting the market in force in 2017. One example is the Google and Novartis/Alcon smart contact lens, which was first announced in 2014 and is now in the development stage at Verily, which is Alphabet Inc.’s research organization devoted to life sciences. The design was recently granted a patent, according to Time, and the lens will soon be ready for human testing.

    The contact lens contains an embedded wireless chip and a miniaturized glucose sensor and antenna that can rapidly measure blood sugar levels for people with diabetes. The radio antenna is thinner than a human hair. The chip and sensor are embedded between two layers of contact lens material, and a tiny pinhole in the lens lets tear fluid from the eye reach the glucose sensor.

    The sensor can measure levels every second, which is key. Google points out that sudden fluctuations in blood sugar cause the worst outcomes for diabetes patients, such as heart disease, strokes and nerve damage. The contact lens will provide an easier, frequent and noninvasive way to monitor blood sugar levels.

    Google smart lens

    Photo: Google

    Physicians who have weighed in on the Google contact lens technology note that the largest benefit is being able to offer patients a pain-free alternative to either pricking their fingers or using a continuous glucose monitor. The company has not yet specified how the data from the lens will be transmitted.

    Data-Driven Approaches

    Other companies are focused on improving diabetes management via patient monitoring and real-time data analysis. By using apps to monitor adherence to drug and treatment regimens, physicians can detect trends or changes in the health of a diabetic. One example of this type is the app created by SAP and Roche Diabetes Care that enables doctors to follow the progress of their prediabetes patients in real time.

    The app works with Accu-Chek view, a healthcare management program and kit that includes blood testing as well as fitness tracking tools for patients. The app utilizes the SAP HANA Cloud Platform to enable doctors to monitor and analyze current patient data via a dashboard on their tablets or computers.

    With the meters and sensors in place, if a patient’s indicators and parameters change, the health expert or physician receives an alert and can send messages or set new goals for the patient. They can also send a request to schedule an appointment to discuss further treatments. Sharing the data online with the doctor allows for constant management and provides the diabetic patient with a feeling of collaborative care. 

    A similar solution for diabetes management is in the works from Cognizant and Kaiser Permanente, who have developed a prototype remote patient-monitoring system based on Microsoft Azure IoT services. The system connects the medical device, such as a glucose reader in a patient’s home, to a smartphone. Functioning as a gateway device, the smartphone sends data to the cloud for integration with an existing analytics and data visualization program used in Kaiser Permanente data centers, hospitals and clinics. Clinicians can access the data via a dashboard for a near real-time view of a patient’s health.

    “You are reducing the cost of care, because there’s far less expense in having the patient record their vitals remotely from home, instead of coming to the clinic where a nurse or doctor physically collects the data,” says Mehul Shah, associate director of product engineering, Cognizant Technology Solutions, in a statement.

    These new devices and sensors have the potential to simplify health monitoring, which could result in lower costs and fewer unexpected trips to the ER. For millions of diabetics, the Internet of Things might be just what the doctor ordered.    

    Find out more about the smart lens technology and other healthcare IoT sensors on the Verily blog. Read the Cognizant report, Transform Patient Care with the Internet of Things. 


              Podcast #28: Rational Geographic — Map Chat with Aaron Straup Cope        
    https://trackchanges.postlight.com/podcast-28-rational-geographic-map-chat-with-aaron-straup-cope-b0006e8e8fc5
    The history and the future of geotagging: this week Paul Ford and Rich Ziade talk to Aaron Straup Cope, a programmer who works with maps and geographical datasets. The conversation covers his time as one of Flickr’s earliest employees, data visualization, gazetteers, the evils of Wal-Mart, geocoding (and reverse geocoding), and one of the most controversial decisions in online mapping — Google’s decision to cut off the poles and make the world a square.
              Infographics of The Week #15        


    As you may know by now, Inspired Magazine is a sucker for data visualization. That’s why we invited Tony Shin – a social media ninja and creative design samurai – to curate the weekly dose of infographics. If you like them as much as we do, check out some of the older editions and follow...

    This post Infographics of The Week #15 was written by Tony and first appeared on Inspired Magazine.


              Samyang T-S 24mm f3.5 ED AS UMC Tilt/Shift lens review        



    With their range of movements, tilt and shift lenses offered in various focal lengths for 35mm full-frame DSLRs have become indispensable for architecture, interiors, still-life, food and product photography. Before Canon redesigned their film-era 24mm version with an improved optical design and, uniquely, added a user-selectable option of aligning the tilt function with the shift movement, these lenses were quite reasonably priced.

    Just four years ago, Canon offered the three focal lengths (24, 45 and 90mm) at £899 each, suggesting that these were marketed as a ‘loss leader’ to entice users to switch from Nikon. At that time, with just one model in the range, an 85mm f/2.8D lacking automatic aperture control, Nikon was lagging behind.

    The firm was soon to refresh the 85mm, while adding a 24mm and 45mm each with electronic automatic aperture control (a first for Nikon), identified by the PC-E designation. As with the earlier Canon TS-E models, they lack the option to tilt and shift in the same plane, prompting some users to call these lenses shift and swing. While you can specify the movements to be aligned at the factory when ordering new, or retrospectively via the subsidiary for a fee, it’s not exactly flexible if the user wants to switch back and forth regularly.

    While Canon has yet to upgrade the 45 and 90mm models to include this sought-after feature, the upgraded version, the EF 24mm f/3.5L TS-E, now retails at just over £1,700, while the less capable Nikon 24mm f/3.5D ED PC-E is just shy of £1,500.

    Third-party offerings are limited to three Schneider Kreuznach models, which start at £2,800 for the 90mm but increase dramatically to £5,400 for the 28mm. Crucially though, these can tilt while shifting, like the new Canon models.

    However, ROK-based Samyang is the first to offer an accessibly priced 24mm f/3.5, at £950 inc VAT and in a number of mounts, including Sony A and Pentax K as well as the usual Nikon and Canon. The manual claims Sony E, Samsung NEX, MFT and even Fujifilm X-mount, but these have yet to be seen. The optical construction is promising with 16 elements in 11 groups, of which two elements use ED glass and two adopt aspherical surfaces. However, movements are still quite conservative (though similar to rivals) with ±8.5 degrees of tilt, and ±12mm of shift.

    As with others in the firm's range, the Samyang lacks autofocus, obviously, and any automatic aperture control. In fact, there are no mechanical or electronic interfaces on the lens mount, so there’s no lens data exchanged (or EXIF data visible in post). The Schneider models are the same in that respect. Most cameras don't have a problem with stopped-down metering, but having to stop the lens down manually may be an issue: it's simply all too easy to forget, especially if you have already worked with the Canon and newer Nikon equivalents with their electronic aperture control.

    Build quality is good rather than great. The body, including the tilt unit and shift plate, is made from an aluminium alloy, but the plastic aperture collar seems rather cheap. On a short-term loan, it's impossible to say just how well it stands up to professional use. From a quick look inside the throat of the lens, the rack and pinion teeth seem sturdy, but the same can also be said of the Canon models and they're known to break (usually when trying to make an adjustment while the mechanism is locked).

    The lens has no hood, which is a pity as the front element is both barely recessed and heavily convex. To its credit, it's largely free of flare on the Canon EOS-1Ds Mark III I used for testing, but it is highly prone to ghosting. Patches are small, but it's worth shielding the lens at all times if shooting even vaguely towards the sun.

    While one of the less visually interesting stitched panoramas taken with the Samyang, this particular image was chosen for the presence of ghosting. It's a fairly common phenomenon with this lens, necessitating effective shielding and just one of a couple of reasons that dictate the use of a tripod pretty much exclusively.


    A small depth-of-field scale is included, though this is largely redundant on today's high-res digital bodies. With the relatively short throw of the manual focus collar, especially between infinity and 1m, focus accuracy is critical. My Canon focus screen is usually accurate enough for manual focusing at this maximum aperture, but I had a number of poorly focused images when handheld. And that's despite owning two Canon TS-E lenses (one a 24mm) and having experience of using virtually every other model for DSLRs, including the Hasselblad HTS adaptor. I can really only conclude that tethering or focusing by live view (or EVF if you have it) is essential.

    As with the Nikon and Canon models, the Samyang adopts knurled controls to adjust the movements and has smaller versions of the same positioned 180 degrees apart on the outer casing to lock them. These are all made of plastic and are quite small. They're also fiddly to use when the movements and their associated controls are 90 degrees apart, let alone when the tilt option is aligned with the shift movement.

    The much more expensive (and much larger) Schneider models avoid this scenario completely by adopting locking collars and by duplicating markings on the barrel, which may account in some part for the additional price.

    The slim profile of the Samyang’s controls is a necessity, to stop them obstructing each other, but it is not the lens's only shortcoming. More of an issue is that the movements are slack: once unlocked, the barrel is free to move and more often than not simply drops under gravity. This alone makes it almost impossible to use without error when hand-held, something that I do regularly with my own TS-E lenses. Locked down on a tripod it's a different story, but it's an unnecessary complication that's avoided with the Canon and Schneider models. With that proviso, the Samyang is sharp centrally wide open, but optimal performance isn't achieved until stopped down to f/5.6-8. Some slight fringing is visible on high-contrast edges if you look carefully, but it's negligible and easily removed in post.

    For me, personally, the Samyang's inability to reliably hold tilt and shift movements while making adjustments for occasional hand-held use is disappointing. However, if it's to be used exclusively on a tripod, as is often the case, the Samyang can be recommended. It will certainly be attractive to Sony full frame users, where the EVF and focus peaking of the Sony SLT-A99 will be a huge advantage over the OVFs in the current Nikon and Canon models.

    US Links


    B&H in New York at $999.

    Adorama at $999.

    Amazon at $859 (Branded as Rokinon)

    UK Links


    WEX at £949

    Amazon UK at £813

              Diveboard & GBIF team-up for the first citizen science platform monitoring marine species        

    GBIF, the Global Biodiversity Information Facility, announced today that Diveboard has joined its network to become the first citizen science platform providing user-generated data that helps monitor the evolution of the marine ecosystem.

    What does it mean?
    It means that every time someone logs a dive and marks down the species they've encountered, scientific data will be published to GBIF for scientists to use in support of their research.

    Why does it matter?
    Marine biologists always need data to support their research, and conducting large-scale surveys is always complicated. By empowering recreational divers with the proper tools we can help them make a difference and visualise & measure phenomena impacting the ecosystem.
    Here's an example of data visualisation made from Diveboard's data set on the lionfish invasion.


    Spread the word!
    You are already part of this, but you can help make a difference by making sure you tell your buddies to log their dives on Diveboard!
    Also, if you know of specific research going on in your area that you would like to see supported through Diveboard, make sure to let us know - we'll be really happy to integrate and support them.
              Senior Reporting Analyst, Marketing Analytics        
    MA-Boston, Background: Organization seeks a Senior Reporting Analyst within its Marketing Analytics group. The Senior Analyst will collaborate with various stakeholders to build out reports / dashboards that meet reporting requirements. Additionally, the candidate will work with various data sources, utilize SQL and leading BI tools to build out reports and provide data visualizations on marketing campaign d
              WholeCellViz – Data Visualization for Whole-cell Models        
    WholeCellViz :: DESCRIPTION WholeCellViz is a web-based software program for visually analyzing whole-cell simulations. ::DEVELOPER Karr Lab :: SCREENSHOTS N/A :: REQUIREMENTS Web browser :: DOWNLOAD WholeCellViz :: MORE INFORMATION Citation BMC Bioinformatics. 2013 Aug 21;14:253. doi: 10.1186/1471-2105-14-253. WholeCellViz: data visualization for whole-cell models. Lee R1, Karr JR, Covert MW.
              iDEA Challenge 2011: Illumina’s Data Excellence Award        
    Illumina is holding a data visualization contest to promote the development of new ideas for visualizing high throughput sequencing data. I think ExpressionPlot fits the bill well since it makes it easy for all biologists to create the types of plots necessary for interpreting their RNA-Seq data and comparing it with other data sets. Key […]
              Tableau Support Services        
    If your goal is to become a data scientist, a basic first step is to learn Tableau. Your internal customers (marketing, sales, operations) will most likely be using Tableau or something similar. So what exactly is Tableau? Tableau is a Business Intelligence (BI) tool that can help you create beautiful and visually appealing reports, charts, graphs and dashboards using your data. This data visualization software is extremely fast and easy to use, as it has a drag-and-drop interface. Tableau helps you visually analyze your big data; it can be a great catalyst in solving the representation of data and can be very helpful in finding solutions to a lot of problems. You only have to invest a few weeks in learning Tableau to take another precious step towards your success. Contact Information 209, Phase-4 Udyog Vihar, Gurgaon INDIA Mobile: 9810083546 India: 98110 50802 Landline : +91-124-4088844 USA: 408 844 3702 Email : sf@rowalim.com
              Seeing is believing in the fight against climate change        

    In 2005, more than a thousand acres of land in my hometown in the Santa Cruz mountains were under threat from a proposed logging contract that would have severely damaged our ecosystem by tearing down ancient Redwoods, increasing potential fire danger and endangering public safety. As part of the community group Neighbors Against Irresponsible Logging, I used Google Earth to build a flyover of the area to show how closely this logging would take place to residential life, and the dangers it would create. Making geographic data visible and easily intelligible helped to bring together the community to defeat the logging proposal. Seeing is often believing.

    That’s the core mission behind Google Earth. We aim to build the most detailed and realistic digital replica of our changing planet and make it universally accessible to the public—a utility for all. We’re trying to fix what former Vice President Al Gore, in his speech on the Digital Earth, called the challenge of “turning raw data into understandable information.”

    Emerging technologies like our own Google Earth Engine and Google Cloud Machine Learning, and artificial intelligence in general are doing just that: empowering scientists and practitioners to create solutions at the cutting edge of global sustainability, and turning the mountains of geo-data we have into the insights and knowledge needed to guide better decision-making. This work helps drive adoption of renewable energy technologies such as solar, and allows us to better understand and manage the world’s forests, oceans, water and air.

    Our team had the chance to sit down with former Vice President Al Gore to discuss the roles of data, tools and technology in solving the climate crisis.

    We’re grateful to leaders like Al Gore, and all who act as stewards of our shared planetary home. The last decade has seen immense technological progress—and we'll continue to work on data and tools to guide us to a more sustainable world.


              Updated Wiki: Home        

    Overview

    Open Diagram is the definitive open source .NET diagramming component library for the .NET Framework WinForms environment.

    Use Open Diagram to add interactive data visualizations to your .net applications.

    Knowledge of design patterns such as Model-View-Controller is highly recommended.

    The latest version of the source is available from the Source Code tab. Examples are located in the Crainiate.Diagramming.Testing.Forms and the Crainiate.Diagramming.Examples.Forms projects.

    Licensing

    Open Diagram is now fully open source using the permissive BSD licence.

    Getting Started

    View the videos showing how to open the Open Diagram solution in Visual Studio and run the examples and test forms:

    Open Diagram June 2010 Part 1
    Open Diagram June 2010 Part 2

    Previous Versions

    Version 4.1 of Open Diagram is now available in source control including examples and tutorial code.

              Case study - MultiScaleHuman, a multi-scale biological data visualization & knowledge management system.        
    MultiScaleHuman is a multi-scale biological data visualization and knowledge management system developed with ZK to help improve the understanding, diagnosis and treatment of physiological human articulation. Read
     

              Tableau online training classes        
    Tableau is one of the fastest-evolving Business Intelligence (BI) and data visualization tools. It is very fast to deploy, easy to learn and very intuitive for a customer to use. Here is a learning path for all those people who are new to Tableau. This path will help you learn Tableau in a structured way, and beginners are recommended to follow it religiously. Visit us at: http://www.eratrainings.com/course/tableau-online-training/ Contact us : Era Trainings US :: +1 972 646 8090 India :: +91 9966881730
              Data Visualization Consultant - Neustar, Inc. - McLean, VA        
    Neustar, Inc., complies with applicable state and local laws prohibiting discrimination in employment and provides reasonable accommodation to qualified...
    From NeuStar, Inc. - Thu, 10 Aug 2017 16:59:15 GMT - View all McLean, VA jobs
              Data Analyst - Tableau Expert        
    CA-Newport Beach, RESPONSIBILITIES: Kforce has a client that is searching for a Data Analyst - Tableau Expert in Newport Beach, California (CA). REQUIREMENTS: Hands on expert in Tableau development that can build and deploy complex dashboards and visualizations Experience consulting with developmental groups to build and enhance data visualization or analytical applications; able to define solutions that are consis
              A Map to Perfection: Using D3.js to Make Beautiful Web Maps        
    Data Driven Documents, or D3.js, is an awesome data visualization library. In this article, I'll discuss one particularly compelling application of D3.js: map making. We'll go through the common challenges of building a useful and informative web map, and show how in each case, D3.js gives you everything you need to make your map look and feel beautiful.
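
    As a flavour of the kind of pattern the article discusses, here is a minimal, hypothetical sketch (not code from the article) that draws one SVG path per GeoJSON feature through a projection fitted to the viewport; the data URL, element id and colours are assumptions.

        // Minimal web-map sketch: assumes a GeoJSON FeatureCollection is served at /data/world.json
        // and that an <svg id="map" width="800" height="450"> element is already on the page.
        import * as d3 from "d3";

        const width = 800;
        const height = 450;

        d3.json<any>("/data/world.json").then((world) => {
          // Fit a Mercator projection to the drawing area, then turn features into path data
          const projection = d3.geoMercator().fitSize([width, height], world);
          const path = d3.geoPath(projection);

          // The core D3 mapping pattern: one <path> element per GeoJSON feature
          d3.select("#map")
            .selectAll("path")
            .data(world.features as any[])
            .enter()
            .append("path")
            .attr("d", path as any)
            .attr("fill", "#cfd8dc")
            .attr("stroke", "#455a64");
        });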
              3D Data Visualization with Open Source Tools: A Tutorial Using VTK        
    How do we understand and interpret the huge amounts of data coming out of simulations? How do we visualize potential gigabytes of datapoints in a large dataset? In this article I will give a quick introduction to VTK and its pipeline architecture, and go on to discuss a real-life visualization example.
              Mastering colours in your data visualisations        

    I’ll be the first to admit that I am terrible at colours, be it the selection of paint for a room through to the colours of an Excel chart. I simply chose the ones that I liked without much regard to everyone else. It’s natural for me to think “well I know what I’m talking…

    The post Mastering colours in your data visualisations appeared first on International Blog.


              I am Maryse Bourgault, and This is How I Work        
    Today, I have the pleasure of interviewing Dr. Maryse Bourgault. Maryse is an Assistant Professor at Montana State University Northern Agricultural Research Center in Cropping Systems and Agronomy. Her aim is to help improve the productivity, profitability and sustainability of agricultural systems in Montana and other dry areas through diversification of cropping systems, in particular using pulse crops (field peas, lentils, chickpeas) in rotation in cereal-based cropping systems. She is a crop physiologist by training and uses these methodologies to bridge the gap between field- and farm-level productivity and genetic improvement for drought and cold tolerance. She previously conducted research on the effects of elevated CO2 in Australia at the Australian Grains Free Air CO2 Enrichment (AGFACE) and at CSIRO with the Climate Adaptation Flagship program. Maryse graduated with a Ph.D. from McGill University (Montreal, Canada) in 2009.

    Current job: Assistant Professor, Cropping Systems Agronomy
    Current Location: Havre, MT
    Current mobile device: iPhone (but I don’t know if the next one will be)
    Current computer: DELL

    Please explain your current situation and research to us.
    I have just started as an Assistant Professor at Montana State University Northern Agricultural Research Station. My research direction is yet to be defined, but I am particularly interested in using crop physiology to support breeding by variety/advanced lines characterization. I am also interested in testing new crops for increased diversification for benefits in soil health, reduced disease pressure, etc.

    What tools, apps, or software are essential to your work flow?

    For better or for worse, the Microsoft Office suite. We use the Outlook calendar as a group quite a bit, and it allows us to see what everyone is up to.

    For statistics and data visualization and making graphs, I use R.

    I also like FreeMind, a mind-mapping tool, although sometimes my whiteboard is just as good.

    What does your workspace look like?

    I used to like to alternate between offices (home, lab, library even), but I am increasingly appreciative of a fixed workplace. It allows me to compartmentalize work and leisure activities/rest. I find it helps me with my work-life balance. This said, I do not get interrupted much at work, so it does not break my flow when trying to write (or I put a sign on my door saying “Trying to write; enter at your own risk” – which knowing me people know it is a joke (I won’t bite their heads off), but they also know not to bother me for too long). I would like to have my desk clean and uncluttered, but I tend to have unread papers in various piles (something I’m working on…).



    What is your best advice for productive academic work?

    What has helped me a lot is to have a regular schedule. I know not everyone is like this, and of course during the field season, this does not apply, but at least when I am in the office, I do an 8 to 5 day. And I focus on work and only work during this time. If I know I’ll work on the weekend, then I start doing other things (paying bills, checking things out on the net, etc.) and I don’t feel guilty, because, hey, I’ll be pulling more hours this week. However, I find that I get more distracted, and I might do more hours, but I am not convinced I achieve more in the end.

    How do you keep track of projects and tasks?

    Every month, I take some time to reflect on my achievements of the last month, I plan the following month, I think about what might go wrong, and I identify priorities. And I write it all down. I have a special lab book for this. I have been doing this for the last nine years, since mid-way through my PhD, and it has worked great for me. I still underestimate how long it might take for tasks, but I’m getting better at it. I will sometimes have another look at it if something crops up that I didn’t expect. At other times, especially if I feel overwhelmed by everything that needs to be done, I use it just to remind myself of what really needs to be done, and what can wait a bit more.

    Besides phone and computer, do you use other technological tools in work and daily life?

    I do have a tablet, mostly for reading, but honestly, I still use a lot of paper and ink. My “agenda” (besides what I need to share with others which ends up on our shared calendar) consists of blank monthly calendar sheets I print off Outlook and I fill these by hand (usually in pencil, so I can erase). My lab books are all paper and ink – often I’ll print graphs off and paste them in. The paper calendar helps me count days better (for example when growing plants), and I find the paper lab books are better when trying to find something. It might take a bit longer, but I think it also allows me more time to think.

    Which skills make you stand out as an academic?
    I have checked with colleagues about this, and the first thing that I was told was that I was personable and friendly and that I could talk with just anyone. It’s important when dealing with farmers to be approachable, and I think it has helped me a lot to get this new job.

    What do you listen to when you work?

    I used to listen to Sarah McLachlan, Enya, Missy Higgins, Norah Jones… easy listening, soothing music, or classical such as Chopin and Mozart when reading or writing. In the field, anything goes! In the last few years though, I have found that when reading or writing, I prefer to work without music at all as I find it distracting.

    What are you currently reading? How do you find time for reading?

    I do read a lot both at work and at home. I have lived without a television at home for years, listening to the news on the radio and catching some programs on the internet only. So, I read a lot of fiction for entertainment, crime thrillers, sci-fi fantasy, romance, drama, etc. At work, I will sometimes book specific time slots for reading, especially as part of a literature review for a project proposal and when I start a manuscript. I also take time during lunch to read the Societies magazines and trade magazines.

    Are you more of an introvert or extrovert? How does this influence your working habits?

    This is a little hard to say because every test I have done puts me a little toward one or the other, so I think I am mostly in the middle. I do tend to feel a little lonely if I am working from home for too long, but I also get a bit overwhelmed with everyone at conferences. For the everyday routine though, it is good. I’m happy to be around people without talking to them much during the day, but I’m also happy to have a good long (productive) chat with someone once in a while as well.

    What is your sleep routine like?

    I sleep a lot! I love sleeping! LOL. I like to get to bed at 10 pm and get up around 6 or 6:30 am, although I often get to bed a little later (and end up throwing myself out of bed at 7:30…). I often sleep until noon on Saturdays. It also happens on Sunday, but I try to get up earlier so I’m not turning in bed for hours on Sunday night trying to get to sleep.

    What is your work routine like?
    I like to follow the routine of everyone else around me. It sort of forces me to get to work in the morning – and to get home not too late at night. These days, I’m working 8 to 5 pm. During field work, we can pull long days and then it is a different story. This said, I’m not a workaholic. I really like to focus on work during office hours, and do other stuff during evenings and weekends.

    What is the best advice you ever received?
    It’s important to know what you want in life. Things will not always go your way, and no one ever only does what they want, but at least when the going gets tough, you don’t have to go adrift as well. There might be situations where the choice is to stay or leave, and if you know what you want, you can make that choice and not feel victimized.
              Top 5 jQuery UI Alternatives        

    When building for the modern web you often need to create useful UI components. Whether you need a calendar, slider, graph or anything else that's useful for improving or simplifying user interaction, your options are either to create it yourself or leverage existing functionality.

    Developing these components yourself is often time-consuming and complex, and unless you're doing something entirely unique it's often not a great use of your time. That is where UI libraries and frameworks come into play. These libraries simplify the process of creating common UI components. You can leverage these existing frameworks and customize them to suit your needs.

    One of the largest and most widely used frameworks is jQuery UI. It's an extended set of widgets, effects, and themes built on top of jQuery and separated into its own set of components. You can download all of the jQuery UI elements in a single bundle, or you can pick and choose the components and functionality you're interested in. Using a collection like this lets you create a consistent appearance for your components and lets you get up and running with a minimum of fuss.

    While jQuery UI works great and is a good go-to option, there are other frameworks out there that boast amazing, high-quality controls. In this article I will analyze a few of these to see how they stack up.

    Kendo UI

    Paid Framework

    The Kendo UI Framework provides a series of over 70 components useful for speeding up your development. These components are responsive, themeable, fast and highly customizable.

    Kendo UI Framework Example

    There are several things to love about Kendo UI and how it can help you create awesome interactive elements.

    Firstly, these components are built from the ground up by Telerik to be fast. Unlike some other frameworks, these widgets have been built from scratch in plain JS and don't require jQuery at all. The components themselves feel fast, smooth and solid, even when viewed on mobile devices.

    While we're talking about mobile devices, this is another area where Kendo UI shines. The components are built with mobile devices in mind, providing a responsive and adaptive layout depending on where they are viewed. Most widgets adjust accordingly and change their controls depending on whether you're on a mobile device. It's a great feature. Here's the Slider component; it automatically adjusts based on your screen size.


    From an implementation standpoint, these controls are well thought out. Developers can either set them up in JS or have them configured server-side (e.g. output via PHP). Besides the web aspects of Kendo UI, there are also branches of this framework that can be used for Android and iOS (in case you want to use them in your apps).
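
    As an illustration of the set-them-up-in-JS route just mentioned, here is a minimal, hypothetical sketch of initialising one widget (the Slider) through the jQuery-flavoured widget API; the element id and option values are assumptions, and a real project would use the official Kendo UI typings rather than the untyped stand-in below.

        // Minimal sketch, assuming jQuery and the Kendo UI scripts are already loaded
        // on the page and that the markup contains <input id="volume" />.
        declare const $: any; // stand-in for the jQuery global; use proper typings in a real project

        $(() => {
          // kendoSlider turns the plain input into a slider via Kendo's configuration-object pattern
          $("#volume").kendoSlider({
            min: 0,
            max: 100,
            smallStep: 5,   // increment for arrow keys and the +/- buttons
            largeStep: 20,  // increment for page up/down; also controls tick spacing
            change: (e: { value: number }) => {
              console.log(`Volume set to ${e.value}`);
            },
          });
        });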

    Another thing that's interesting is the integration with Angular. It's a fairly complete system with UI elements created from scratch to perfectly match your Angular project. If you're looking to leverage Angular, it's nice to know that Kendo has embraced it and that, moving forward, you should have support.

    Kendo UI Angular sample image

    One thing to note is that Kendo UI isn't a free framework. It's a fully commercial library that can cost you up to several thousand dollars, depending on your licensing needs. This by itself might be enough to scare some developers away; however, the quality and support that you receive are what you're paying for.

    Webix

    Free / Paid Framework

    Webix provides developers with a quick and easy way to get started building common UI elements. It comes with a range of data visualization, layout, navigation and editing controls. While this framework shares some components with jQuery UI (calendars, accordions, dialogues etc), Webix extends and goes beyond what you can normally get with jQuery UI.

    Webix main page UI example

    The documentation you get is impressive. All of the controls come with an API reference guide which outlines all of the control's methods, properties, and events. In addition, most controls will have links to several samples, showing you exactly how the control functions. Having friendly documentation is really important so it's great to see they put time into their docs.

    API documentation for the Webix Calendar control

    Continue reading "Top 5 jQuery UI Alternatives"


              Raytheon acquires analytics business        
    DULLES, Va. -  Raytheon Company (NYSE: RTN) has acquired a privately held company, Visual Analytics Incorporated, further extending Raytheon's capabilities to meet the data analytics, data visualization and information sharing needs of its customers. Terms of the transaction were not disclosed. As one of the largest processors of data for the intelligence community, Raytheon has extensive experience handling large data sets and providing actionable information to its customers. The acquisition of Vi...
              Data Dashboard Offers Real-Time Energy Stats        
    One of the beautiful things about the otherwise esoteric open data movement is that it lays the groundwork for data visualizations that are both pretty and informative. “Big data” in its raw form is incomprehensible, but with tables and charts and graphs and other tools, you can illustrate facts and trends in...
              Open-ended, Open Science        

    In this special guest post, Rob McIntosh, associate editor at Cortex and long-time member of the Registered Reports editorial team, foreshadows a new article type that will celebrate scientific exploration in its native form.

    Exploratory Reports will launch next month and, now, we need your input to get it right.

    Chris has kindly allowed me to crash his blog, to publicise and to gather ideas and opinions for a new article type at Cortex. The working name is Exploratory Reports. As far back as 2014, in his witterings and twitterings, Chris trailered a plan for Cortex to develop a format for open-ended science, a kind of louche, relaxed half-cousin to the buttoned-up and locked-down Registered Reports. Easier tweeted than done. We are now preparing to launch this brave new format, but even as we do so, we are still wrestling with some basic questions. Does it have a worthwhile role to play in the publishing landscape? Can it make a meaningful contribution to openness in science? What should its boundaries and criteria be? And is there a better name than Exploratory Reports?

    Visitors to this blog will have a more-than-nodding familiarity with misaligned incentives in science, with the ‘deadly sin’ of hidden flexibility, and with the damage done to reliability when research conducted in an open-ended, see-what-we-can-find way, is written into the record as a pre-planned test of specific hypotheses. No one doubts that exploratory research has a vital role to play in empirical discovery and hypothesis generation, nor that it can be rigorous and powerful (see recent blog discussions here and here). But severe problems can arise from a failure to distinguish between exploratory and confirmatory modes of enquiry, and most perniciously from the misrepresentation of exploratory research as confirmatory.

    A major driver of this misrepresentation is the pervasive idealisation of hypothesis-testing, throughout our scientific training, funding agencies, and journals. Statistical confirmation (or disconfirmation) of prior predictions is inferentially stronger than the ‘mere’ delineation of interesting patterns, and top journals prefer neat packages of strong evidence with firm impactful conclusions, even if our actual science is often more messy and… exploratory. Given a more-or-less-explicit pressure to publish in a confirmatory mode, it is unsurprising that individual scientists more-or-less-wittingly resort to p-hacking, HARKing, and other ‘questionable research practices’.

    Regulars of this blog will need no further education on such QRPs, or on the mighty and multi-pronged Open Science movement to reform them. Still less will you need reminding of the key role that study pre-registration can play by keeping researchers honest about what was planned in advance. Pre-registration does not preclude further exploration of the data, but it keeps this clearly distinct from the pre-planned aspects, eliminating p-hacking, HARKing, and several other gremlins, at a stroke. The promise of enhanced truth value earns pre-registered studies an Open Practices badge at a growing number of journals, and it has even been suggested that there should be an automatic bonus star in the UK Government’s Research Excellence Framework (where stars mean money).

    This is fine progress, but it does little to combat the perceived pre-eminence of confirmatory research, one of the most distorting forces in our science. Indeed, a privileged status for pre-registered studies could potentially intensify the idealisation of the confirmatory mode, given that pre-registration is practically synonymous with a priori hypothesis testing. A complementary strategy would therefore be for journals to better value and serve more open-ended research, in which data exploration and hypothesis generation can take precedence over hypothesis-testing. A paper that is openly exploratory, which shows its working and shares its data, is arguably as transparent in its own way as a pre-registered confirmatory study. One could even envisage an Open Practices badge for explicitly exploratory studies. 

    Some journal editors may believe that it is typically inappropriate to publish exploratory work. But this is not the case at Cortex, where the field of study (brain-and-behaviour) is relatively uncharted, where many research questions are open-ended (e.g. What are the fMRI or EEG correlates of task X? What characterises patient group Y across test battery Z?), and where data collection is often costly because expensive technologies are involved or a rare or fleeting neuropsychological condition is studied. It is hard to estimate how much of the journal’s output is really exploratory because, whilst some authors have the confidence to make exploratory work explicit, others may still dress it in confirmatory clothing. If publication is their aim, then they are wise to do so, because the Action Editor or reviewers could be unsympathetic to an exploratory approach.

    Hence, a new article type for exploratory science, where pattern-finding and hypothesis generation are paramount, and where the generative value of a paper can even outweigh its necessary truth value. A dedicated format is a commitment to the centrality of exploratory research in discovery. It also promotes transparency, because the incentives to misrepresentation are reduced, and the claims and conclusions can be appropriate to the methods. Some exploratory work might provide strong enough evidence to boldly assert a new discovery, but most will make provisional cases, seeding testable hypotheses and predictions for further (confirmatory) studies. The main requirements are that the work should be rigorous, novel, and generative.

    Or that is the general idea. The devil, as ever, is in the detail. Will scientists – as authors, reviewers and readers - engage with the format? What should exploratory articles look like, and can we define clear guidelines for such an open-ended and potentially diverse format? How do we exercise the quality control to make this a high-status format of value to the field, not a salvage yard for failed experiments, or a soapbox for unfettered speculation? Below, a few of the questions keeping us awake at night are unpacked a little further. Your opinions and suggestions on these questions, and any aspect of this venture, would be most welcome. 

    1. Scope of the format. At the most restrictive end, the format would be specific for studies that take an exploratory approach to open-ended questions. Less restrictive definitions might allow for experimental work with no strong a priori predictions, or even for experiments that had prior predictions but in which the most interesting outcomes were unanticipated. At the most inclusive end, any research might be eligible that was willing to waive all claims dependent upon pre-planning. Are there clear boundaries that can be drawn? 

    2. Exploration and review. A requirement for submission to this format will be that the full data are uploaded at the point of submission, sufficient to reproduce the analyses reported. To what extent should reviewers, with access to the data, be allowed to recommend/insist that further analyses, of their own suggestion, should be included in the final paper? 

    3. Statistical standards. Conventional significance testing is arguably meaningless in the exploratory mode, and it has even been suggested that this format should have no p-values at all. There will be a strong emphasis on clear data visualisation, showing (where feasible) complete observations. But some means of quantifying the strength of apparent patterns will still be required, and it may be just too radical to exclude p values altogether. When using conventional significance testing, should more stringent criteria for suggestive and significant evidence be used? More generally, what statistical recommendations would you make for this format, and what reporting standards should be required (e.g. confidence intervals, effect sizes, adjusted and non-adjusted coefficients etc.)? 

    4. Evidence vs. theory. Ideally, a good submission presents a solid statistical case from a large dataset, generating novel hypotheses, making testable predictions. The reality is often liable to be more fragmentary (e.g. data from rare neuropsychological patients may be limited, and not easily increased). Can weaker evidence be acceptable in the context of a novel generative theoretical proposal, provided that the claims do not exceed the data? 

    5. The name game. The working title for this format has been Exploratory Reports. The ambition is to ‘reclaim’ the term ‘exploratory’ from a slightly pejorative sense it has acquired in some circles. Let’s make exploration great again! But does this set up too much of an uphill struggle to make this a high-status format; and is there anyway a better, fresher term (Discovery Reports; Open Research)?

    Rob will oversee this new article format when it launches next month. Please share your views in the comments below or you can feed back directly via email to Rob or Chris, or on twitter.

              Comment on THE SILENT COSMOS: The Extinction Cascade by Watching the cosmos. | Cwill        
    [...] THE SILENT COSMOS: The Extinction Cascade (cosmos5000.wordpress.com) Eco World Content From Across The Internet. Featured on EcoPressed Mental Health and Climate Change Share the love:Like this:LikeBe the first to like this post. This entry was posted in Life and tagged Cosmos, Data Visualization, Diagram, Flash, Physics, Space, Time, Visual Explanation, XML, Zodiac by Chris W.. Bookmark the permalink. [...]
              Shipping, efficiency and emissions        
    We are told that shipping is the most efficient mode of transport. Basically, the energy consumption and emissions from shipping are supposedly so low that they should not be seen as a problem. Or are they?

    Well, like many other discussions, the reality is more complex. While it is very efficient per ton-km, shipping still consumes a lot of energy. A single large container ship can use 200 tonnes of diesel per day, which means that the daily emissions attributable to one sailor are more than the yearly emissions of two Americans.
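
    That comparison is easy to sanity-check with some rough, assumed figures (about 3.2 tonnes of CO2 per tonne of marine fuel burned, a crew of around 20, and US per-capita emissions of roughly 16 tonnes of CO2 per year); the back-of-the-envelope sketch below uses those assumptions and is not taken from the original post.

        // Back-of-the-envelope check of the "one sailor vs two Americans" comparison.
        // All constants are rough assumptions, not figures from the post above.
        const fuelPerDayTonnes = 200;      // large container ship, tonnes of fuel burned per day
        const co2PerTonneFuel = 3.2;       // approx. tonnes of CO2 per tonne of marine fuel
        const crewSize = 20;               // typical crew of a large container ship
        const usPerCapitaCo2PerYear = 16;  // approx. tonnes of CO2 per person per year

        const shipCo2PerDay = fuelPerDayTonnes * co2PerTonneFuel;   // ~640 t CO2 per day
        const co2PerSailorPerDay = shipCo2PerDay / crewSize;        // ~32 t CO2 per sailor per day

        // ~2: each day at sea roughly matches the annual footprint of two Americans
        console.log(co2PerSailorPerDay / usPerCapitaCo2PerYear);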

    The International Maritime Organization (IMO) says sea shipping makes up around 3% of global CO2 emissions, slightly less than the annual emissions of Japan, the world’s 5th-highest emitting country. And on current trends, CO2 emissions from ships will increase by up to 250% in the next 35 years and could represent 14% of total global emissions by 2050. These figures don't include all the infrastructure needed for shipping: the harbours and the ships themselves. And of course most goods moved by ship will in any case be loaded onto trucks at a few major harbours. Maritime shipping seems to provide a typical example of Jevons paradox.

    Unfortunately, maritime transportation is not covered by the Paris Agreement or included in national greenhouse gas inventories, so it is largely forgotten or disregarded in the climate debate.

    Kiln have produced this fantastic interactive map showing movements of the global merchant fleet over the course of 2012. It gives you an idea....


              What is data visualisation?        

    This may seem an odd question: surely it just means turning data into a visually accessible form such as charts and diagrams? However, people have been drawing charts for hundreds of years and Edward Tufte published the seminal book “The Visual Display of Quantitative Information” way back in 1983; so why has the term ‘data […]

    The post What is data visualisation? appeared first on Tobias & Tobias.


              Things to Do with Words: Illustrations from Italian Fascism (1919-1922) and Georgia lynchings (1875-1930) [Video]        
    Speaker(s): Professor Roberto Franzosi | This talk will illustrate the power of Quantitative Narrative Analysis, a quantitative social science approach to texts developed by the speaker, using data collected from newspapers on the rise of Italian fascism and lynchings in the American 'Deep South'. It will show how narrative data lend themselves to cutting-edge tools of data visualization and analysis such as dynamic network graphs and maps in Google Earth and other GIS software, and how QNA data provide the basis for fascinating digital humanities projects. Roberto Franzosi is professor of sociology and linguistics at Emory University.
              Highly Interactive Software with Java and Flex        
    This morning at TSSJS, I attended James Ward's talk about Highly Interactive Software with Java and Flex. Below are my notes from his talk.

    Applications have moved from mainframes (hard to deploy, limited clients) to client/server (hard to deploy, full client capabilities) to web applications (easy to deploy, limited clients) to rich internet applications (easy to deploy, full client capabilities).

    Shortly after showing a diagram of how applications have changed, James showed a demo of a sample Flex app for an automobile insurance company. It was very visually appealing, kinda like using an iPhone app. It was a multi-form application that slides right-to-left as you progress through the wizard. It also allowed you to interact with a picture of your car (to indicate where the damage happened) and a map (to indicate how the accident happened). Both of these interactive dialogs still performed data entry, they just did it in more of a visual way.

    Adobe's developer technology for building RIAs is Flex. There are two different languages in Flex: ActionScript and MXML. ActionScript was originally based on JavaScript, but now (in ActionScript 3) uses features from Java and C#. On top of ActionScript is MXML. It's a declarative language but, unlike JSP taglibs, all you can do with MXML is instantiate objects and set properties. It's merely a convenience language, but it also allows tooling. The open source SDK compiler takes Flex files and compiles them into a *.swf file. This file can then be executed using the Flash Player (in browser) or AIR (desktop).

    The reason Adobe developed two different runtimes was because they didn't want to bloat the Flash Player. Once the applications are running client-side, the application talks to the web server. Protocols that can be used for communication: SOAP, HTTP/S, AMF/S and RTMP/S. The web server can be composed of REST or SOAP Web Services, as well as BlazeDS or LC Data Services to talk directly to Java classes.

    To see all the possible Flex components, see Tour de Flex. It contains a number of components: core components, data access controls, AIR capabilities, cloud APIs, data visualization. The IBM ILOG Elixir real-time dashboard is particularly interesting, as is Doug McCune's Physics Form.

    Next James showed us some code. He used Flex Builder to create a new Flex project with BlazeDS. The backend for this application was a JSP page that talks to a database and displays the results in XML. In the main .mxml file, he used <s:HTTPService> with a URL pointing to the URI of the JSP. Then he added an <mx:DataGrid> and the data binding feature of Flex. To do this, he added dataProvider="{srv.lastResult.items.item}" to the DataGrid tag, where "srv" is the id of the HTTPService. Then he added a Button with click="srv.send()" and set the layout to VerticalLayout. This was a simple demo to show how to hook in a backend with XML.

    To show that Flex can interact with more than XML over HTTP, James wrote a SOAP service and changed <s:HTTPService> to <s:WebService> and changed the "url" attribute to "wsdl" (and adjusted the value as appropriate). Then rather than using {srv.lastResult.*}, he had to bind to a particular method and change it to {srv.getElements.lastResults}. The Button's click value also had to change to "srv.getElements(0, 2000)" (since the method takes 2 parameters).

    After doing coding in Flex Builder, James switched to his Census to compare server-execution times. In the first example (Flash XML AS), most of the time was spent gzipping the 1MB XML file, but the transfer time is reduced because of this. The server execution time is around 800ms. Compare this to the Flex AMF3 example where the server execution time is 49ms. This is because the AMF (binary) protocol streamlines the data and doesn't include repeated metadata.

    To integrate BlazeDS in your project, you add the dependencies and then map the MessageBrokerServlet in your web.xml. Then you use a services-config.xml to define the protocol and remoting-config.xml to define what Java classes to export as services. To use this in the Flex aplication, James changed <s:WebService> to <s:RemoteObject>. He changed the "wsdl" attribute to "endpoint" and added a "destination" attribute to specify the name of the aliased Java class to talk to. Next, James ran the demo and showed that he could change the number of rows from 2,000 to 20,000 and the load time was still much, much faster than the XML and SOAP versions.

    There's also a Spring BlazeDS Integration project that allows you to simply annotate beans to expose them as AMF services.

    BlazeDS also includes a messaging service that you can use to create publishers and subscribers. The default channels in BlazeDS use HTTP Streaming and HTTP Long Polling (comet), but they are configurable (e.g. to use JMS). There's also an Adobe commercial product that keeps a connection open using NIO on the server and has a binary protocol. This is useful for folks that need more real-time data in their applications (e.g. trading floors).

    I thought this was a really good talk by James. It had some really cool visual demos, and the coding walkthrough was interesting in showing how easy it is to switch between different web services and protocols. This afternoon, I'll be duking it out with James at the Flex vs. GWT Smackdown. If you know of deficiencies in Flex you'd like me to share during that talk, please let me know.

              Kabbage Hires Chief Technology Officer and Chief Data Officer        

    Former ACI Worldwide and eBay Executives Join the Kabbage Team to Capitalize on Technology and Data Vision

    (PRWeb November 11, 2016)

    Read the full story at http://www.prweb.com/releases/2016/11/prweb13839220.htm


              Sales dashboard application         
    The sales dashboard illustrates how to design and build a data visualization application as both a mobile app and a web application through a shared codebase.
              Table2Visualization : Rendering Google Visualization Charts with HTML Tables        
    With the advent of the analytics age, applications spring up every month that let you track, monitor, and analyze just about anything, from human habits to system behaviors. Such applications generally follow a standard implementation pattern wherein they collect, upload, process, and display data. Somebody developing such an application will obviously face challenges of different kinds at each of these points, depending upon the domain. However, the challenge of displaying the data in a way that is understandable to non-professionals is the same for all of them.

    Oftentimes, we rely upon statistical charts and graphs to render such data. These days there are plenty of charting libraries that make this easy (some of them at no cost), for example Visualize, MooCharts, ProtoChart etc. One such player in the world of data visualization is Google itself.

    Google Visualization is a set of JavaScript libraries that offer a wide range of data representation options with an easily pluggable API. It enables a developer to render statistical charts and graphs from a wide range of data sources. However, as a developer, I always felt that a very simple and basic data source was missing from that exhaustive list: HTML tables. My searches across the web for an answer either ended in vain or led to a complex process that involved importing the data into a Google Docs spreadsheet and then rendering a chart from there. In simple words, there was nothing “Simple and Sweet”.

    So, here I had a problem in hand and decided to solve it for myself and all my fellow developers around the world. The implementation is now available on Google Code at the following location, with a sample to help you jump-start:

    http://code.google.com/p/table2visualization
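
    For readers who just want the gist, here is a minimal, hypothetical sketch of the general approach (read label/value pairs out of an HTML table, build a Google Visualization DataTable, draw a chart). It uses the public Google Charts loader API rather than the project's own code, and the element ids and two-column table layout are assumptions.

        // Minimal sketch of charting an HTML table with Google Visualization.
        // Assumes the Google Charts loader script is on the page, plus a two-column
        // <table id="sales"> (label, numeric value) and an empty <div id="chart">.
        declare const google: any; // provided by the Google Charts loader script

        google.charts.load("current", { packages: ["corechart"] });
        google.charts.setOnLoadCallback(() => {
          const rows: Array<[string, number]> = [];

          // Pull label/value pairs straight out of the table body
          document.querySelectorAll<HTMLTableRowElement>("#sales tbody tr").forEach((tr) => {
            rows.push([tr.cells[0].textContent ?? "", Number(tr.cells[1].textContent)]);
          });

          // Build a DataTable and hand it to a chart bound to the target element
          const data = new google.visualization.DataTable();
          data.addColumn("string", "Label");
          data.addColumn("number", "Value");
          data.addRows(rows);

          new google.visualization.ColumnChart(document.getElementById("chart"))
            .draw(data, { legend: { position: "none" } });
        });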


              New Tools Make Complex Data Visible to the Naked Eye        
              DC: Focus on the Future        


    The 7th and final part of the Asset Class ‘Back to the Future’ series. Having opened our Defined Contribution business a year earlier, Tara Gillespie reflects on our approach to helping DC members in the 2016 edition.

    ********************


    Focus on the Future

    What we said...

    The theme of our 2016 edition of Asset Class was focusing on the future and starting with the end in mind.

    For DB Schemes specifically, this was about understanding the end-goal and the most secure way of getting there. It could be full funding on a prudent self-sufficiency basis with the aim of managing the scheme into run off. Or it could be targeting buy-out and transferring responsibility to an insurance provider.

    The journey of a DB Scheme can be split into opening, middle and end stages. The right investment strategy, hedge ratio and risk tolerance for a Scheme is defined by where they are in the journey and what their ‘end-game’ target is. Starting with the end in mind continues to be core to our business.

    ...and what happened

    All of the strategies we reviewed in the 2016 edition of Asset Class are still used by our clients. They are assessed by our Investment Committee every month.

    Zooming out from DB pensions, our focus on the future has seen the growth of our DC business. The 2016 edition of Asset Class featured our first ‘DC Special’.

    This looked at how we believe the industry should be thinking about DC in the following areas:

         1. Starting with the end in mind – as with DB schemes, always think about what the end result should be for DC members.
         2. Target a Specific Outcome – such as minimum income in retirement.
         3. Investment Strategy Design – getting the default strategy right for your members to ensure they progress towards their goals in the most effective way.
         4. Empower Your Members – using the EAST framework by making communications Easy, Attractive, Social and Timely to help your members best help themselves.

    The fundamentals of what we said last year still stands true today. However, we have developed our thinking and tools, and can share our experience of putting these ideas into action.

    Starting with the end in mind


    The DC savings journey

    Like DB pension schemes, DC savers also have an opening, middle and end stage to their journey (see below).
     

    [Image: the opening, middle and end stages of the DC savings journey]


    And in each stage, the appropriate asset mix, risk tolerance and return requirements should be considered relative to members' income-at-retirement needs, for example achieving the living pension at retirement.

    The reason this philosophy is so fundamental is that not taking enough risk in the opening stage is as bad for the outcome as taking too much risk in the end stage. Only by thinking ahead can you, and your members, make informed decisions about today.


    Target Members’ Outcomes – Importance of Value-for-Money (VfM)

    The tangible outcome in terms of income at retirement is the primary objective for a DC saver. However, to achieve the best outcomes members must be getting the best value.

Assessing and achieving VfM is fundamental to DC investment strategy design. We aren't just talking expensive = bad, and cheap = good. There is a broad range of factors that feed into whether a strategy is achieving VfM, including expected return, risk and objectives. All of this needs to be considered in the context of what the member is actually looking to achieve. One man's meat is another man's poison and all that.

    We have designed a clear framework to help Trustees quickly digest their scheme’s data and easily assess all factors to determine whether a default strategy is offering VfM.


    Investment Strategy Design – No One-Size-Fits-All

    Central to any DC offering is the default fund. However, no two people are the same. So why do we think the investment needs of an entire population of employees can be met by a single default fund? What if you could offer your members a personalised default strategy that better meets their needs? You can!

    Working with an online financial advice platform, we have designed and implemented a personalised default strategy approach for our members. Through technology we believe DC will become increasingly personalised to help achieve the best outcomes for people, not averages.


    Empowering your Members – How to apply the EAST Framework

    The EAST framework is a powerful tool for encouraging action. It continues to be our mantra when it comes to encouraging people to save earlier and save more. However, EAST means different things to different people. The best communication approach will depend on their generation, location, wealth and a variety of other factors.

    Using powerful data visualisation, you can focus your communication efforts in the right places at the right time. We can bring member data to life so you can easily identify who is off track, where they are and how best to target them with Easy, Attractive, Social and Timely messages.
     


     


              Data visualization        
    Data visualization tools create graphic representations of business data to make information understandable and meaningful. By doing so, they can bring insight to your marketing strategies and shed light on trends, patterns, and possible changes that affect your business. These insights change with growing technology. The modern visualization tools make viewing data simpler, more accessible, and…
              Use of Icons in Diagrams in Thesis - Do I need to reference them?        

I am a paid subscriber to an online service that lets you download icons. As a paid subscriber, the service tells me I don't need to attribute the illustrator of any icons I download (I've essentially bought them); free versions are meant to be attributed to the illustrator.

If I use these icons to create my own diagram for my thesis, what are the referencing conventions? Do I indeed not reference the original illustrator? Or, as it's academic work, should I reference anyway?


              Comment on The Case for Beautiful Science by British Library’s Beautiful Science exhibit of data visualization leads to Vancouver, Canada’s Martin Krzywinski, scientist and data visualizer | FrogHeart        
    […] previous excerpt is Johanna Kieniewicz and the Beautiful Science exhibition’s curator. In the Feb. 13, 2014 posting on her ‘At the Interface’ blog, where she discusses the exhibit she also makes it clear […]
              Visualization for Fraud Detection        
    Data visualization is being used for detecting fraud, especially with respect to wire and credit card transactions. Work done at the Charlotte Visualization Center at UNC Charlotte provides some interesting insights into fraud detection. This work was conducted in collaboration with the Bank of America.In the following paper they highlight four visualization techniques that allow […]
              Interactive visualization at your fingertips        
    With today’s release of Tableau Public, Tableau Software has opened up infinite possibilites for researchers, corporations and enthusiasts alike to interact, explore and play with their data. More importantly, with Tableau Public one can now have ‘interactive’ visualizations online as opposed to static images. This is a step in the right direction for Data Visualization […]
              Redesigned Visualizations        
    Lately, we have been seeing a high number of ‘bad’ visualizations in media. Over at Infosthetics, they even had a contest to identify the ‘Most Ugly and Useless Infographic‘. It was worth a few chuckles but it definitely made one realize the importance of effective data visualization. It is unfortunate that some people have to […]
              Telling the Library Story with Data        
    Do you want to demonstrate the impact of your organization? Would you like to take library data and turn it into a story that resonates with stakeholders? Are you interested in designing effective data visualizations, including charts and infographics? Join instructor Linda...
              Choosing an Infographic Style: Data Visualizations        

    Infographics are a powerful option for making large quantities of information easy to understand. Visitors don’t always have time to read through an entire article, but they can scan and[...]

    The post Choosing an Infographic Style: Data Visualizations appeared first on CopyPressed.


              Comment on GEPHI – Introduction to Network Analysis and Visualization by @Hanumanum        
    Martin Grandjean » Digital humanities, Data visualization, Network analysis » GEPHI – Introduction to Network Anal… https://t.co/zgIMIy67pJ
              Comment on Network visualization: mapping Shakespeare’s tragedies by @Hanumanum        
    Martin Grandjean » Digital humanities, Data visualization, Network analysis » Network visualization: mapping Shake… https://t.co/JZIjIsiAXX
              Comment on Data-art challenge: visualize millions of chess moves by @Hanumanum        
    Martin Grandjean » Digital humanities, Data visualization, Network analysis » Data-art challenge: visualize millio… https://t.co/6dOi8mtmEw
              Comment on [Data Visualization] More Americans killed by guns since 1968 than in all U.S. wars by Bürgerkriegsähnliche Zustände in den USA: Normales Großstadt-Wochenende mit über 50 Schussopfern | volksbetrug.net        
[…] the shocking scale is made clear by one statistic: between 1968 and 2015, at more than 1.5 million, more US citizens died at home from gunshots than […]
              Comment on [Data Visualization] More Americans killed by guns since 1968 than in all U.S. wars by @zonkerabf        
    [Data Visualization] More Americans killed by guns since 1968 than in all U.S. wars https://t.co/n7O95tM2No
              Infoviz for the people: Mass media mentions        
    Increasingly, it seems, mass media outlets are talking up infoviz. Great news for us here at Synoptical Charts, but more than that, helpful for people in businesses that demand clear, concise and logical communication. In other words, everybody.

    Today's installment, from Forbes.com:
    ...[W]hile Hadoop may be the poster child of Big Data, there are other important technologies at play. In addition to Hadoop, the open source framework for distributing data processing across multiple nodes, these include massively parallel data warehouses “that deliver lightening [sic] fast data loading and real-time analytic capabilities,” as the report states; analytic platforms and applications that allow Data Scientists and business analysts to manipulate Big Data; and data visualization tools that bring insights from Big Data analysis alive for end-users.
    Big Data is Big Market & Big Business - $50 Billion Market by 2017



              Resource Recommendation: an "illustrated chronology of innovations"        
    Michael Friendly and Daniel J. Denis have a wonderful interactive timeline on milestones in the theory and practice of data visualization. Be prepared to spend a lot of time there; it's a deep well.

    Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization
              Installation of Interest: Data Communication/Visualization        
    Check out Lauren Manning's installation/survey about data visualization methods.

    She has created and mounted 40-odd versions of a big yet easy-to-understand data set. Viewers can show her which versions attracted them most strongly, got them thinking, and so forth by marking up "experience cards" that show the array in miniature.

    My own faves tend to be those that illustrate the proportions of different foods in fresh ways (mostly in the Abstract/Complex quadrant of her matrix), rather than just showing images and labeling them with numbers (the Simple/Literal quadrant).

    Among those I like best:

    Food by Line Weight
    Concentric Circles
    Shaded Box Chart
    Stacked Bar Chart and Mini Months

    At the same time, I found a few of the formats hard to grasp; one such is the Rainbow Diagram Full Circle. I don't understand the purpose of the connections or the meaning of the line width. It needs a legend, at the very least.

    Still, the photoset/installation overall is very much worth a look. Well done, Ms. Manning.
              Practicing scales        
As a response to Kai Krause's Africa map, Jeffrey Winter shows us just how small Vatican City really is.


              Super Fancy Sexy (again)        

[Image: Data Visualization Hierarchy]

Those of you who know me or read my blog know that I am a strong advocate of visualization in Business Intelligence. I honestly believe that using images, charts, or other visual stimuli greatly helps users of a BI environment to quickly understand the data presented and work with it. I also think that great-looking reports, dashboards or other BI products will be accepted much more quickly by business users if they just look nice. I often call this (within our Capgemini practice): it has to look super fancy sexy.

Stephen Few is somebody who understands this and has written some great books about it as well (see http://www.perceptualedge.com/). Another believer is Hans Rosling with his Gapminder explorations (http://www.gapminder.org/). More recently I came across this website: http://www.informationisbeautiful.net/. This is a nice website where you will soon discover that data journalist David McCandless takes data visualization seriously. His Tetris-like animation comparing the financial crisis with, let's say, the fight against hunger in Africa is brilliant.

Definitely something to check out if you have a couple of minutes. I strongly urge all BI professionals to work on a visual intelligence strategy and framework.




              Security Analytics - Visualization - Big Data Workshop Black Hat 2017        


    VISUAL ANALYTICS – DELIVERING ACTIONABLE SECURITY INTELLIGENCE


    BlackHat 2017 - Las Vegas


    Big Data is Getting Bigger - Visualization is Getting Easier - Learn How!
    Dates: July 22-23 & 24-25
    Location: Las Vegas, USA

    SIGN UP NOW


    OVERVIEW

Big data and security intelligence are two of the hottest topics in security. We are collecting more and more information, both from the infrastructure and, increasingly, directly from our applications. This vast amount of data is getting harder and harder to understand. Terms like MapReduce, Hadoop, Spark, Elasticsearch, data science, etc. are part of many discussions. But what are those technologies and techniques? And what do they have to do with security analytics and intelligence? We will see that none of these technologies is sufficient on its own in our quest to defend our networks and information. Data visualization is the only approach that scales to the ever-changing threat landscape and infrastructure configurations. Using big data visualization techniques, you uncover hidden patterns in data, identify emerging vulnerabilities and attacks, and respond decisively with countermeasures that are far more likely to succeed than conventional methods, something that is increasingly referred to as hunting. Attendees will learn about log analysis, big data, information visualization, and data sources for IT security, and will learn how to generate visual representations of IT data. The training is filled with hands-on exercises utilizing the DAVIX live CD.
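To give a flavour of the hands-on work, here is a minimal sketch (not taken from the course material) of the kind of first-pass visualization such exercises build towards: it loads a hypothetical firewall log in CSV form - the file name and column names are assumptions - and plots the most frequently blocked destination ports.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical firewall export; assumed columns: timestamp, src_ip, dst_ip, dst_port, action
log = pd.read_csv("firewall.csv", parse_dates=["timestamp"])

# Keep only denied connections and count them per destination port
blocked = log[log["action"] == "deny"]
top_ports = blocked["dst_port"].value_counts().head(15)

# A simple horizontal bar chart already surfaces unusual port activity
top_ports.sort_values().plot(kind="barh")
plt.xlabel("Blocked connections")
plt.ylabel("Destination port")
plt.title("Top blocked destination ports")
plt.tight_layout()
plt.show()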



    What's New?

    The workshop is being heavily updated over the next months. Check back here to see a list of new topics:

    • Security Analytics - UEBA, Scoring, Anomaly Detection
    • Hunting
    • Data Science
    • 10 Challenges with SIEM and Big Data for Security
    • Big Data - How do you navigate the ever growing landscape of Hadoop and big data technologies? Tajo, Apache Arrow, Apache Drill, Druid, PrestoDB from Facebook, Kudu, etc. We'll sort you out.


    SYLLABUS

    The syllabus is not 100% fixed yet. Stay tuned for some updates.

    Day 1:

    Log Analysis

    • Data Sources Discussion - including PCAP, Firewall, IDS, Threat Intelligence (TI) Feeds, CloudTrail, CloudWatch, etc.
    • Data Analysis and Visualization Linux (DAVIX)
    • Log Data Processing (CSVKit, ...)

    SIEM, and Big Data

    • Log Management and SIEM Overview
    • LogStash (Elastic Stack) and Moloch
    • Big Data - Hadoop, Spark, ElasticSearch, Hive, Impala

    Data Science

    • Introduction to Data Science
    • Introduction to Data Science with R
    • Hunting

    Day 2:

    Visualization

    • Information Visualization History
    • Visualization Theory
    • Data Visualization Tools and Libraries (e.g., Mondrian, Gephi, AfterGlow, Graphiti)
    • Visualization Resources

    Security Visualization Use-Cases

    • Perimeter Threat
    • Network Flow Analysis
    • Firewall Visualization
    • IDS/IPS Signature Analysis
    • Vulnerability Scans
    • Proxy Data
    • User Activity
    • Host-based Data Analysis



    Sample of Tools and Techniques

    Tools to gather data:

    • argus, nfdump, nfsen, and silk to process traffic flows
    • snort, bro, suricata as intrusion detection systems
    • p0f, npad for passive network analysis
    • iptables, pf, pix as examples of firewalls
    • OSSEC, collectd, graphite for host data

    We are also using a number of visualization tools to analyze example data in the labs:

    • graphviz, tulip, cytoscape, and gephi
    • afterglow
    • treemap
    • mondrian, ggobi

    Under the log management section, we are going to discuss:

    • rsyslog, syslog-ng, nxlog
    • logstash as part of the elastic stack, moloch
    • commercial log management and SIEM solutions

    The section on big data is covering the following:

• hadoop (HDFS, map-reduce, HBase, Hive, Impala, ZooKeeper)
• search engines like Elasticsearch and Solr
    • key-value stores like MongoDB, Cassandra, etc.
    • OLAP and OLTP
    • The Spark ecosystem


    SIGN UP

    TRAINER

    Raffael Marty is vice president of security analytics at Sophos, and is responsible for all strategic efforts around security analytics for the company and its products. He is based in San Francisco, Calif. Marty is one of the world's most recognized authorities on security data analytics, big data and visualization. His team at Sophos spans these domains to help build products that provide Internet security solutions to Sophos' vast global customer base.

    Previously, Marty launched pixlcloud, a visual analytics platform, and Loggly, a cloud-based log management solution. With a track record at companies including IBM Research, ArcSight, and Splunk, he is thoroughly familiar with established practices and emerging trends in the big data and security analytics space. Marty is the author of Applied Security Visualization and a frequent speaker at academic and industry events. Zen meditation has become an important part of Raffy's life, sometimes leading to insights not in data but in life.


              $7.2B: BIG NUMBERS FOR BIG DATA        
    From our friends at Washington Technology: New research from Deltek Inc. pegs federal government spending on big data at $7.2 billion by 2017. Currently, big data spending stands at $5 billion. The market research firm defines big data spending as hardware, software and services that are used for advanced analytics programs, data visualization and data [...]
              Good Charts and Data Visualization        
    Good Charts by Scott Berinato is published by Harvard Business Review Press so it understandably uses examples relating to the corporate world (power outages, customer complaints and revenue, for example). However, it is written in such an accessible way that our students will definitely benefit from many of its suggestions on how to best present and read graphs.  Berinato “speaks” to the reader and asks numerous questions and shows many charts (both good and bad), encouraging interaction with them and the data they display. Sections with titles like “When A Chart Hits Our Eyes” or “Getting into Their Minds: Storytelling” further encourages the reader to think critically about the reasons for sharing data and how to best do so, often needing to follow Berinato’s mantra to “deconstruct and reconstruct.” I look forward to sharing Good Charts by Scott Berinato with Social Studies, Business and even Art teachers and classes.  If you would like to see more right now, read Berinato's "Visualizations that Really Work" online from the June 2016 issue of Harvard Business Review.

Visual Literacy is an area that we are increasingly exploring in the library profession. This 2015 Knowledge Quest article, for example, explores the possible connections with Math, Science and English Language Arts, and this Edutopia post from 2014 offers several strategy suggestions. More recently, Journalist’s Resource has published an article, “Getting Started with Data Visualization,” that we have recommended to our newspaper classes, along with several database (Statista) and open (Pew Research Center) sources, on this Classlinks page. If you have other ideas to suggest, please let us know.
     

I will leave you with a chance to reflect upon a couple of charts; one leads to Berinato's article and the other, from the Knowledge Quest article, illustrates commonalities amongst disciplines -- where the library can often be a leverage point:


    https://hbr.org/2016/06/visualizations-that-really-work


    http://knowledgequest.aasl.org/wp-content/uploads/2015/06/Graph.jpg


              Review of Learning IPython for Interactive Computing and Data Visualization        

    valuable but traditional

May 25, 2013 by Catherine Devlin. 4 stars (of 5).

Packt Publishing recently asked if I could review their new title, Learning IPython for Interactive Computing and Data Visualization. (I got the e-book free for doing the review, but they don't put any conditions on what I say about it.) I don't often do reviews like that, but I couldn't pass this one up because I'm so excited about the IPython Notebook.

    It's a mini title, but it does contain a lot of information I was very pleased to see. First and foremost, this is the first book to focus on the IPython Notebook. That's huge. Also:

    • The installation section is thorough and goes well beyond the obvious, discussing options like using prepackaged all-in-one Python distributions like Anaconda.
• Some of the improvements IPython can make to a programming workflow are nicely introduced, like the ease of debugging, source code inspection, and profiling with the appropriate magics (see the short session sketched after this list).
    • The section on writing new IPython extensions is extremely valuable - it contains more complete examples than the official documentation does and would have saved me lots of time and excess code if I'd had it when I was writing ipython-sql.
    • There are introductions to all the classic uses that scientists doing numerical simulations value IPython for: convenience in array handling, Pandas integration, plotting, parallel computing, image processing, Cython for faster CPU-bound operations, etc. The book makes no claim to go deeply into any of these, but it gives introductory examples that at least give an idea of how the problems are approached and why IPython excels at them.
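As a taste of the workflow magics mentioned above, here is a short, illustrative IPython session; the function being timed and profiled is invented for the example, and only the magics themselves come from IPython.

In [1]: %timeit sum(x * x for x in range(10000))   # micro-benchmark an expression

In [2]: def slow_sum(n):
   ...:     total = 0
   ...:     for i in range(n):
   ...:         total += i * i
   ...:     return total

In [3]: %prun slow_sum(1000000)                    # function-level profiling report

In [4]: slow_sum??                                 # source code inspection

In [5]: %debug                                     # open the debugger on the last exception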

    So what don't I like? Well, I wish for more. It's not fair to ask for more bulk in a small book that was brought to market swiftly, but I can wish for a more forward-looking, imaginative treatment. The IPython Notebook is ready to go far beyond IPython's traditional core usership in the SciPy community, but this book doesn't really make that pitch. It only touches lightly on how easily and beautifully IPython can replace shell scripting. It doesn't get much into the unexplored possibilities that IPython Notebook's rich display capabilities open up. (I'm thinking of IPython Blocks as a great example of things we can do with IPython Notebook that we never imagined at first glance). This book is a good introduction to IPython's uses as traditionally understood, but it's not the manifesto for the upcoming IPython Notebook Revolution.

    The power of hybrid documentation/programs for learning and individual and group productivity is one more of IPython Notebook's emerging possibilities that this book only mentions in passing, and passes up a great chance to demonstrate. The sample code is downloadable as IPython Notebook .ipynb files, but the bare code is alone in the cells, with no use of Markdown cells to annotate or clarify. Perhaps this is just because Packt was afraid that more complete Notebook files would be pirated, but it's a shame.

    Overall, this is a short book that achieves its modest goal: a technical introduction to IPython in its traditional uses. You should get it, because IPython Notebook is too important to sit around waiting for the ultimate book - you should be using the Notebook today. But save space on your bookshelf for future books, because there's much more to be said on the topic, some of which hasn't even been imagined yet.



              WOMBAT MeDaScIn 2017        
    Last year we had WOMBAT (Workshop Organized by the Monash Business Analytics Team) at the zoo, and MeDaScIn (Melbourne Data Science Initiative) in the city. This year we are combining forces to hold WOMBAT MeDaScIn 2017. There will be four days of tutorials (Monday 29 May to Thursday 1 June), and the main conference on Friday 2 June. We have an impressive range of local and international presenters including Yihui Xie (author of Rmarkdown, Knitr, Bookdown, Blogdown and more), Di Cook (data visualization guru), Stephanie Kovalchik (Data Scientist at Tennis Australia), Amy Shi-Nash (Head of Data Science at Commonwealth Bank of Australia), Graham Williams (Director of Data Science at Microsoft) and many more.
              Statistics positions available at Monash University        
    We are hiring again, and looking for people in statistics, econometrics and related fields (such as actuarial science, machine learning, and business analytics). We have a strong business analytics group (with particular expertise in data visualization, machine learning, statistical computing, R, and forecasting), and it would be great to see it grow. The official advert follows. The Department of Econometrics and Business Statistics at Monash Business School in Melbourne, Australia, invites applications for full-time tenure-track positions at the Senior Lecturer level (equivalent to North American/European Assistant Professor with some post-doctoral academic experience) and Associate Professor level.
              ACEMS Business Analytics Prize 2016        
    We have established a new annual prize for research students at Monash University in the general area of business analytics, funded by the Australian Centre of Excellence in Mathematical and Statistical Frontiers (ACEMS). The rules of the award are listed below. The student must have submitted a paper to a high quality journal or refereed conference on some topic in the general area of business analytics, computational statistics or data visualization.
              Google workshop: Forecasting and visualizing big time series data        
Workshop for Google, Mountain View, California. Topics: automatic algorithms for time series forecasting; optimal forecast reconciliation for big time series data; visualization of big time series data.
              Exploring the feature space of large collections of time series        
Workshop on Frontiers in Functional Data Analysis, Banff, Canada. It is becoming increasingly common for organizations to collect very large amounts of data over time. Data visualization is essential for exploring and understanding structures and patterns, and to identify unusual observations. However, the sheer quantity of data available challenges current time series visualisation methods. For example, Yahoo has banks of mail servers that are monitored over time. Many measurements on server performance are collected every hour for each of thousands of servers.
              Visualization of big time series data        
Talk given to a joint meeting of the Statistical Society of Australia (Victorian branch) and the Melbourne Data Science Meetup Group. It is becoming increasingly common for organizations to collect very large amounts of data over time. Data visualization is essential for exploring and understanding structures and patterns, and to identify unusual observations. However, the sheer quantity of data available challenges current time series visualisation methods. For example, Yahoo has banks of mail servers that are monitored over time.
              Di Cook is moving to Monash        
I’m delighted that Professor Dianne Cook will be joining Monash University in July 2015 as a Professor of Business Analytics. Di is an Australian who has worked in the US for the past 25 years, mostly at Iowa State University. She is moving back to Australia and joining the Department of Econometrics and Business Statistics in the Monash Business School, as part of our initiative in Business Analytics. Di is a world leader in data visualization, and is well-known for her work on interactive graphics.
              Visit of Di Cook        
Next week, Professor Di Cook from Iowa State University is visiting my research group at Monash University. Di is a world leader in data visualization, and is especially well-known for her work on interactive graphics and the XGobi and GGobi software. See her book with Deb Swayne for details. For those wanting to hear her speak, read on. Research seminar: She will be giving a seminar at 2pm on Monday 18 August at the Monash Clayton campus (Rm E457, Menzies Building 11).
              Data visualization        
    For those who have not read the seminal works of Tufte and Cleveland, please hang your heads in shame. To salvage some sense of self-worth, you can then head over to Solomon Messing’s blog where he is starting a series on data visualization based on the principles developed by Tufte and Cleveland (with R examples). The classics are also worth reading, and remain relevant despite the 20 or 30 years that have elapsed since they appeared.
              Data visualization videos        
    Probably everyone has seen Hans Rosling’s famous TED talk by now. If not, here it is: I recently came across a couple of other exceptional talks on data visualization: Hans Rosling again: ”Let my dataset change your mindset”. If only all statistics lecturers were this dynamic! David McCandless: “The beauty of data visualization”. Not so exciting as Hans, but some great examples. And here’s an hour-length documentary hosted by Hans Rosling called “The Joy of Stats”.
              Statistical Analysis StackExchange site now available        
    The Q&A site for statistical analysis, data mining, data visualization, and everything else to do with data analysis has finally been launched. Please head over to stats.StackExchange.com and start asking and answering questions. Also, spread the word to everyone else who may be interested — work colleagues, students, etc. The more people who use the site, the better it will be. There are already 170 questions, 513 answers and 387 users.
              About Hyndsight        
    I was thinking of writing a book on doing research in statistics. Instead, I decided to write a blog covering the same material, plus other things that might be of interest to my research team. Topics covered include LaTeX, R, writing and preparing a thesis, writing a journal article, submitting an article to a refereed journal, how to convince editors to publish your work, and writing referee reports. Topics of more specific interest to my research team include forecasting, data visualization and functional data, and local events such as meetups or statistics conferences.
              Data visualization for time series in environmental epidemiology        
    Data visualization has become an integral part of statistical modelling. Exploratory graphical analysis allows insight into the underlying structure of observations in a data set, and graphical methods for diagnostic purposes after model fitting provide insight into the fitted model and its inadequacies. In this paper we present visualization methods for preliminary exploration of time series data and graphical diagnostic methods for modelling relationships between time series data in medicine. We will use exploratory graphical methods to better understand the relationship between a time series response and a number of potential covariates.
              Love Thy (Theban) Neighbours, or how neighbour networks could help us solve the witness issue in Ptolemaic contracts (Silke Vanbeselaere)        
    In a first stage of the project on Theban witnesses in Demotic documents, we illustrated social network analysis and data visualisation as a technique for identifying and disambiguating historic actors in a large dataset. This next phase will present you with an example of how historical research can evolve after having used the identification method. Inspired by Padgett and Ansell’s seminal paper on the Medici: “Robust Action and the Rise of the Medici 1400-1434”, we now aim to explore different types of relationships attested in the Theban sources and compare the resulting networks.
              Egyptology meets Digital Humanities: The Book of the Dead (Patrick Sahle and Ulrike Henny)        
    The Egyptian “Book of the Dead” has been the object of study in a long term research project of the North Rhine-Westfalian Academy of Science and the Arts, operated at the University of Bonn (early 1990s - 2012). The “Book” is a corpus of c. 200 spells in the form of texts and/or illustrations (vignettes) and witnessed in varying order and completeness by c. 3000 objects. Within the digitization efforts of the academy, in 2011, the Cologne Center for eHumanities (CCeH) was commissioned to transform the internal research database and the image archive into a digital research platform. It is built on a project specific data model for object descriptions and a contextual knowledge base. Regarding data standards and techniques, the digital environment resides completely in the X-world: underlying XML data, an eXist database as backbone and XQuery, XSLT and XForms as processing methods to create the user interface. The archive provides several browse & search facilities allowing to explore the textual and visual witnesses. New information can be added by input interfaces, and various indices and visualizations have been prepared to support scholars in finding answers to their research questions. In addition to a general overview on the project and its achievements, three particular issues will be addressed: practical and theoretical implications of data visualization, the integration of the archive into the research community by technical interfaces, and the question of a sustainable information resource beyond the funding period.
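To make the X-technologies concrete, the few lines of Python below show what querying such object descriptions could look like. The element and attribute names are invented for illustration and are not the project's actual schema, which is served by eXist via XQuery rather than Python.

import xml.etree.ElementTree as ET

# Hypothetical export of object descriptions; the structure is illustrative only.
doc = ET.parse("objects.xml")
spell_id = "BD-017"

# Collect the ids of all objects that witness the given spell.
witnesses = [
    obj.get("id")
    for obj in doc.findall(".//object")
    if any(spell.get("ref") == spell_id for spell in obj.findall("spell"))
]
print(len(witnesses), "objects witness spell", spell_id)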
              Data Visualization Competition 2015 Winning Entries Announced        
    The winners of the data visualisation competition have been announced. The aim of the competition was to encourage participants to use well-being measurement in innovative ways to a) show how data on well-being give a more meaningful picture of the…
              Data Visualisation Competition Now Closed        
    Entries for Web-COSI’s Data Visualisation Competition have now closed. To find out more about the competition, click here. Winners will be announced shortly and invited to the 5th OECD World Forum on Statistics, Knowledge and Policy in Mexico, 13-15 October…
              Circle City Con 2015 Videos        
    Link http://www.irongeek.com/i.php?page=videos/circlecitycon2015/mainlist
    These are the Circle City Con videos. Thanks to the staff for inviting me up to record. Big thanks to Oddjob, Glenn, Jordan, Tim, Will, Mike, Nathan, & Chris for helping set up AV and record, as well as others who I'm forgetting. It was a great time.

    Track 1

    Opening Ceremonies

    Keynote
    SpaceRogue

    Rethinking the Trust Chain: Auditing OpenSSL and Beyond
    Kenneth White

    Actionable Threat Intelligence, ISIS, and the SuperBall
    Ian Amit

    Security Culture in Development
    Wolfgang Goerlich

    Simulating Cyber Operations: "Do you want to play a game?"
    Bryan Fite

    Hacking IIS and .NET
    Kevin Miller

    User Awareness, We're Doing It Wrong
    Arlie Hartman

    Departmentalizing Your SecOps
    Tom Gorup

    Shooting Phish in a Barrel and Other Terrible Fish Related Puns
    Amanda Berlin

    ZitMo NoM - Clientless Android Malware Control
    David Schwartzberg

    Data Loss Prevention: Where do I start?
    Jason Samide

    Reducing Your Organization's Social Engineering Attack Surface
    Jen Fox

    1993 B.C. (Before Cellphones)
    Johnny Xmas

    Building a Comprehensive Incident Management Program
    Owen Creger

     Is that a PSVSCV in your pocket
    Jake Williams

    Analyzing the Entropy of Document Hidden Code
    Adam Hogan

    Making Android's Bootable Recovery Work For You
    Drew Suarez

    Does anyone remember Enterprise Security Architecture?
    Rockie Brockway

    Malware Armor
    Tyler Halfpop

    Closing Ceremonies

    Track 2

    Ruby - Not just for hipster
    Carl Sampson

    Configure your assets, save your butt
    Caspian Kilkelly

    Digital Supply Chain Security: The Exposed Flank
    Dave Lewis

    I Amateur Radio (And So Can You)
    Kat Sweet

    Wireless Intrusion Detection System with Raspberry Pi
    Chris Jenks

    The Answer is 42 - InfoSec Data Visualization (Making Metric Magic & Business Decisions)
    Edward McCabe

    Running Away from Security: Web App Vulnerabilities and OSINT Collide
    Micah Hoffman

    Lessons Learned from Implementing Software Security Programs
    Todd Grotenhuis

    Stupid Pentester Tricks - OR - Great Sysadmin Tips! - Done in style of Rocky and Bullwinkle
    Alex Fernandez-Gatti / Matt Andreko / Brad Ammerman (not to be posted)

    Findings to date.
    Cameron Maerz

    Clean Computing: Changing Cultural Perceptions
    Emily Peed (No Sound)

    From Parking Lot to Pwnage - Hack?free Network Pwnage
    Brent White / Tim Roberts

    PlagueScanner: An Open Source Multiple AV Scanner Framework
    Robert Simmons

    How not to Infosec
    Dan Tentler

    Building a sturdy foundation - a program-based approach to IT Operations, Application Development, and Information Security in business
    Steven Legg

    Hacking the Jolla: An Intro to Assessing A Mobile Device
    Vitaly McLain / Drew Suarez

     

    Track 3

    Operationalizing Yara
    Chad Robertson

    An Inconvenient Truth: Security Monitoring vs. Privacy in the Workplace
    Ana Orozco

    From Blue To Red - What Matters and What (Really) Doesn't
    Jason Lang

    Using Evernote as an Threat Intelligence Management Platform
    Grecs

    Surfing the Sea and Drowning in Tabs: An Introduction to Cross-Site Request Forgery
    Barry Schatz

    Turn Your Head And Cough: Why Architecture Risk Assessments Are Like Being A General Physician
    Nathaniel Husted

    OBAMAS CYBER SECURITY PLAN DISSECTED
    Jonathan Thompson

    The Hacker Community is Dead! Long Live the Hacker Community!
    Bruce Potter

    Square Peg, Round Hole: Developing a Security Culture Within an Enterprise
    Jeff Pergal / Stuart McIntosh

Smuggling Plums - Using Active Defense techniques to hide your web apps from your attackers and their scanners
    John Stauffacher

    Deploying Honeypots To Gather Actionable Threat Intelligence
    James Taliento

    Clear as FUD: A look at how confusing jargon and technology can create fear, uncertainty, and doubt
    Chris Maddalena

    How to Budget for IDS
    Brian Heitzman

    Reverse Engineering Windows AFD.sys
    Steven Vittitoe

    Nepenthes: Netpens With Less Pain
    Andy Schmitz

    Do We Still Need Pen Testing?
    Jeff Man

     

    Workshops

    Lock Picking & Bypass Class

    Your Own Worst Enemy Landing Your First Infosec Gig Despite Yourself - Johnny Xmas

    Building an Incident Response Program - Lesley Carhart

    Security Auditing Android Apps - Sam Bown


              Book is out :)        



    You can buy a copy at: http://www.amazon.com/OpenGL-Data-Visualization-Cookbook-Raymond/dp/1782169725

The book comes with all the source code you need to build applications using OpenGL on Windows, Linux, Mac OS X or Android! I've also put in some effort to link OpenCV and the Android Sensor Manager on Android, so others who are interested in building interactive applications have simple-to-use code to get started.

With a little bit of hacking, you can connect the code base to any depth sensor and OpenCV! That allows you to build lots of applications easily.
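The recipes in the book are written in C++, but as a rough sketch of the idea - grabbing frames from a camera (a depth sensor would need its own driver or SDK) and handing them to a rendering loop - the Python/OpenCV equivalent looks something like this; the camera index and window name are arbitrary.

import cv2

cap = cv2.VideoCapture(0)                  # default webcam; not a depth sensor
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... this is where the frame would be handed to the renderer, e.g. uploaded as a texture ...
    cv2.imshow("frames", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()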

Although the book is out, there is still a lot I would like to cover, e.g., OpenGL lighting techniques, compute shaders, and OpenCL/CUDA! Those are super useful for data visualization and will be covered if this book sells well.


              Veeam Brings Powerful Innovations Designed to Deliver Seamless Digital Life Experience        
    • VeeamON 2017 commences with a raft of innovations to deliver on today’s consumer requirements for seamless Digital Life experience
    • NEW Veeam Availability Suite v10 drives non-stop business continuity, digital transformation agility and analytics & visibility to new levels
    • NEW Veeam CDP helps customers to protect and recover Tier-1 and mission-critical applications
    • NEW Veeam Availability for AWS offers industry’s first cloud-native, agentless backup and Availability solution to protect AWS apps and data
    • NEW Veeam Agent for Microsoft Windows provides ‘Always-On Cloud’ Availability for Windows-based physical servers and endpoints, as well as applications running in Microsoft Azure, AWS and other public clouds
• NEW Extending Veeam ‘Always-On Cloud’ Availability Platform with new Universal Storage API framework; IBM, Lenovo and INFINIDAT join Veeam’s storage partner ecosystem

     

Baar, Switzerland and VeeamON, New Orleans, LA – May 17, 2017:  Users want the confidence of knowing that their information is available whenever and wherever they need it.  In short, they want a seamless Digital Life experience. At VeeamON 2017, its annual customer and partner conference, Veeam® Software, the innovative provider of solutions that deliver Availability for the Always-On Enterprise™, today unveiled a raft of innovations to help enterprises ensure ‘Always-On Cloud’ Availability in a Multi-Cloud and Hybrid Cloud environment.

    Veeam helps businesses across the globe enable seamless Digital Life experiences with Veeam Availability Suite for the 'Always on Cloud'.  Veeam Availability Suite delivers a fundamentally new kind of solution by providing:

    • Non-Stop Business Continuity to deliver user confidence that their Digital Life will be available when, where and how they want it. Instantly recover – cross-cloud anything to anywhere. Backup, replication and continuous data protection (CDP) for Multi-Cloud or Hybrid Cloud environment wherever it is: private, public, managed or SaaS;
    • Digital Transformation Agility provides easy, secure and reliable cross-cloud data management and migration. Choose your Cloud, your way. Veeam delivers a software-defined, hardware and cloud agnostic platform so you can adapt to user requirements as they change;
    • Analytics and Visibility with actionable insights for data management, operational performance and compliance across your entire infrastructure.  Enterprises can now monitor, analyze and act with confidence.  Veeam and its ecosystem of partners provide robust data analytics and discovery, simplified data management, workflow automation and more. 

    “Today’s users are demanding – period.  At home, work or school, users want a seamless digital experience and anything less is unacceptable.  Enterprises are having to re-think their IT strategies and service models, and Availability is of paramount importance,” said Peter McKay, co-CEO and President at Veeam.  “As enterprises move to the Cloud, Veeam ensures Availability of services, applications and data in Multi-Cloud and Hybrid-Cloud environments.  Over the last 10 years Veeam has been the primary innovator in the space, with many industry first capabilities and highly differentiated solutions for private, managed, public and SaaS clouds, and at VeeamON we’re raising the bar even further.”

    Today at VeeamON 2017 the company unveiled NEW Veeam Availability Suite v10 and NEW extended Veeam ‘Always-On Cloud’ Availability Platform.  

    NEW Veeam Availability Suite v10

    Veeam Availability Suite v10 drives business continuity and agility to new levels by extending the ‘Always on Cloud’ Availability Platform to manage and protect:

    • Physical servers and Network Attached Storage (NAS);
    • Tier-1 applications and mission-critical workloads with NEW Veeam CDP (continuous data protection), bringing recovery SLAs of seconds using continuous replication to the private or managed cloud;
    • Native object storage support, freeing up costly primary backup storage with policy-driven automated data management to reduce long term retention and compliance costs. This includes broad cloud object storage support with Amazon S3, Amazon Glacier, Microsoft Azure Blob and any S3/Swift compatible storage.

    With v10, Veeam offers a complete end-to-end Availability and cross-cloud data management platform for enterprise customers by supporting any workloads (virtual, physical or cloud) on any infrastructure in Multi-Cloud and Hybrid Cloud environments (private, public, managed or SaaS).

    Additional major enhancements to Veeam Availability Suite include:

    • Veeam Availability for AWS
    • Veeam Agent for Microsoft Windows
    • Extended Veeam ‘Always-On Cloud’ Availability Platform

    NEW Veeam Availability for AWS

Amazon Web Services (AWS) is becoming a popular choice for customers of all sizes. However, it is the customer’s responsibility to protect and recover their applications and data running in AWS. As more mission-critical applications are deployed in AWS, the need for an enterprise-class data protection and Availability solution is increasing.

    Veeam Availability for AWS (delivered through a Veeam - N2WS strategic partnership) is the industry’s first cloud-native, agentless backup and Availability solution designed to protect and recover AWS applications and data helping enterprises reliably move to and manage a Multi-Cloud or Hybrid Cloud environment. This solution mitigates the risk of losing access to your applications and ensures protection of your AWS data against accidental deletion, malicious activity and outages.

    Included in Veeam Availability for AWS:

    • Cloud-native, agentless backup and recovery using native AWS snapshots eliminating complexity and dramatically improving recovery SLAs;
    • Mitigate risk of downtime and data loss for your AWS workloads by decoupling data and storing backups independently from the underlying AWS infrastructure;
    • Take full advantage of powerful and reliable recovery technologies to achieve industry-leading RTOs including instant recovery, granular file and application recovery.
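For readers who want to see what the cloud-native snapshot mechanism described above looks like at the AWS level, the sketch below takes an EBS snapshot with the AWS SDK for Python. It illustrates only the underlying AWS primitive, not Veeam's or N2WS's implementation, and the volume ID and region are placeholders.

import boto3

# The AWS-native building block behind agentless backup: an EBS snapshot.
# Placeholder volume ID and region; this is not Veeam/N2WS code.
ec2 = boto3.client("ec2", region_name="us-east-1")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup (illustrative)",
)
print("Started snapshot:", snapshot["SnapshotId"])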

    Veeam Availability for AWS is well positioned to revolutionize AWS data protection and become #1 Availability for AWS - the same way as Veeam has transformed VMware data protection.

    NEW Veeam Agent for Microsoft Windows

Veeam Agent for Microsoft Windows, previously announced, is being made generally available today at VeeamON 2017. This solution extends the Veeam “Always-On Cloud” Availability Platform to public cloud and physical servers. It builds upon the success of Veeam Endpoint Backup, a product which has been downloaded over one million times since April 2015. This new solution offers features and capabilities designed to ensure Availability for Windows-based physical servers, workstations and endpoints, as well as Windows workloads running in public clouds including Microsoft Azure, AWS and others.

NEW Extended Veeam ‘Always-On Cloud’ Availability Platform; New Partners: IBM, Lenovo and INFINIDAT

    The Platform delivers new Universal Storage API framework adding IBM, Lenovo and INFINIDAT to Veeam’s ever-growing ecosystem of strategic alliance partners which includes HPE, Cisco, NetApp, Dell EMC, Nimble and Exagrid. These combined solutions enable users to leverage innovative and powerful 1 + 1 = 3 capabilities dramatically improving ‘Always-On’ Cloud Availability and reducing costs without negatively impacting production. 

    Additionally, customers can leverage Veeam’s open and extensible platform to solve today’s business challenges through new integrations and partner solutions including VMware vRealize, DataGravity and Starwind Software:

    • Log data generated by storage, hypervisors and applications is large in scale and unstructured, and takes a long time to analyze for actionable insights, costing time and money. Veeam’s content pack for VMware vRealize Log Insight is a powerful analytics and monitoring tool for the Veeam Availability Suite environment designed to provide users improved visibility, actionable insights, and management of Veeam infrastructure, reducing IT management costs and mitigating the risk of downtime;
    • Data Gravity’s fully federated search and analytics capabilities enable organizations to meet corporate data governance and regulatory compliance requirements, while providing total data visibility, security and enhanced availability for IT, virtualization, and security professionals. Combined with Veeam, organizations can ensure the protection and security of their most sensitive business information while maintaining optimal service Availability;
    • StarWind Cloud VTL for AWS and Veeam offers cost-effective and scalable tape replacement with Amazon S3 and Glacier object storage, helping businesses to meet regulatory requirements for data retention with no changes to the established tape-centric data archival processes.

“With these new innovations across the Platform and ecosystem, Veeam is once again pioneering the market. Enterprises have long struggled to juggle user demands, but with Veeam Availability Suite v10, we are delivering a wide array of new capabilities that will enable our portfolio of more than 242,000 customers to deliver rich, seamless Digital Life experiences to their users,” added McKay.

    Supporting Quotes

    • “INFINIDAT is pleased to be among the first partners to leverage Veeam’s newly announced Universal Storage API framework to provide direct integration between our Infinibox portfolio of petabyte-scale enterprise-class data storage solutions, and Veeam Availability Suite,” said Jacob Broido, INFINIDAT’s Chief Product Officer. “The combined offering provides organizations with unprecedented levels of application performance, Availability and efficiencies to power their critical application workloads across enterprise data centers and hybrid cloud environments."
    • “The freedom offered by hybrid, multi-cloud IT environments carries a new set of challenges when it comes to protecting, managing and monitoring business applications that run outside the classic datacenter” said Steven Hill, a Senior Storage Analyst with 451 Research. “Next-generation DR/BC solutions need to go beyond basic data protection to help insure the continuous Availability of the underlying services that cloud-based applications now depend upon.”
    • "IT organizations are faced with ever-stricter service level requirements, more onerous governmental regulations implemented as a patchwork across geographies and stiffer penalties for any failure to meet them," said Phil Goodwin, Research Director, IDC. "Veeam's focus is on developing tools to help IT professional relieve the burden of data availability and governance that they face every day. IT leaders who attend VeeamON 2017 will have a unique, consolidated view of the Veeam ecosystem of solution providers, service providers and other partners to help them make the most of their investment in Veeam products."
    • “You always want your systems up, online and running smoothly, especially when the clock is ticking and decisions have to be made fast,” said Russ Trainor, Vice President of Information Technology for the Broncos. “Just like other companies, we have critical systems that need to be up and online 24/7. Veeam is scalable, so even though our data grows by 30 percent each year, Veeam scales with us. Reliability is where the rubber meets the road. Veeam is a reliable data availability solution. It eases our anxiety.”
    • “Our number one priority is delivering exceptional guest experiences, and we cannot begin to contemplate having critical services off-line, such as our bars or the casino floor,” said Kevin Ragsdale, Director of IT for the Hard Rock Hotel & Casino Las Vegas. “Whether guests are listening to live music, playing slots, eating in our restaurants, attending conventions, relaxing by the pool or shopping in our boutiques, our mission is to ensure their experiences are stellar. Veeam gives us the peace of mind of knowing that if anything does happen across our infrastructure, we are able to resume services in mere minutes. This a quantum leap forward in terms of performance compared to what we were used to. We are able to successfully deliver to our guests seamless experience they have come to expect.”

    For more information, visit www.veeam.com.

     

     

    About Veeam Software

    Veeam® recognizes the new challenges companies across the globe face in enabling the Always-On Enterprise™, a business that must operate 24.7.365. To address this, Veeam has pioneered a new market of Availability for the Always-On Enterprise™ by helping organizations meet recovery time and point objectives (RTPO™) of less than 15 minutes for all applications and data, through a fundamentally new kind of solution that delivers high-speed recovery, data loss avoidance, verified recoverability, leveraged data and complete visibility. Veeam Availability Suite™, which includes Veeam Backup & Replication™, leverages virtualization, storage, and cloud technologies that enable the modern data center to help organizations save time, mitigate risks, and dramatically reduce capital and operational costs, while always supporting the current and future business goals of Veeam customers.

    Founded in 2006, Veeam currently has 47,000 ProPartners and more than 242,000 customers worldwide. Veeam's global headquarters are located in Baar, Switzerland, and the company has offices throughout the world. To learn more, visit https://www.veeam.com.


              Å¹ródła energii odnawialnej w Polsce i UE, wizualizacja danych | Renewable energy sources, data visualization        


pl | This project is nothing more than dealing with a pile of data from GUS (the Polish central statistical office) and designing charts.
[it consists of 2 posters - one for the EU and one for Poland - plus a 10-minute animation]


eng | This design is all about dealing with a huge amount of data and making charts for both Poland and the whole European Union. [2 posters and a 10-minute animation]





    Unia Europejska, European Union



    Polska, Poland







              The Infographic History of the World        

    Cast: Valentina D'Efilippo, YLLW and Sono Sanctus

    Tags: data visualization, infographic, book, history, evolution, promo, graphic design, illustration and art direction


              Cities are Many Things - Urban in Motion        
Cities can be many things to their citizens. Urban as a synonym for constant change and transformation, a world in which to shape dreams and visions. The artefact city, as a construction and collage of layered times, hopes and desires, is open to interpretation. Here on UT this has been a topic from the beginning and will continue to be.

How to read the city, and how to visualise the many possible interpretations of data, charts and reports, is part of the ongoing discussion shaping the building culture of the present. From smart cities to participation, technology has been branded pervasive, particularly in relation to cities, and hopes have been pinned to the rise of data visualisation. There has not been a definitive result - certainly a business case is pitched - but, more importantly, a very specific practice has emerged. A practice that is not only lauded by city officials and leading researchers, but has become part of the individual everyday, in the sense of a very early post: You are the city.

An impression, or an interpretation thereof, by the artist Saana Inari in a video installation made for Kiveaf about Belgrade back in 2013. Described as an audiovisual installation, it is a study of the city of Belgrade, showing different sides of it: architecture, communication, traffic, humans…

    Stop Motion Beograd. Video by Saana Inari on Vimeo.

    Two to three channel vertical HD video, total duration 9 minutes. Stereo audio for the space, duration 10:30 min.
    Director / Camera / Animation / Sound: Saana Inari, made for: Kiveaf, funding: Oskar Öflunds Stiftelse

              Olympics 2012 in London and some Twitter Visuals        
The Olympics are in town and about to kick off tonight in a packed Olympic Stadium out in Stratford. The last week was all about London gearing up for this big event. There were a few changes, including the Olympic lanes for official traffic, but also simple things like changing the timing of traffic lights, for example.


    Image taken from zimbio / The Olympic Rings 2012 being shipped up the Thames past the O2.


    Image taken from msn.car / The official Olympics 2012 London car.

However, so far things are running smoothly, if only the weather plays along. But then a bit of very British weather won't harm the good spirit; it's the Olympics!

The venues are reported to be all set. The velodrome was one of the first venues to be finished, already last year. Now the Olympic Stadium is open, along with the Aquatics Centre and the smaller venues. The observation tower in the Olympic Park is also open to visitors, at extra cost unfortunately.


    Image taken from London2012 / The Olympic Park as of July 2012. Compare to earlier stages for example in previous posts on urbanTick.

London has prepared a massive events program throughout the city to go alongside the Olympic Games. There are cultural events like the ones the Tate is running at the newly opened Tanks, or of course the official Olympic Festival with a massive program of arts and culture events throughout the Olympics.

The sponsors all have their own way of being present at the games. Coke has set up a pavilion that is at the same time a musical instrument. The facade is built from sensor-equipped cushions, and visitors can play tunes by interacting with it.

EDF, also one of the big sponsors, is running a special light show on their very own London Eye. Every evening this big London attraction will have a light show on display that is governed by the mood of the nation.


    Image taken from gizmag / The London Eye with the Energy of the Nation light show in progress, earlier this week.

    The installation is using Twitter data to feel the pulse of the nation throughout the day and summarise it in the evening in a show of flashing lights and colours. The data from Twitter is analysed for the positive or negative content of each message. The overall tally of these ratings is then transformed via an algorithm into the pattern of light and colour displayed on the wheel.
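
    The exact EDF/SOSO algorithm is not described here, so the following is only a rough, hypothetical Python sketch of the kind of pipeline the paragraph describes: score each tweet against a tiny hand-made word list, average the scores over the day, and map the average to a colour band for the evening show. The word lists, sample tweets and colour mapping are all invented for illustration.

```python
import string

# Tiny, made-up sentiment lexicon (illustrative only).
POSITIVE = {"great", "love", "win", "gold", "amazing"}
NEGATIVE = {"rain", "delay", "lose", "bad", "queue"}

def tweet_score(text):
    """Count positive words minus negative words in a single tweet."""
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mood_to_colour(average_score):
    """Map the day's average score to a colour band for the light show."""
    if average_score > 0.2:
        return "warm colours (positive mood)"
    if average_score < -0.2:
        return "cool colours (negative mood)"
    return "neutral white"

# Two invented tweets standing in for the day's Twitter stream.
tweets = [
    "Love the opening ceremony, amazing!",
    "Rain again and the queue is bad",
]
average = sum(tweet_score(t) for t in tweets) / len(tweets)
print(mood_to_colour(average))  # -> "cool colours (negative mood)" for this toy sample
```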

    For the Energy of the Nation project, EDF is working with Mike Thelwall from the University of Wolverhampton and the design company SOSO to light up the London Eye with a daily custom light show.



    Talking about Twitter data visualisation, another one, pretty unrelated to the Olympics, has been put together recently by Nikhil Bobb. It's a lens-flare sort of visual effect that lets tweets blink up on a map. It looks very nice, the map is interactive, and you don't have to wait until the evening to enjoy it. You can check it out round the clock for London from HERE. Other cities are in the list on the left if you want to travel the world on a lens-flare trip. Via Living Geography.

    twitterLenseFlare
    Image by urbanTick / Tweet flare visualisation of real time tweets by Nikhil Bobb.

    Let the Games Begin!
              Personal Manufacturing - Knitting Printer        

    Varvara Guljajeva and Mar Canet Sola gave a presentation about personal manufacturing and knitting, which they published on Slideshare.

    Varvara Guljajeva is an artist working in the field of art and technology. Varvara has exhibited her art pieces in a number of international shows and festivals. The artist was selected for the residency at FFKD, IAMAS, EMARE (FACT, Liverpool), Crida, MU Gallery, Verbeke Foundation, Marginalia+Lab, Seoul Art Space Geumcheon, and more.

    Mar Canet Sola is an artist and researcher who likes to write software exploring new ways of playfulness and expression, inspired by the digital age. He works in computer games, data visualization and new media art installations. He is a co-founder of the art collective Derivart, working at the intersection of finance, art and technology. He is also a co-founder of Lummo, a small studio for new media architecture, and works as an artist duo with Varvara Guljajeva.


              Groundbreaking Mic story "Unerased" investigates transgender murder cases        

    Mic today launched "Unerased: Counting Transgender Lives," a comprehensive investigation into the epidemic of violence facing transgender people in the United States, researched and written by journalist Meredith Talusan, a trans woman of color.

    The story includes comprehensive data visualizations to illustrate the findings. In collaboration with advocacy organizations, individual advocates, academics, and victims' friends and family, the team at Mic collected and compiled information about every documented transgender homicide from 2010 to 2016, and created an interactive database containing demographic and biographic information about each victim.

    Among Talusan's key findings:

    • The database contains 111 trans and gender non-conforming victims
    • 46 (41%) of those cases remain unsolved
    • Black transgender women face the highest rates of violence: 72% of victims between 2010 and 2016 were Black trans women
    • Young Black trans women, ages 15 to 34, are estimated to be between 8 and 39 times more likely to be murdered than young cisgender women
    • If, in 2015, all Americans had faced the same risk of murder as young Black trans women, there would have been 120,087 murders instead of 15,696
    • Of the 25 cases that were tried, 5 involved Black trans women as victims and resulted in lesser charges of manslaughter or assault
    • Of the 25 cases, only 1 with a Black trans woman victim has resulted in a first-degree murder conviction

    In "Documenting Trans Homicides," the article accompanying the datase, Talusan found that because some public institutions and officials are not educated about what it means to be transgender, the identities of transgender victims are often erased or effaced after death. This is compounded by several factors, including the fact that transgender people often cannot afford a legal name change, may live in a community where obtaining correct identification on documentation is difficult, or family members reject a trans person's identity and withhold that information from authorities. Therefore the number of victims is likely much higher. 

    Of the report, Talusan said: 

    "In reporting this story and speaking with family members of transgender homicide victims, we focused on bringing light to systematic failures impacting trans people, especially trans women of color. If everyone in the U.S. were murdered at the rate young Black trans women and femmes are, there's no doubt that the public would consider this a crisis of massive proportion."

    Visit mic.com/unerased to read Talusan's article and view the interactive database. To make sure the stories of transgender people are told, and that transgender peoples' lives are accounted for and not erased, Mic has committed to continue tracking transgender homicides through this platform.

    The power of a project like Mic's is that in addition to providing information and data about the violence targeting the transgender community and the individual victims, it also provides a framework and contextualization for societal systems that consistently fail to acknowledge, raise awareness of, or offer solutions to a dangerous and alarming reality affecting some of the most vulnerable people in the U.S. 

    GLAAD talked to Talusan about the importance of this project:

    GLAAD: Why is a project like “Unerased: Counting Transgender Lives” so important?

    Talusan: Year after year, lists of trans people who've been murdered come out but the public reaction to and understanding of the epidemic of transgender murder has been limited. When Mic approached me about being involved in "Unerased," it immediately felt like a project that could break through this impasse in public awareness, by systematically accounting for transgender violence over many years, as well as the ways in which the crisis has been ignored not just by the public, but by government and social institutions at multiple levels.

    GLAAD: How do you envision the project will contribute to the cultural conversation?

    Talusan: I hope that it allows the public to more fully understand the gravity and specificity of the crisis of transgender violence. It's easy to dismiss a major problem when it affects a relatively small group of people. But it erodes our social fabric when trans people, especially Black trans women and gender-nonconforming femmes, are under so much greater threat of violence than the vast majority of Americans.

    GLAAD: For you personally, why was this an important project  for you to help take on?

    Talusan: As a transfeminine person of color myself who has experienced threats of violence, and have had close friends who’ve been attacked, I have a personal interest in intersectional transgender issues because of my lived experience. My background as a journalist and researcher also puts me in a good position to engage in the enormous amount of work required to investigate and quantify various aspects of the more than 100 documented cases of transgender homicide since 2010, so the project felt like the perfect coupling of my interests and capabilities.

    Take part in the conversation by sharing this impactful project and using the hashtag #unerased.

    December 8, 2016

          Carbon Fiber iPhone Case Low Price GPS Devices: A Review of Three GPS Systems        
    With so many different GPS receivers on the market these days, it is almost impossible to know offhand which one is right for what you need. GPS receivers have come a long way in recent years and their manufacturers keep coming out with new features and models.

    Carbon Fiber iPhone Cases
    To help you make a better selection of a GPS receiver, this article will examine three GPS devices from three different manufacturers. The first is one of the most popular GPS receiver models on the market, manufactured by Garmin. The second is made by an equally popular brand, Magellan, and the third by a lesser-known brand, Mio.

    The Garmin Nuvi 265w receiver is considered one of the few GPS systems on the market with such a rich feature set. The manufacturer, Garmin, has a reputation for manufacturing portable navigation systems and is considered one of the leaders in the field.

    The Garmin 4.3 nuvi 265 wt GPS is one of those receivers that feature a 4.3" widescreen display for easy viewing of the maps. The screen offers three-dimensional views so you can view the maps however you feel most comfortable. In addition, the Garmin 4.3 nuvi 265 wt GPS comes preloaded with hundreds of street maps for the United States, Canada and Puerto Rico.

    The second GPS device of similar caliber is the Magellan RoadMate 1200, another relatively simple and easy-to-use GPS system. The Magellan RoadMate 1200 Touch-Screen GPS unit is a no-frills GPS geared towards the first-time GPS buyer who needs a simple navigation device that just works.

    It features a 3.5-inch screen which is very bright and viewable from wide angles. The text and maps are clear and sharp. What is also very noticeable about the RoadMate 1200 is its extremely slim profile and light weight. While not iPhone thin, I believe it is the thinnest GPS device I've held.

    The third GPS device this article will examine is the Mio Moov 500. The Mio Moov 500 features a wide 4.3-inch touch screen, which places it on the larger side of the size spectrum for navigation devices. Other premium features include real-time traffic updates, a text-to-speech engine for pronouncing street names, and a generous library of 3.5 million points of interest.

    The Moov's 4.3-inch screen is far from the best-looking among GPS devices, but the design crew did a decent job of making the most crucial data visible. One more thing worth noting is that the Moov's text-to-speech synthesizer is far from the best. Overall the device is good value, primarily because of its low price.

    Don't overpay for a GPS device. Find many discount GPS devices online. Get the Mio Moov 500 portable GPS receiver for less, or even the Magellan RoadMate 1200 Touch-Screen GPS unit on sale. One you won't find that easily is the Garmin 4.3 nuvi 265 wt GPS navigator.

    Article Source: www.articlesnatch.com


              Urban Youth as Data Scientists and Network Builders        
    My friend Sidney Hargro posted an article on LinkedIn today, which I started to respond to, but ran out of characters.

    Here's what I wrote:

    More than 15 years ago I began to see the potential of data management and information networking as a skill and 21st century career opportunity, and recognized that since the Internet was an emerging tool, the starting line for rich kids and poor kids was almost the same....IF...patrons were willing to put mentor-rich non-school programs in high poverty neighborhoods, filled with computers, the internet, and opportunities for young people to learn to use those in ways richer kids would also be learning.

    A youth who learns coding, web design, blogging, video creation, data visualization, and storytelling, and how to build an online network and motivate people in a desired direction, is learning leadership skills that will have great value. These skills can be learned without the help of local schools, if the people making learning opportunities in the non-school hours have enough vision and resources. Kids could be leaving high school and starting their own consulting businesses or information networking companies --- transporting themselves and their families from poverty to the upper middle class and beyond, in one generation. Unfortunately, I know of too few places where such programs are operating in high poverty areas of Chicago or other cities.

    The opportunity still exists.

    Following are a couple of visualizations that illustrate what such a program might look like.  The first is a graphic I've used for over 20 years to describe a program with volunteers from many different industries and backgrounds serving as tutors, mentors, leaders, organizers, etc.  This Total Quality Mentoring (TQM) PDF illustrates the idea.



    This next graphic visualizes three forms of learning that would be happening in such a program, if the leaders shared this vision.  This concept map shows a focus on academic, social and work skills and the goal of building habits of using the internet to find and share information.

    It's difficult to know how many, if any, Chicago area tutor and/or mentor programs have such a vision because few use visualizations on their web sites to show program design and strategy. I've been browsing a list of organizations that I host on Facebook, and just a few attempt to show program design with videos they share.  This East Village Youth Village Program video is one way of showing program design.

    When I write about extra roles volunteer tutors and mentors might take, or the role of talent volunteers, I'm thinking of people who work with kids and other volunteers to help programs communicate their own program vision and design better, by borrowing ideas from what others are already doing.  


    This vision needs to be shared by philanthropists, business leaders, volunteers and others, as well as by program leaders, if it is to become practice in more than a few places.

    I'd be happy to help others explore this idea and others that I share on this blog and the Tutor/Mentor Institute, LLC web site.
              Data Visualization UI Kit        

    By Quanti Design

    Download this free .sketch file resource


              W. E. B. Du Bois’s Modernist Data Visualizations of Black Life        

    For the 1900 Exposition Universelle in Paris, African American activist and sociologist W. E. B. Du Bois led the creation of over 60 charts, graphs, and maps that visualized data on the state of black life.

    The post W. E. B. Du Bois’s Modernist Data Visualizations of Black Life appeared first on Hyperallergic.


              Rig & Well Completion Data Visualization New to Digital H2O’s Industry Leading Oilfield Water Management Platform through RigData Partnership        

    Market Insight Report Leverages New Dataset to Highlight 2016 Rig Forecast, an Estimate of Demand for Water, and Recent Trends in Rigs and Completions

    (PRWeb January 19, 2016)

    Read the full story at http://www.prweb.com/releases/2016/01/prweb13167956.htm


              Data Visualization Elements Kit        

    By Sagi Shrieber

    Download this free .sketch file resource


              Mediware® Highlights ServicePoint® Upgrades at Boot Camp National Training Event        

    Specialized training to focus on new reporting and data visualization platform LENEXA, KS, April 4, 2017 – Mediware will host 200 customers at its annual customer training, to be held April 11-13 in Austin, Texas. The Mediware Boot Camp Training…

    The post Mediware® Highlights ServicePoint® Upgrades at Boot Camp National Training Event appeared first on Mediware Information Systems.


              Get your Infograph kick with our Free Infographic Elements Kit        

    Over the years, infographics have really exploded in terms of popularity. People are loving these for all good reasons. Everyone prefers data visualization. They are always fun to read. Instead of looking at dull charts and line graphs, infographics allow...

    The post Get your Infograph kick with our Free Infographic Elements Kit appeared first on WebHostFace Blog.


              Writing Across Technology: Developing Instructors to Teach..        

    11/15/2017

    Writing Across Technology: Developing Instructors to Teach 21st Century Composition. The First-Year Writing Program mini grant enhances the education of both graduate students in the English Department and the approximately 3300 first-year students who enroll in one of the largest service courses at the University. To this end, we developed a new curriculum to train our graduate students in methods for multimodal composition. "Multimodality" in composition refers to working not just "on paper" but with visual, aural, and even spatial means to create texts. The compositions that first-year writers produce will always include traditional academic essays, but we are adding other commonly used media such as infographics and other forms of data visualization, podcasts, web content, and video productions.

    To bring the undergraduate FYW curriculum into the 21st century, we first have to bring our instructors there. The grant awarded for our Writing Across Technology (WAT) initiative helped defray the costs of sending two graduate students and a faculty member to the Digital Media and Composition institute, where they began researching and producing teaching materials for new instructors. We now have two faculty and three graduate students with the research background to strengthen pedagogy and the technical skills to train the incoming class of new instructors. To register: http://cetl.uconn.edu/seminars/.


              Top 9 Best C/C++ IDEs For Windows/Mac OS X/Linux/Unix        
    If you are working on a big project, you definitely need to use an IDE. There are various types of IDEs, and you should select the right one to fit your needs.
    So I decided to put together a list of the best C/C++ IDEs for different platforms.

    1) CLion

    Platforms: Linux, Mac OS X, Windows
    The well-known company JetBrains created this IDE for C/C++ developers.

    • Smart editor
    • Embedded terminal
    • Various languages and standards: C++11, libc++, boost, JavaScript, XML, HTML and CSS
    • Keyboard Shortcuts to help you with fast project creating
    • CMake support
    • Code analysis
    Why is it number one? Well, first of all because of its multi-platform support, and because it has a lot of features that will help you during development.

    2) Visual Studio
    Platforms: Windows
    An IDE from Microsoft. The only minus of this IDE is that it only works on Windows. This IDE is not only for C/C++ developers; it also supports many other popular languages. If you are working in a team, then you probably need at least the Pro version, which is not free. But if you are working alone, then you can use the Express edition of the IDE, which is free.

    3) Xcode
    Platforms: Mac OS X
    This IDE is the best choice for Mac users, and there are many programmers who prefer to use a Mac. Again, like the previous one (Visual Studio), this IDE is not only for C/C++ developers; many other popular languages are supported. It is completely free to use, so you get pretty cool features to develop your programs in C/C++.

    4) Eclipse CDT
    Platforms: Linux, Mac OS X, Windows
    The second good IDE with multi-platform support. It is also open source, which is a big plus, and completely free.
    • C/C++ Development Tools
    • Eclipse Git Team Provider
    • Mylyn Task List
    • Remote System Explorer

    5) NetBeans
    Platforms: Linux, Mac OS X, Windows
    A multi-platform, free IDE with a lot of good features that can help you in development.
    • C++11 Support
    • Qt Toolkit Support
    • Remote Development
    • File Navigation
    • Compiler Configurations

    6) Code::Blocks
    Platforms: Linux, Mac OS X, Windows
    Multi-platform support, completely free. So why do I suggest this IDE? First of all, it is a lightweight IDE.
    • Written in C++. No interpreted languages or proprietary libs needed.
    • Extensible through plugins
    • Open Source! GPLv3, no hidden costs.
    • Multiple compilers support
    • Interfaces GNU GDB
    • Also supports MS CDB
    • View CPU registers
    • Switch between threads
    • Disassembly
    The IDE has many more features, which are listed on its official website.

    7) Qt Creator
    Platforms: Linux, Mac OS X, Windows
    Qt is one of the most popular libraries. You can download the open-source version for free. A really great choice if you want to create a GUI for your application.
    • Qt Quick Compiler
    • Qt Data Visualization
    • Boot to Qt
    • Qt Quick 2D Renderer
    • Qt WebView
    • Qt Virtual Keyboard
    You can also purchase the Pro version, which gives you more features to work with.

    8) Geany
    Platforms: Linux, Mac OS X, Windows
    It's completely free to use. Lightweight and perfect IDE for C/C++ developers.
    • Syntax highlighting
    • Code folding
    • Symbol name auto-completion
    • Auto-closing of XML and HTML tags
    • Build system to compile and execute your code
    • Simple project management

    9) CodeLite
    Platforms: Linux, Mac OS X, Windows
    An open-source, free IDE for C/C++ development.
    • Generic support for compilers with built-in support for GCC/clang/VC++
    • Display errors as code annotations or as a tooltip in the editor window
    • Errors are clickable via the Build tab
    • Built-in GDB support
    • Supports C++11 auto keyword, templates, inheritance etc.

              Pattern Matters: Tangible Paper Infographic in Data Visualization #547791        

              Data Visualization - Kantar Information Is Beautiful Awards | Arty | Pinterest | Design, Visualização De Dados e Pastéis #547706        

              data visualization | Data Viz | Pinterest | Fractais, Visualização De Dados e Primeiro Lugar #547705        

              The Deep Web 3D Data Visualization Infographic by dr bolick, via Behance | Design | Pinterest | Infográfico, Visualização De Dados e Behance #547704        

              The Typographic Details Behind Typewolf’s Favorite Sites of May 2017        

    This is the 40th installment of my monthly feature on Typewolf where I share my favorite type-driven websites from the previous month and then write a little about the typographic details behind the designs. You can check out last month’s post for April here.

    Twin Pickles

    Twin Pickles

    Hawthorn is a typeface that falls squarely in the “evil serif” genre with its razor-sharp, blade-like letterforms. It’s the only font used here and is strong enough on its own to create a solid brand with not much else needed. Rather than a pure white background, the type is set on a lightly tinted green background which helps create cohesion with the bright green used in the logo.


    The Hill

    The Hill

    The Hill uses Commercial Type’s Graphik almost exclusively throughout their site. Hoefler & Co.’s Chronicle Display makes a brief appearance in the footer; however, that appears to be the only place it is used. The logo looks like it is set in Chronicle Display (or something very similar), so it may have been nice to bring the serif typeface into the design a bit more. That would help to create a little more unity between the logo and the rest of the type on the site.


    The Pudding

    The Pudding

    A few months ago, I wrote about how Canela, a typeface that isn’t quite a serif and isn’t quite a sans-serif, has been blowing up lately. Since then, it’s continued to become even more popular with four more sites on Typewolf using it. It’s paired here with the serif Publico and sans-serif Atlas Grotesk—all from Commercial Type. The type on this site is unique in that there isn’t really a consistent template used between the articles. The content focuses on data visualization and each article seems to be individually art directed to best match the visuals. Some of the pages even have their own unique type choices, using fonts from both Google Fonts and H&Co.


    Romance Journal

    Romance Journal

    High-contrast sans-serifs are making a comeback and Optima is considered a classic of the genre. It works nicely here with the subdued photography to help create a solemn mood. An extended cut of GT America is set in uppercase for the subheaders—usually it’s standard practice to add letterspacing to uppercase type, but the uppercase here actually has slightly negative letterspacing which is odd. I imagine that the font is already so wide that a tighter setting just felt more appropriate.


    Man Repeller

    Man Repeller

    The Man Repeller redesign features bright colors combined with elegant typography. A condensed style of Chronicle Display is used for the headlines—condensed faces always work well for headlines as they allow more words to fit comfortably per line. Knockout is used for the logo as well as subheaders and auxiliary text. As much as I love the typography, I think the body text suffers from a few flaws—faux italics are used, the line length is a little too wide and there is an unusually large gap between paragraphs which breaks up the reading flow.


    Stay Tuned for Next Month’s Post

    I’ll be publishing a new type-driven design roundup post like this at the beginning of every month. Enter your email below if you want to be notified when it is published.


              Comment on Designing interactive data visualizations for learning (by Katie Stofer and Lisa Anthony) by New TIDESS project website! | Dr. Katie Stofer        
    […] some of our pilot work – post by me and Lisa Anthony, co-PI from Computer Science […]
              Comment on Designing interactive data visualizations for learning (by Katie Stofer and Lisa Anthony) by Katie Stofer        
    We are presenting this pilot work as a conference paper at Interaction Design and Children (Part of ACM SIGCHI) this week! See here: http://init.cise.ufl.edu/?q=node/86 - there's a link to the camera-ready version of the paper there.
              Learn DAX, data modeling, and data visualization after MS Data Insight Summit #dax #powerbi #dataviz        

    I spent three days in Seattle at the Microsoft Data Insight Summit, delivering a preconference day about data modeling for business users, and two workshops about DAX. All of these were sold out, so even if you attended the conference, you might be interested in content that covers the same topics. And if you were in one of my sessions, you probably would like to get material to study. For this reason, I think it is useful to provide a complete recap here:

     

    • Data modeling: the preconference day was based on the book Analyzing Data with Power BI and Power Pivot for Excel. We do not have a corresponding classroom course at the moment, but if you are interested in this preconference day, Alberto Ferrari and I will deliver an updated version of Data Modeling with Power BI at PASS Summit 2017 in Seattle, on October 30, 2017.
    • Learning DAX: Alberto Ferrari and I delivered two workshops introducing the DAX language to beginners. The workshops were not recorded, but we have a corresponding free video course that has the same content and similar examples. Attending this free online course is a suggested preparation for the more advanced Mastering DAX workshop.
    • Data Visualization: this is the last day to take advantage of the launch offer for the video course Power BI Dashboard Design, saving $20 off its list price.

    You do not have any excuse now: you can watch some lectures and then practice!


              Is Data Visualization A Separate Market Or Just A Feature Of Business Intelligence Platforms?        
    Lots of my clients are confused. They start a Forrester inquiry with a question about data visualization capabilities, but when I lead them into discussion about business intelligence (BI) platforms, they say "but we already have a BI platform. All we really want is an ability to create and share data visualizations". Is there a […]
              Data Analytics Online Training Course india        
    Become a Data Analytics expert and build your career with a Data Analytics online program. Enhance your skills with Data Analytics professional courses in RDBMS, SAS, and data visualization. Data analytics professionals have huge job opportunities in the USA, UK, Canada and India.
              Interactive Data Visualization with HoloViews & Bokeh        
    none
          Python and Tableau: Building an Interactive and Beautiful Data Visualization with TabPy        
    none
              Dash - A New Framework for Building User Interfaces for Technical Computing        

    Description

    If you are a data scientist today, it's actually pretty tough to build a data visualization web-application. If you're not a full-stack developer, you're practically out of luck.

    But GUIs like sliders, dropdowns, and text inputs are extremely helpful to the data scientist or engineer. If you're an R programmer, you're in luck with Shiny. If you're a MATLAB programmer, you can use GUIDE (but good luck sharing it!). The dash project introduces a framework for building web-based technical computing apps (GUIs). It's like Shiny for Python. dash is built on top of plotly.js and react.js to provide rich interactive graphing and user interfaces, and on Python's Flask to provide a simple but scalable web server.

    This talk will introduce the scientific community to Dash. We'll go over motivations behind the project, the basic architecture of the framework, several interactive examples, and leave with a vision for the future of interactive and sharable technical computing.
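
    As a rough illustration of what this looks like in practice, here is a minimal, hypothetical Dash app. It assumes the Dash 2.x package layout (with the dash, plotly and pandas packages installed); the toy data, component ids ("power", "curve") and layout are invented for the example rather than taken from the talk.

```python
# A minimal sketch of a Dash app, assuming Dash 2.x (dcc/html importable from dash).
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
import pandas as pd

# Toy data standing in for whatever the data scientist wants to explore.
df = pd.DataFrame({"x": list(range(10))})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Power curve explorer"),
    # A slider GUI control, one of the widgets mentioned above.
    dcc.Slider(id="power", min=1, max=4, step=1, value=2),
    dcc.Graph(id="curve"),
])

@app.callback(Output("curve", "figure"), Input("power", "value"))
def update_curve(power):
    # Re-render the plotly figure whenever the slider moves.
    data = df.assign(y=df["x"] ** power)
    return px.line(data, x="x", y="y", title=f"y = x^{power}")

if __name__ == "__main__":
    app.run_server(debug=True)  # serves the app via Flask
```

    Running the script starts a local Flask server, and the callback re-executes in Python each time the slider changes, which is the Shiny-like workflow the abstract describes.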


              Stop blaming the rescuers        

    Attacks against rescue efforts in the Mediterranean must stop. The recent Italian and EU proposals are just the last steps of an ongoing de-legitimisation campaign that is putting the lives of thousands of migrants at risk.

    The Iuventa of the NGO Jugend Rettet rescues several migrants in distress during the Easter weekend 2017. Due to continuing inadequacy of state rescue operations, NGOs present in the area are often working at the limit of their capacities. Credit: Moonbird Airborne Operation / www.sea-watch.org, www.hpi.swiss

    It has been spreading like a trail of powder. A heinous argument blaming rescue efforts in the Mediterranean for colluding with smugglers, encouraging more migrants to attempt the perilous sea crossing and ultimately endangering their lives, has, over the past few months, broken out of the small circles of far-right conspiracy theories to reach the headlines of prominent newspapers and become the official position of European states and institutions. The latest proposal by the Italian government to block its ports to nongovernmental rescue vessels and the subsequent EU-endorsed plan to impose a code of conduct to limit their activities are only the most recent outcomes of months of virulent attacks. These proposals disturbingly converge with the initiative of far-right groups which are chartering their own vessel to stop NGOs at sea. Should these different initiatives succeed in blocking or hindering rescue efforts, the consequences for migrants would be disastrous.

    The accusation that rescue efforts would be the cause of the soaring numbers of crossings and deaths at sea is far from new. Already in 2014, the Italian military-humanitarian Mare Nostrum operation, which had for a year deployed unprecedented means to rescue migrants at sea, was accused of constituting a “pull-factor” that endangered migrants’ lives. The termination of Mare Nostrum, however, did not lead to less crossings, only to a staggering rise in the number of deaths at sea. It was precisely to fill the lethal gap in rescue capabilities left by the EU and its member states that NGOs courageously stepped in with their own vessels. During recent months, they have repeatedly given proof of their fundamental life-saving role, often operating at the limit of their capacities to make up for the lack of state rescue means. Despite this, it is their activities which are today threatened by a campaign of criminalisation and de-legitimisation.

    While the most heinous accusations of collusion with the smugglers have been revealed to be baseless and receded from mainstream discourse, a subtler but no less grave accusation initially formulated by Frontex, the European Union border and coast guard agency, and reminiscent of that formulated against Mare Nostrum, has proven remarkably resilient. A recent article by the New York Times titled “Efforts to Rescue Migrants Caused Deadly, Unexpected Consequences” offers the latest example of this argument. After showing through data visualisation and cartography that in the past few years rescue operations have moved closer to Libyan coasts, the NYT authors uncritically voice the concerns raised by Frontex that this shift would have “introduced a deadly incentive for more migrants to risk the journey and for smugglers to launch more boats”.

    They also claim that the presence of rescue vessels would have encouraged smugglers to use even more dangerous tactics, such as using “flimsy boats and provide just enough fuel to reach the edge of Libyan waters”. In sum, while admitting that “rescuing migrants closer to the Libyan coast saved hundreds of people at sea”, the article casts a dark shadow over rescue efforts in the Mediterranean, claiming that, despite themselves, “strategies to rescue migrants in the Mediterranean Sea […] have pushed desperate migrants into even more desperate situations”. With a cunning sleight of hand, the rescuers are turned here into the culprits for the growing numbers of deaths at sea.

    Rescued migrants on the deck of the Iuventa of the NGO Jugend Rettet during the Easter weekend 2017. Despite a nominal capacity of no more than 100 people, the Iuventa had to take on board hundreds of people to make up for the absence of state-led rescue assets. Credit: Giulia Bertoluzzi

    As humanitarian actors know all too well, they must always confront the possibility that their intervention may unwillingly amplify the problem they set out to alleviate. But today, there is solid evidence that these arguments are fundamentally mistaken and that rehearsing them uncritically only contributes to legitimising a dangerous policy.

    As we have demonstrated in a recently published report, rescue efforts were not the main driver of increasing arrivals over 2016. Data collected by Frontex itself provides evidence that the overall increase during that year was mainly due to more crossings by migrants of several West and Central African nationalities, an increase which predated the deployment of NGO vessels. Furthermore, a 46% increase in the number of arrivals was registered in the western Mediterranean for 2016, while no proactive rescue operation was deployed in that area. Faced with political and economic crises in several countries on the African continent and with appalling conditions in Libya, migrants have little choice but to attempt the sea crossing, with or without rescue means.

    We also demonstrate that rescue efforts by NGOs were not the main cause of worsening conditions of crossing but a life-saving response to evolving smuggling practices that predated their intervention. For instance, the shift from larger and more solid wooden boats to rickety and smaller rubber boats, which has been acknowledged as a major factor in the increasing deaths at sea, occurred already in late 2015, when the presence of NGOs was still marginal.

    One of the most important factors leading to this trend was the EU’s anti-smuggling Operation Sophia, which, by destroying smugglers’ vessels once migrants had been rescued, prevented the re-use of wooden boats. Another crucial factor has been the increasing attempts by the Libyan Coast Guard to (selectively) intercept migrant boats. These and other factors converged to push even further the downward spiral in the conditions of crossing offered by smugglers. While it cannot be ruled out that NGO rescue efforts contributed to consolidate specific tactical shifts in the practices of smugglers, it is wrong, we show, to claim that they were driving them.

    Finally, and most importantly, our statistical analysis indicates that there is a strong negative correlation between the migrant mortality rate and the deployment of NGOs’ rescue vessels. In short, over the course of 2016, the more NGO vessels were deployed, the safer the crossing became for migrants. This provides the strongest demonstration of the life-saving role played by rescue efforts and a forceful empirical rebuttal of their supposed “deadly consequences”.
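
    To make the statistical claim concrete, the short sketch below shows the kind of check involved: a Pearson correlation between a monthly mortality rate and the number of NGO vessels deployed. The monthly figures used here are invented placeholders for illustration, not the IOM/UNHCR or Forensic Oceanography data.

```python
import pandas as pd

# Twelve invented monthly values, for illustration only.
months = pd.DataFrame({
    "ngo_vessels":        [2, 3, 5, 8, 10, 12, 13, 12, 10, 7, 4, 3],
    "mortality_rate_pct": [4.1, 3.8, 3.2, 2.5, 2.0, 1.7, 1.6, 1.8, 2.1, 2.9, 3.5, 3.9],
})

# Pearson correlation: a value close to -1 means that months with more NGO
# vessels deployed tend to have a lower mortality rate.
print(months["ngo_vessels"].corr(months["mortality_rate_pct"]))
```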

    Monthly migrant mortality rates for 2016 (based on IOM and UNHCR data) and number of deployed NGO rescue vessels, showing a striking negative correlation: the more vessels are present, the safer the crossing becomes for migrants. Credit: Forensic Oceanography

    The ending of Mare Nostrum was recognised too late by Jean-Claude Juncker, President of the European Commission, as a “serious mistake” that “cost human lives”. Today EU institutions and member states are on course to repeat this same “mistake”, with a wicked twist. This time they are not simply persisting in their resolve not to provide adequate rescue means with the aim of deterring migrants from crossing, but they are also actively seeking to stop those who made up for their lethal absence and continue to remind the EU of th