1994

From: "scrittore della domenica" <scritdom72@unimi.it>
To: "lorenzo" <Lrnz73@unipi.it>
Date sent: Mon, 14 Feb 1994 12:09:05
Subject: Here I am

Damn you! What was wrong with paper letters? No, sir. "Come on, send me an e-mail! It's easy, it's fun, you'll see!" So now I've had to enrol in the computer lab of this blasted university, where you need an application on stamped paper even to get a coffee at the bar. And besides, I barely know how to use a computer. Right now I should be in the study hall preparing for my comparative private law exam (imagine that); instead I'm here wrestling with a keyboard and a mouse.

Anyway. Here's why I'm writing: you absolutely must tape me some rock music. You know the kind? The stuff you've always said I snub. Especially as much seventies progressive as you can get hold of. I don't know: Yes, Genesis, King Crimson, that sort of thing. And then grunge. Not Nirvana, because honestly I can't stand them: Smashing Pumpkins, Pearl Jam, Soundgarden. And maybe some dance music too, but of a certain standard. And there's a track by an English band that's been haunting me, it's called... it's on the tip of my tongue. Never mind, as soon as I remember the title I'll tell you.

No, I haven't gone mad. The thing is, I've met a very nice girl who has these musical tastes. Very beautiful too, to tell the truth. In fact, the only thing I'd really like to tell her is exactly that: that I find her gorgeous. But you know me; there's no way; if I try to broach the subject, I go hopelessly mute. So I urgently need other topics of conversation. She studies English, and so far I can manage. I've even gone with her to lectures and studied her notes. You know what? I'm becoming, I think, the greatest T. S. Eliot expert on the whole lower Ionian coast of Reggio. This summer, if nothing else, I'll be able to perform in the seafront bars, reciting The Waste Land from memory.

But all this isn't enough. Sooner or later I'll have to find the courage to ask her out some evening: to a concert, say, or even (don't laugh) dancing. Because I've realized she doesn't come to the matinées at the Conservatory. Well, this is where you come in. You have to help me get beyond Beethoven. I repeat: get me some rock, because I need it like bread!

Waiting to hear from you; take care.

P.S. I've remembered the title of that song: Creep, by Radiohead.


From: "scrittore della domenica" <scritdom72@unimi.it>
To: "lorenzo" <Lrnz73@unipi.it>
Date sent: Mon, 21 Mar 1994 15:31:22
Subject: Re: Re: Here I am

Big news! We've occupied. Our hall of residence is now the Occupied Self-Managed University Hostel. We decided it on Friday evening, at the end of a heated assembly about the latest increases in fees and canteen prices. I've claimed most of the night shifts at the front desk: partly to avoid the cleaning shifts, partly because I like staying up late, and because during the occupation there's a great atmosphere. People talk, drink, smoke tobacco and not only tobacco: frankly, I'm not studying much. But I'm having fun.

Last night, for instance, around midnight the situation comprised a bottle of mirto, a stereo, six or seven people including Alice (the girl I told you about in my last letter) and me. The trouble is that after an hour or so they went out, to a club I think, while I had to stay and finish my shift; but they left me the bottle, half empty by then anyway, and the stereo. Or maybe two bottles and two stereos, at that point I couldn't quite tell. I somehow managed to take out the Rage Against The Machine cassette and put on Monteverdi's Lamento d'Arianna, which at that moment was actually more fitting.

Speaking of which: have you taped the cassettes I asked for? If you're coming down to Calabria for the elections too, you can give them to me in a week. But have you seen that billionaire clown? And to think someone may even vote for him. We live in a wonderful country.


From: "scrittore della domenica" <scritdom72@unimi.it>
To: "lorenzo" <Lrnz73@unipi.it>
Date sent: Fri, 15 Apr 1994 16:05:30
Subject: kiss it all goodbye

You know something? Three years ago, after my school-leaving exam, I made only one mistake.

At the station ticket window I asked for a ticket to Milan. I was wrong: I should have bought a one-way to London, or Paris, or Berlin.

The occupation is over. The Agency for the Right to Study has promised that, under the new budget, a full 3.75% of the additional resources will be allocated to remedy the recent fee increases, subject to the principle of balanced accounts.

In short: we got nothing at all.

I've heard Alice has got herself a boyfriend. An engineering student, Milanese for several generations and from an excellent family, a decent amateur basketball player. He drives a Swedish car with dual fuel, unleaded petrol and LPG, because the environment must be protected. A really nice guy.

All is lost. While this ridiculous nation, having once again voluntarily handed itself over to an unhinged little tyrant, heads toward its sad destiny, I hereby inform you of my irrevocable decision.

I am putting my mistake right.

I'm emigrating.

And enough with these blasted e-mails! I'll send you my new address abroad soon.

You know something else? Siamese Dream by the Smashing Pumpkins is far more boring than the most tedious Bruckner symphony.


From: "scrittore della domenica" <scritdom72@unimi.it>
To: "lorenzo" <Lrnz73@unipi.it>
Date sent: Tue, 26 Apr 1994 16:28:43
Subject: just kidding

Well, nothing doing: I didn't go abroad.

Let me explain what happened.

You know I've always liked staying up late; lately I've been overdoing it; I often see the dawn. For a while now I haven't even gone down to the study hall, so as not to risk running into Alice. In the evenings I stay shut in my room, listening to Brahms.

A piece of advice of mine for the young: if you live in a hall of residence, try never to fall in love with another resident. Especially if it's unrequited.

Right: Sunday night, while I was studying, I put on Tristan. Just for a change. But shortly after the prelude, I had to switch it off.

Frisch weht der Wind
Der Heimat zu
Mein Irisch Kind,
Wo weilest du?


My eyes misted over, and I discovered I'd run out of kleenex. I closed the book. I got up from the chair and went to splash cold water on my face. Then I went back to the desk.

I turned on the radio's fifth channel (the one that broadcasts only classical music). There was an avant-garde piece; something I didn't know, a kind of sound collage along the lines of the Beatles' Revolution 9, you know the one? Crowds in tumult, strange sounds, ambient noises, tape loops, human voices declaiming incomprehensible syllables. A delirium.

Strong stuff! I plugged my headphones into the radio and blasted the piece into my ears, closing my eyes and tilting my head back. At a certain point, out of the carpet of sound a female voice began to emerge, seeming to pronounce a coherent text. At first it sat in the background, and you couldn't quite make out what it was saying; something about Vietnam. Little by little the voice grew louder, clearer, coming ever more to the fore. At the end, with an astonishing sonic presence, the voice articulated, no, carved, these words: "Stay here, and fight for your dignity as men."

"Avete ascoltato: Luigi Nono, Contrappunto dialettico alla mente", ha detto l'annunciatore.

I pulled out the jack, took off the headphones and switched off the radio.

I reconsidered my intention to emigrate. I thought: but haven't I already done it? I have already emigrated. And before me my father, in the very years when Nono was composing that piece.

How many more times do I want to pack my bags?

Can running ever farther away really be the only solution?

The next day, April 25th, I went to the demonstration. The march was to assemble at two in the afternoon; but I woke up so late I couldn't even have breakfast. I got a coffee from the vending machine down in the lobby and went out, toward Porta Venezia.

A very big march was expected. It was immense. I'd never seen so many people all together! And such variety, so many colours, such music, such joy! And so much rain, too: it came down in buckets for the whole duration of the demonstration.



I was at the back of the march, and believe it or not I couldn't even get into Piazza Duomo, it was so packed! I turned into Piazza Fontana, then Via Larga. I wandered for a while; I don't know who or what I expected to find, apart from something to eat, since I was ravenous.

I found myself, I don't know how, in Piazza Sant'Alessandro, near the languages faculty, which of course was closed; but there was a snack bar miraculously open; I dived straight in.

I was sitting on my stool, my back to the entrance, devouring my pizzetta, terrible of course as only Milan knows how to make them, when I seemed to notice a sort of change in the light. As if it had suddenly stopped raining and a splendid sun had come out.

I turn around, and I see Alice walk into the place and come toward me.

She was in demonstration gear: torn jeans, grunge shirt, no make-up (she who is usually so well groomed). Her long blonde hair damp from the rain, which meanwhile kept pouring down.

"Ciao", le dico. "Dov'è il tuo fidanzato?"

"Sciocco", mi ha risposto. "Oggi ci sono cose più importanti", ha detto. "C'è mezza casa dello studente qui fuori: abbiamo fatto uno striscione molto figo, lo andiamo ad attaccare al portone dell'I.S.U. Cosa fai qua, solo come un gufo? Vieni anche tu!"

I didn't need to be told twice.

And so. In a month I have my trade-union law exam. Interesting subject, you know? I think I'll ask to write my thesis on it.

For now, I'm staying here.

(A story previously published on Evulon. Any resemblance to actual facts, persons, cities, events, politicians, bars and pizzette is purely coincidental.)
Scientific Workflow Tools
Crawl, Daniel and Altintas, Ilkay. (2010) Scientific Workflow Tools. In: Nanoinformatics 2010, November 3 - 5, 2010, Arlington, VA. (Unpublished)
Cyber attack in Venezuela: 7 million users of the state mobile phone company cut off
More than half the users of Venezuela's national mobile phone company were left without service following a cyber attack, the government of the country, where protests against the regime have been going on for more than four months, said on Thursday, AFP reports.
Tg-Jiu Emergency Hospital buys 12 billion old lei worth of software solutions from the smart and shady boys of IT
The sum allocated for implementing an IT system at the Tg-Jiu County Emergency Hospital has attracted major IT firms, which are fighting over the one-year contract, after ...
Twitter alters network relationships and influences communication processes

If the word blog was the big sensation of 2008, the most written, most spoken, most typed, most searched, then in this 2009 now breathing its last, the fashionable expression on the web and beyond was Twitter. Analogue and electronic media surrendered to the microblogging site. Newspapers, magazines, book publishers and television networks devoted page after page, story after story, minute after minute to the phenomenon that led millions of people around the world to communicate through messages of just 140 characters.

In Brazil, as happened with Orkut, Twitter became a craze and won over teenagers, journalists, advertisers, football players and coaches, celebrities, CEOs, business people and politicians. Discussions took on a telegraphic aspect, and a multitude of people began murmuring, and sometimes shouting, their dispatches on the microblogging site. But as happened with blogs, there is still a share of internet users who harbour a certain distaste for the phenomenon that shook up social networks on the web this year.

A new cyber-wave will probably emerge in 2010, but Twitter can be considered the great web phenomenon of this decade now drawing to a close. So much so that many cyber-prophets went as far as declaring the end of the "blogger era" on account of the microblogging site's resounding success. What remains evident from all this success, though, is that many users of this social network, or twitterers as they are known on the web, still need to learn to use the full potential of the valuable contact network that Twitter is.

The microblogging site can help position a brand in the market, build customer loyalty, and boost or ruin careers, depending on the use made of it and on the competence of whoever crafts the telegraphic 140-character messages. Academic studies and books on how best to use Twitter have begun popping up on the market. Probloggers pour out torrents of posts on how to use the tool well for personal, professional or corporate marketing. You need to know with what focus and for what purpose you are going to use Twitter. Using it for its own sake, without quite knowing why or how, merely adds to the cyber-shouting that contributes nothing.

Just as happened with blogs in the quite recent past, which in this digital era seems decades ago, the debate is over whether journalists' use of Twitter constitutes journalism or not. I say yes and no, depending on how the power of the 140 characters is used. Just as bloggers managed to build credibility from their personal pages and be sought out by media companies to work as columnists, there is in the Twittersphere a large number of twitterers who manage to build a good, credible reputation, winning over many followers who want to know what they have to report that day, at that moment.

So, to do well on Twitter, as on any communication platform you use, you must produce relevant content, something useful to internet users. Twitter works as a kind of news aggregator, with the difference that users can choose which "communicators" and which content they will receive in their "message box". From a marketing point of view, many companies have already grasped the spirit of the thing and are making good use of the microblogging site.

2010 will be a year in which new uses and applications may emerge for Twitter. The Twitter company is restructuring to keep up with the resounding growth the service has achieved on a world scale. What will Twitter hold for us over the next 365 days? We'll just have to live and see. Only one thing is certain: the world of communication, already profoundly altered by blogs, will definitively never be the same after Twitter.

Blogger, specialist in Social Communication and New Technologies, and consultant in Social Media Marketing. Author's blog: http://luizvalerio.com.br Email: luiz.valerio.silva@gmail.com Twitter: http://tuitter.com/valerio34


how much fun I have tinkering with the PC
Among the things I keep my brain busy with so as not to think about myself, politics occupies a prominent place. But given the latest heavy disappointments, and the feeling that for a few years it's best left alone, it is ceding first place to another occupation: computing, or rather Linux, no, Ubuntu!
If only because he (she?) showers me with satisfactions.

The other day (Monday) the card arrived, as expected. Then a pause while I waited for my trusted IT guy, since I didn't much trust my own abilities (in the meantime I was reading up on how to dismantle and reassemble every component of the PC with the plugs in, eyes closed, in under a minute).
Work (all by myself) to:
  • prepare (remove the old ATI drivers)
  • extract the old card
  • fit the new card
  • curse my way to freeing a molex power cable, using a knife, a cutter and a screwdriver to lift the plastic tie without sawing through the cables underneath it
in short... an ordeal.
Then I reboot the PC, Ubuntu starts up, resolution at 800x600, oh well. I install the Nvidia drivers using Envy; everything goes fine, but the resolution stays at 800x600.
This is where the trusted IT guy comes in, whom I rope into reconfiguring Xorg.
In the end everything works, except Compiz.
Oh no you don't! I bought myself the fancy video card, I want Compiz!
In two hours I messed the system up so badly that by the time I shut down I knew the only way out was to format and reinstall everything from scratch, taking the opportunity to move to the new release, 8.04.
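
(For context: on Ubuntu of that era, "reconfiguring Xorg" usually meant hand-editing /etc/X11/xorg.conf. A minimal sketch of the stanzas involved, with the identifiers and mode list here being illustrative assumptions, not my actual file:)

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"       # the proprietary driver that Envy installs
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
    SubSection "Display"
        # Without an explicit mode list, X can fall back to 800x600.
        Modes  "1280x1024" "1024x768"
    EndSubSection
EndSection
```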

The next morning I install the drivers for Windows (and am reminded why I stopped using it); I ought to format and reinstall everything over there too.
So I reinstall Ubuntu from the CD, installing the new 8.04.
What can I say?
It runs like a dream.
A little vent...
Preamble: it's a tight knot in my stomach that makes me write... and it's not all down to hunger.

The gist is that I ought to look for a part-time job: earning a little money might give me a minimal sense of concreteness, or even just a sense, full stop. It could make me independent of my parents, and of the attendant guilt over the freeloading life I lead here, given that my studies are progressing very slowly (are they progressing at all?).

With that money I could then solve another problem: housing. A studio flat, for instance, could be among the conceivable solutions for next September. The fact is I really can't take life in a shared room, in a flat with strangers, any more. It may be a limitation of mine, but I'm not comfortable with it (after four years I can safely say so).

But part-time job + studio flat + independence from my parents is by far an unworkable formula... If I wanted a studio flat my expenses would rise steeply, and rather than a part-time job I'd need 1000 euros a month...
Not to mention that if I actually found a serious job (would I even want one?) I'd stop studying for good.

And then again: why don't I study? why do I have bullshit exams (4-5 of them will be genuinely demanding, the rest simply aren't) and still can't pass them?
The sense of studying things that are interesting but useless for finding a job probably plays its part... The bewilderment of wondering what to do after the bachelor's in history is an ugly feeling. A master's in history, in geography, something in journalism?
And then, yes, I've lost the will to study. But if I don't study, what do I come up with instead?

I think back to when I chose to enrol in history... The idea of enrolling in computer science crossed my mind only very briefly, the idea of staying at home even less.
Would I make the same choices today?

For Bologna, yes, without a second thought... computer science still puzzles me even today, but maybe it would have been the wiser choice. Or maybe not; it's too late now.
I wonder whether there's any demand for historian-informaticians in the job market :)

Listening to Joe Strummer & the Mescaleros, Global a Go-Go
failed IT guy
The idea was to solve the problems with Linux and give the PC a good dose of extra power; to that end a new video card should arrive tomorrow, a Point of View 7600 GT, in the AGP version of course (it took some sweat, but in the end I found one).
(solving the problems with Linux = switching to Nvidia and its drivers, leaving ATI to its own troubles)

A brilliant idea; I had searched well, I had checked everything I thought needed checking. And yet I got my sums exactly wrong...

While waiting for the new arrival I thought I'd clean the case (which I imagined was crammed with dust). I opened it, wiped the outer surfaces with a cloth, and dusted and picked over the inner ones as best I could; by the end I must have removed a kilo of dust...
On opening it (for the first time, since earlier it was under warranty and after that I had neither good reason nor any desire to tinker with it) I noticed that:

the people who assembled it at FRAEL did a masterly job: the cables are packed away beautifully, leaving ample room in the case; they even added a fan I didn't think was included, guaranteeing excellent air circulation (which the dust had duly halved); and since there were 2 "free" USB headers they even fitted 4 extra USB ports on the back (of which only 2 work, naturally). A fair question: what the hell am I supposed to do with 8 USB ports? oh well.

In the end, closing it back up after all that painstaking work, I did a couple of sums and pictured the disasters in store for tomorrow...

The power supply surely won't handle the video card / the video card will demand twice the voltage supported by my motherboard (I didn't know this last one, I've just read it in the MB's little manual). I can already see myself buying a new power supply, unplugging all those admirably folded and packed cables, and desperately plugging in the new ones with no memory of where they went...
Or selling the new video card on E-Bay... :(
what a pain... a guy can't even buy a video card...

Until these discoveries, hardware struck me as a fascinating subject
To finish the bachelor's
What follows is a desperate manoeuvre: using public self-shaming to try to impose some discipline on my eternally distracted mind.

So:
* from today until 16 December, study for the Le Monde Diplomatique seminar (wonderful);
* from 17 December to 20 January, study for the Modern History exam;
* over the holidays, and by 30 January, prepare the "tests" for the Computing for the Historical Sciences exam (let's draw a merciful veil over that one...);
* from 7 February to 15 April, prepare the History of Political Doctrines exam (scary)

onward like this with the exams, aiming at graduation by December... will I make it?

I already have a few ideas in my head... I'd like to dig deeper into the history of computing and the computer, and into the question of the digital divide.
The problem will be finding professors who haven't stopped at the internal combustion engine...

I have other ideas too, but for now I'm daydreaming about this one; one at a time!

surreal or grotesque?
The news item (one of them) of the day leaves you dumbfounded at first; then maybe you decide to laugh it off.
Yes, because the bill on publishing (careful, it's a pdf), as they pointed out on Punto Informatico (picking up Spataro on Civile.it), is a threat to freedom of expression. And not only that: it's a farce.
Utterly ridiculous to think of equating every publication on the internet with an editorial product; and utterly ridiculous to think such a measure could ever be enforced.

In essence, as part of a reorganization of publishing legislation, the intention is to treat as an editorial product (with everything that entails in terms of bureaucracy, a single responsible editor even on collaborative platforms like Wikipedia, and greater liability for writers) every publication on the internet, under a definition that covers even blogs (including non-profit ones), personal sites and so on.

If it were enforced today, the Italian blog platforms (Splinder etc.) would close, the Wikimedia projects in Italian (Wikipedia etc.) would close, countless websites would close, and Italy would essentially stop taking part in the net.

Have I gone mad?
No; it's so obvious that this law is unenforceable that I'm not at all worried it was approved by the Council of Ministers.
One more confirmation that ministers & co. don't read the laws, or don't understand what they mean. Or simply ignore the existence of the internet.

OK, it will never be passed... but in the meantime it's best to sign the petition asking for its withdrawal, just in case.

Solar irrigation cooperative to solve groundwater crisis
News this week
A solar water pump (Source: Sehgal Foundation)

India's groundwater crisis: Gujarat's solar irrigation cooperative embarks on a solution

The world's first Solar Pump Irrigators’ Cooperative Enterprise (SPICE) has been formed in Dhundi village in Gujarat's Kheda district. Members of the enterprise have not only made a switch from diesel to solar pumps but are also selling power to the local electricity utility, thereby creating a parallel revenue stream. The project has been initiated and partly funded by IWMI-Tata Water Policy Program. By December 2016, the six members had together earned more than Rs 1,60,000 from the sale of surplus energy to the local power utility.

Storage capacity of Karnataka reservoirs lost to siltation

With the accumulation of silt in Karnataka's 11 major reservoirs, nearly 10 percent of their storage capacity, enough to supply at least five cities as large as Bengaluru each year, is being lost. The loss of storage is primarily in the reservoirs of north Karnataka. The worst affected is the Tungabhadra dam, which sees nearly 17 days of overflow due to high siltation levels. To tackle the issue, a proposal has been made to construct a dam that will hold 30 tmcft of Tungabhadra water.

Wetland panel formed in Gujarat

The Gujarat government has finally formed a 23-member state wetlands conservation authority under the Wetlands (Conservation and Management) Rules. Apart from government officials, the committee has representatives of Indian Space Research Organisation (ISRO), Bhaskaracharya Institute For Space Applications and Geo-Informatics (BISAG) among others. The aim of the committee is to examine wetlands, review conservation activities and make suggestions to the central government and financial agencies for various conservation projects.

Centre adopts two villages along the Ganga river

The Ministry of Drinking Water And Sanitation, in collaboration with Global Interfaith WASH Alliance (GIWA), has adopted two villages along the bank of Ganga river to make them model Ganga villages. The two villages are Veerpur Khurd in Dehradun and Mala in Pauri Garhwal. With the help of various stakeholders and ministries involved, these two villages will be provided with solid and liquid waste management facilities, drainage systems and groundwater recharge. 

Odisha blames Chhattisgarh for providing wrong information on water

The Odisha government has blamed the neighbouring Chhattisgarh government for giving wrong information on the flow of water to the Hirakud reservoir downstream on the Mahanadi river. As per the claims, the latter is operating the gates at the Kalma barrage improperly, to intentionally restrict the flow of water to Hirakud. The matter will be taken up with the Central Water Commission to ensure the free flow of water in the Mahanadi river through Odisha.

This is a roundup of important news from May 29 - June 5, 2017. Also read the policy matters this week. 


Springer Publishing Continues AJN BOTY Winning Streak With Eleven Awards, Six First-Place Wins

Springer Publishing Company is excited to announce that the American Journal of Nursing (AJN) has chosen ten of its titles as winners of the 2016 AJN Book of the Year awards in their respective categories. With eleven awards and six first-place wins, Springer Publishing has more to celebrate this year than any other winning publisher. 2017 also marks the fifth year in a row in which Springer Publishing has been the most awarded nursing publisher.

The winning titles are:

First Place Winners:

Second Place Winners:

Third Place Winners:

Since 1969, the AJN Book of the Year awards have honored the best nursing books published, today encompassing 20 distinct categories. The top three books in each category are selected by a panel of experts in each field.

Infonotas
A blog of notes on computing (software, hardware).
Laptop and home appliance repair
We are a company dedicated to the repair of electronic and computer equipment. Our mission is to guarantee quality and innovation in services for electronic devices and computers in general, which allows us to offer our customers solutions that improve their finances while we do our bit for the environment. We have qualified technicians with more than 5 years of experience. Don't put your devices in inexpert hands; call us and we will come and collect them.
Bioinformatician II - University of Massachusetts Medical School - Worcester, MA
We are looking for enthusiastic bioinformaticians and computational biologists to become part of the team and collaborate with the computational community both...
From University of Massachusetts Medical School - Fri, 21 Jul 2017 18:27:29 GMT - View all Worcester, MA jobs
Research Scientist, Bioinformatics - George Washington University - Foggy Bottom, MD
We are seeking a highly motivated, skilled, and collaborative computational biologist to contribute to multiple NIH -funded microbiome research projects....
From George Washington University - Fri, 16 Jun 2017 17:12:57 GMT - View all Foggy Bottom, MD jobs
Biology: Assistant Professor of Biology - University of Richmond - Richmond, VA
We seek a biologist who has expertise in analysis of big data, modeling, bioinformatics, genomics/transcriptomics, biostatistics, or other quantitative and/or...
From University of Richmond - Thu, 06 Jul 2017 23:17:18 GMT - View all Richmond, VA jobs
Data Scientist/Quantitative Analyst, Engineering - Google - Mountain View, CA
(e.g., as a statistician / data scientist / computational biologist / bioinformatician). 4 years of relevant work experience (e.g., as a statistician /...
From Google - Sat, 05 Aug 2017 09:55:57 GMT - View all Mountain View, CA jobs
Automation on Windows: Chocolatey
A fine piece of software came out today: Chocolatey v0.9.9.1 (link to the commit on github).
That's why I've decided to touch on a kind of IT automation today; a term talked about far too little in Italy, where the fields that usually employ automation are mechanics and electronics. Let's start from the definition:

IT automation

The joining of separate software and systems so that they become capable of managing and regulating themselves.

Granted, to anyone who knows the terms it may sound like a repetition: automating automatic information processing. And yet the number of repetitive operations each of us has to perform at the PC keeps shrinking thanks to automation, which is growing dramatically.

Creating a mosaic map for an Australian library
So much work to get to a navigator that updates its maps and traffic conditions
Tedious tasks such as updating your software, making sure a saved version of your file always exists somewhere, or keeping a set of rules to filter spam effectively all fall under this definition.
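
To give a minimal, concrete taste of the kind of automation Chocolatey enables (the package names here are just examples), from an elevated prompt:

```
# Install packages unattended; -y auto-answers every confirmation prompt.
choco install 7zip git -y

# Keep everything up to date with a single command,
# e.g. run from a scheduled task.
choco upgrade all -y
```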


Computing Slang Neologisms
For the whole anglophobe audience out there, I've decided to do a new translation! We'll go to the heart of the philosophy of programming and see what the most abused kinds of patterns and anti-patterns of the computing world are called!
I warn you all that the task is long and pedantic: we're talking about neologisms ill-suited to anyone who turns up their nose at "zipping", "clicking", "tweeting", etc...
As usual I'll use a highly personal, macaronic translation of my own, and may grammar have mercy on me.
Pokemon Exceptions
Exceptions: gotta catch 'em all!
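
Since the anti-pattern's name alone doesn't show the damage, here is a minimal sketch of it (the function names and the port-parsing scenario are mine, purely illustrative):

```python
# The "Pokemon exception" anti-pattern: gotta catch 'em all.
# The bare `except Exception` swallows every error, including genuine
# bugs (typos, wrong types), and silently hides them behind None.
def parse_port_bad(text):
    try:
        return int(text)
    except Exception:   # catches *everything*, bugs included
        return None

# The saner version: catch only the failure mode you actually expect.
def parse_port_good(text):
    try:
        return int(text)
    except ValueError:  # only "this string is not a number"
        return None

print(parse_port_bad("80"))     # 80
print(parse_port_good("oops"))  # None
```

The two behave identically on these inputs; the difference shows when an unexpected error occurs, which the first version would quietly eat.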


          Libromatica        
Ecco la traduzione italiana al concetto espresso da Jeff Atwood nel precedente post "I libri fanno schifo", dove cerchèrò di spiegare il mio pensiero a riguardo.

Computing, books and ideas: who wins? Who is the hero?

"I'm sorry, Dave," said HAL 9000. "I'm afraid I can't do that."
Illustration: Josh Cooley - Golden Book
With this iconic image, showing an illustrated e-book based on "a film", in turn based on "a book", I hope to goad you into reading this post.

          This application finds a dish's recipe from nothing but the photo you show it        

Nick Hynes, an electrical engineering student and computer scientist at the Massachusetts Institute of Technology (MIT), has developed Pic2Recipe, an application that can find the recipe of a dish based solely on a photo. According to Hynes, tests have shown that the application returns a correct recipe in 65% of analyses. However, the application still needs to be refined […]

The post "This application finds a dish's recipe from nothing but the photo you show it" appeared first on Express [FR].


          [urls] Top 10 Urls: 25 July-1 August 2004        
Monday, August 2, 2004
Dateline: China
 
The following is a sampling of my "urls" for the past eight days.  By signing up with Furl (it's free), anyone can subscribe to an e-mail feed of ALL my urls (about 150-350 per week) -- AND limit by subject (e.g., ITO) and/or rating (e.g., articles rated "Very Good" or "Excellent").  It's also possible to receive new urls as an RSS feed.  However, if you'd like to receive a daily feed of my urls but do NOT want to sign up with Furl, I can manually add your name to my daily Furl distribution list.  (And if you want off, I'll promptly remove your e-mail address.)
 
Best new selections (in no particular order):
 
* A Web Services Choreography Scenario for Interoperating Bioinformatics Applications (SUPERB, covering all the bases; might serve as the foundation for a blog posting)
* ICC Report: Software Focus, June 2004 issue (if you're not familiar with this monthly newsletter from Red Herring, it's worth scanning; this particular issue is their "annual" on enterprise software)
* Northeast Asia: Cultural Influences on the U.S. National Security Strategy (this might serve as the basis for a blog posting; EXCELLENT, broad-based review of cultural issues)
* Economics of an Information Intermediary with Aggregation Benefits (think B2B and e-markets, although the implications are wide-ranging)
* How to Increase Your Return on Your Innovation Investment (provides a link to an article published in the current issue of Harvard Business Review; good food for thought)
* Why Mobile Services Fail (insights from Howard Rheingold)
* Anything That Promotes ebXML Is Good (lots of good links; I'm an ebXML advocate, so the tone of this article is one which I fully support)
 
Examples of urls that didn't make my "Top Ten List":
 
> RightNow, Sierra Atlantic Announce Partnership to Deliver Enterprise CRM Integration (a trend in the making; I've talked about this quite a bit, i.e., systems integrators working with utility computing vendors)
 
and many, many more ...
 
Cheers,
 
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China
e-mail: click on http://tinyurl.com/3mbzq (temporary, until Gmail resolves their problems; I haven't been able to access my Gmail messages for the past week)
 
http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
http://tinyurl.com/2r3pa (access to blog content archives in China)
http://tinyurl.com/2azkh (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW)
http://tinyurl.com/2hg2e (AvantGo channel)
 
 
To automatically subscribe click on http://tinyurl.com/388yf .
 

          Introduction        


Hello,
I am a lower-secondary-school teacher of mathematics and science with a passion for new technologies and computing.

I created this site to share all the material I have prepared and set aside over these few years of teaching. On this site you will find downloadable PowerPoint, Notebook, Word and PDF files, plus links to resources useful for teaching.

I have also included some of the activities proposed during the PQM project that strike me as very interesting, even though they cannot be used directly with LIMs.

The materials for the LIM (Lavagna Interattiva Multimediale, the interactive whiteboard) make no claim to being complete or directly applicable to other teaching/learning situations, but they may well prove useful to other teachers who would like to use them as a starting point for further customisation and adaptation.

I also hope some of you will want to share materials of your own. I will be absolutely delighted to host work created by other teachers, so that this site can become a place of exchange open to everyone.

All files have been catalogued by format (PowerPoint, Notebook, Word and PDF, or web link) and then by subject. Naturally, most of the material concerns mathematics and science. I hope to be able to host material on other subjects as soon as possible.

Downloads are free: you just need to register on the site by entering an e-mail address and choosing a password.

Thanks

 



          E-TEAM Education Technical University of Liberec, Czech Republic        


Vincent Nierstrasz, prof. Textile Materials Technology, dept. of Textile Technology, Faculty of Textiles, Engineering and Business


March 12-17 I had the opportunity to teach Textile Biotechnology (a resource-effective way of textile processing) at the Technical University of Liberec (Czech Republic) in the framework of the E-TEAM master program of AUTEX. The University of Borås is an AUTEX member. AUTEX runs its own ambitious textile technology master program, in which the students are hosted at 4 different AUTEX member universities, for a semester each. The professors teaching the different topics each teach for 1 week at the hosting university. This time the host for the 2nd semester is the Faculty of Textile Engineering in Liberec.
I have been teaching in the E-TEAM program since 1999. Teaching in the E-TEAM program is challenging for both students and professors, and has the benefit that you become quite familiar with most AUTEX member universities, which in turn facilitates European research collaborations.


TU Liberec is a medium-sized university with 7 faculties:
  • Faculty of Mechanical Engineering 
  • Faculty of Textile Engineering 
  • Faculty of Science-Humanities and Education 
  • Faculty of Economics 
  • Faculty of Arts and Architecture 
  • Faculty of Mechatronics, Informatics and Inter-Disciplinary Studies
  • Faculty of Health Studies
The Faculty of Textile Engineering was established in 1960 and is, like the Swedish School of Textiles in Borås, the only academic institution in the Czech Republic addressing the whole textile subject: textile materials, technology, marketing and design. The faculty has 6 departments:
  • Textile Technology
  • Nonwovens and nanofibrous materials
  • Clothing Technology
  • Materials Engineering
  • Design
  • Textile Evaluation
In addition to teaching, I had meetings with several staff members from the dept. of Materials Engineering and the dept. of Clothing Technology, as well as with the vice dean, to discuss research at the Faculty of Textile Engineering and in my own research group, in order to strengthen our research collaboration (see e.g. the blog of Sina Seipel, a PhD student in my group who visited TU Liberec in 2016). I also had a meeting with a representative of Inotex, which is specialized in textile biotechnology, to further discuss potential collaboration.

It was an inspiring and busy week.


Vincent Nierstrasz






The Faculty of Textile Technology


Town hall

City centre
 





          Staff mobility at the Technical University of Liberec, Czech Republic         

Sina Seipel - Doctoral Student at the Faculty of Textiles, Engineering and Business, Department of Textile Technology

Dobrý den! (Hello!)

In September I had the opportunity to take part in the Erasmus+ staff mobility program for training. For my exchange I chose the Technical University of Liberec, which was founded in 1953 with a background in mechanical and textile engineering. Today the university has 7 faculties and a total number of approx. 10 000 students. 
  • Faculty of Mechanical Engineering 
  • Faculty of Textile Engineering 
  • Faculty of Science-Humanities and Education 
  • Faculty of Economics 
  • Faculty of Arts and Architecture 
  • Faculty of Mechatronics, Informatics and Inter-Disciplinary Studies
  • Faculty of Health Studies

As I am a doctoral student in the field of Textile Material Technology, I visited the Department of Material Engineering at the Faculty of Textile Engineering to learn more about characterising smart dyes applied to textiles with spectrophotometry. The department is specialised, among other areas, in colorimetry and in the development of textile sensor systems based on smart materials. Its educational and research activities are to a large extent compatible with, and at the same time very much complementary to, the specialisation of my doctoral field. This match made it ideal for me to learn in more depth about spectrophotometry and its application to photochromic textile materials. Throughout my stay I gathered a lot of new knowledge and ideas by job shadowing lab work and by participating in group workshops and presentations together with the professors Martina Viková and Michal Vik, doctoral students of the department and an exchange student from Turkey. Eventually, I also did some hands-on work with the measurement instrument to get practical experience.

Research group meeting
Spectrophotometric measurement of photochromic prints 

The city of Liberec is the former textile center of the Czech Republic and lies in the very north of the country. It is comparable to Borås not only because of its textile background, but also in size and landscape. With around 100 000 inhabitants, Liberec is the third largest city in Bohemia, with only Prague and Plzeň being larger. Liberec is surrounded by mountains to the northeast and southwest, with the river Lusatian Neisse flowing through it. For the textile industry, access to a river was essential for production. The climate in the area was ideal for growing flax, which is why linen weaving mills and cloth manufacturers arose during the 16th century. In the late 19th century, the textile industry and the city's wealth reached their peak. During this time countless villas and the beautiful town hall were built.

Villa area around the university 
Former textile worker accommodation area
Liberec town hall 

The city is also known for the abundant sports activities that the beautiful surrounding nature invites: mountain biking, trail running, paddling, cross-country skiing and much more. One of my first impressions of life in Liberec came as I passed by a nearby lake. The local kayaking club was training for a competition taking place the next weekend, children were swimming in the lake and playing on the beach, and on the shore stood an inviting restaurant where people enjoyed some food and a refreshing drink (in Sweden it is called beer, but in the Czech Republic it is more often referred to as water). I liked the mix of all these leisure activities going on at the same time and felt invited to join in one way or another.

Kayak training at lake Harcov
A proper glass would have made it perfect 

On the weekend I could not hold myself back from doing what I love to do in my free time: compete in running. On a chilly and rainy Sunday, I took part in a mountain race at Liberec's famous landmark and local mountain Ještěd: 20 km and 1350 m of elevation. Unfortunately, the day was too misty for me to enjoy the view when I reached the top, but I was very happy anyway when I crossed the finish line.

Ještěd peak and tower
Start of Ještěd half marathon
Tired but happy me 

Overall, I am glad for the experiences, both educational and cultural, that I had during the three weeks in Liberec. I am thankful for the knowledge, and the new perspectives on my field of doctoral studies, that I gained from the staff at the Faculty of Textile Engineering. This exchange was important for me as a young researcher and teacher, as a way to get to know practices at other educational institutions. Furthermore, I hope that this stay can strengthen collaborations and exchanges between our research teams in the future. And last but not least, I am happy for the personal experience. Staying in a different country, meeting new people and getting to know a new culture is always enriching and not something I would want to miss in life.


Sina Seipel


          4 intelligent thoughts from a deep geek        

(1) If you don't want to pay for an anti-virus program, at least install a free one.
Your PC probably came with a trial version of an anti-virus program that will stop working after a month unless you upgrade to the paid version. Of course you can do that if you want. Especially if you think you might ever want phone tech support for your anti-virus software, I expect support is better for a product you've paid money for.
On the other hand, I know people who thought that if they didn't want to pay for the upgrade to their PC's default anti-virus program, their only option was to let it expire and let their computer run unprotected. If you don't want to pay for a non-free program, install a free one -- Wikipedia has a list of 15 different free or freemium anti-virus products for Windows. PC Magazine gave their "Editor's Choice" award for best free Windows anti-virus to Malwarebytes Anti-Malware 1.70 in 2013 and AVG Anti-Virus Free in 2012, so either of those will work.
(Yes, I know you guys know this. But pass the word on to your Mom or kid brother with the new laptop.)
(2) Save files to a folder that is automatically mirrored to the cloud, for effortless backups.
The era in which everybody talks about backing up, but nobody actually does it, should have ended completely in 2013. Old-style backups, even the incredibly easy options, still mostly required you to stop what you were doing for a minute, connect to a remote server or connect a piece of hardware to your computer, and twiddle your thumbs while waiting for some copy process to execute. So nobody bothered.
With cloud-mirrored folders, there's no excuse any more. I found out about Dropbox by asking a mailing list, "I would really like it if there were an online backup service that let me open and close files from a local folder so that there was no delay, but as soon as I made any changes, would automatically be queued to be backed up over the network to a remote host," and my listmates said, "That already exists." Windows 8 comes with the similar SkyDrive service already built in.
You can read a detailed comparison of Dropbox vs. SkyDrive vs. Google Drive, but the key point is to use one of them to mirror one of your local folders to the cloud, and get into the habit of saving stuff to that folder. Obviously this may not apply to you if you have something special going on (if you're creating large multimedia files that won't fit within the several-gigabyte limit imposed by these services, or if your privacy concerns are great enough that you don't want to back up files online), but it's good enough for most people. The horror stories about people saving months or years of writing, and then losing it all in a hard drive crash, should never happen to anyone again.
(3) Create a non-administrator guest account, in case a friend needs to borrow the computer.
Some of my friends and relatives have no problem telling people, "No, I don't care if you need to check the weather, you can't touch my computer!" But if you can't resist the urge to be helpful if someone needs to borrow your laptop for a few minutes, then eventually one of those people will mess it up somehow -- either by installing a game, or visiting a website that installed malware on your computer, or just changing a system setting that you can't figure out how to change back.
When the day comes when someone needs to borrow your computer, you may be too rushed or might not know how to create an unprivileged non-administrator account that they can log in under. So go ahead and do it when your computer is brand new, while the thought is still fresh in your mind. Then if people who borrow your computer sign in under that account, in almost all cases, nothing that they do while logged in should interfere with your user experience when you log them off and log back in as yourself.
That's not a completely secure solution to stop someone from accessing private files on your computer. (There are many pages describing how to boot up a Windows machine from a Linux CD, in order to access files on the computer -- they are usually described as "disaster recovery" options, but they can also be used to access files on a PC without the password.) However, it will stop most casual users from messing up your computer while they borrow it.
(4) Be aware of your computer's System Restore option as a way of fixing mysterious problems that arose recently.
I say "be aware" because, unlike the other three tips, this may not ever be something that you have to actually do. However, intermediate-level computer users just need to understand what it means: to restore your computer's settings and installed programs to a recently saved snapshot, while leaving your saved files untouched. This means if your computer has started acting funny in the last couple of days, you may be able to fix the problem by restoring to a snapshot that was saved before the problems started.
Intermediate users sometimes confuse this with either (a) restoring files from backup, or (b) doing a system recovery (which generally refers to restoring your computer to the state in which it left the factory). So if you're the techie doing the explaining, make sure they understand the difference. (A system recovery will often fix problems, too, but then of course you'll have to re-install all your software; a system restore is more convenient since it only undoes the most recent system changes.)

          Comment on Informatica Tutorial – Part 2 by sudhakar        
hello thanks for the information. Thanks, Sudhakar
          Computing Course        
For the second year running, a computing course for young adults with Down syndrome will be held at our premises. Last year the course was attended, with excellent results, by seven young people aged between 20 and 25; this year it will be aimed at people between 25 and [...]
          Systems Analyst - IT Support        
Maracaibo, Edo. Zulia - A major company in the paper business is looking for people over 18 to join our paper family, in a teamwork environment. A consumer-retail company located in Zulia, 11 to 50 employees. Read the descrip...
          Agri-Food and Biosciences Institute (AFBI) Northern Ireland Civil Service: Principal Scientific Officer in Quantitative Genetics and Bioinformatics - Belfast        
£47,749 - £52,334: Agri-Food and Biosciences Institute (AFBI) Northern Ireland Civil Service: An experienced Quantitative Geneticist with strong Bioinformatics skills is sought to lead and manage an... Belfast
          Agri-Food and Biosciences Institute (AFBI) Northern Ireland Civil Service: Principal Scientific Officer - Pathogen Genomics Bioinformatician - Belfast        
£47,749 - £52,334: Agri-Food and Biosciences Institute (AFBI) Northern Ireland Civil Service: This new post offers an exciting opportunity for a skilled and experienced Pathogen Genome Bioinformatician to... Belfast
          Celgene: Senior Scientist, Translational and Diagnostic Informatics        
Celgene: Description: Translational Development at Celgene. Celgene Corporation, headquartered in Summit, New Jersey, is an integrated global biopharmaceutical company engaged in the discovery, development and commercialization of novel therapies for the treatment of... Summit, New Jersey (US)
          Google Turns 9...        


Google celebrates its 9th birthday!

That's right! Today Google celebrates 9 years of existence.



Larry Page and Sergey Brin are Google's two founders. Larry Page was born on March 26, 1973 in the United States.
Through his father (a brilliant computer scientist), he grew up in the world of computing.

Like his father, he earned a Bachelor of Science in engineering and computer science at the University of Michigan, one of the most renowned universities (for computer science) in the USA.
It was at Stanford University that he met Sergey Brin.

Sergey Brin was born in Moscow in August 1973.

He emigrated with his family to the United States in 1979.
In 1993 he earned the famous Bachelor of Science in mathematics and computer science at the University of Maryland, then completed a Master's in computer science at Stanford University.

Why the name Google?
The word comes from the term "googol", which denotes a number, a 1 followed by 100 zeros, reflecting the exhaustiveness of the search engine.
"L'informatique au féminin" wishes you a happy birthday.........

          Make Your B2B Marketing 10x More Readable… and Take off Your Straightjacket        
Overview

For our 4th Copywriting Tune-up this month, we return to the Silicon Valley 150. I wanted to get outside my comfort zone, which is the learning domain. There are times when not being the expert is a plus, and this is one of them.

Yours truly knows enough relational database theory to be dangerous and writing dynamic web pages is fun once in a while, but I’m no maven on the enterprise data integration products Informatica offers.

As with many enterprise software companies, the copy on Informatica’s website is stiff. This article proves you can loosen up a bit and still maintain a very respectful corporate tone. Better still, your prospects will respond because you’re talking to them and not some third person academic (no slight to professors or teachers - I’m one myself).

Like our last tune-up, we’ll look at a corporate overview. I consider corporate overviews important because, for many readers, this is their first exposure to the company. This makes it imperative they "put their best foot forward."

Copywriting Tune-up

This tune-up consolidates all of the principles we’ve addressed this month. The challenges are to:

  • Eliminate the passive voice to make it easy to understand

    (for a quick explanation of the passive voice, see my tune-up of the Hewlett Packard white paper on Halo, their collaboration platform)


  • Inject action into the copy so it’s more alive and less like a statue

  • Maintain a corporate tone

Before
After

Informatica Corporate Overview

Informatica Corporation delivers data integration software and services to solve the problem of data fragmentation across disparate systems, helping organizations gain greater business value from all their information assets. Informatica's open, platform-neutral software reduces costs, speeds time to results, and scales to handle data integration projects of any size or complexity. With a proven track record of success, Informatica helps companies and government organizations of all sizes realize the full business potential of their enterprise data. That's why Informatica is known as the data integration company.

Overview: Why Informatica is The Data Integration Company

Solve the problem of fragmented data across disparate systems. Help your organization capture the whole value of its information assets. Reduce costs, speed time to results, and scale for data integration projects of any size or complexity. Use the open, platform-independent solutions of Informatica to make it happen.

Tap into a proven track record of success. Realize the full potential of your enterprise data. This is why organizations of all sizes from every sector trust Informatica to be their "data integration company."

Readability Statistics

The Before snippet is weighed down with ¼ of its sentences in the passive voice. The After snippet switches to active voice and the passage becomes 10 times more readable.

While it may be possible to bring down the grade level some more, given the highly esoteric nature of Informatica products, eliminating jargon could do more harm than good by transforming the piece into training as opposed to selling.

The Headline: Stallion or Statue?

The Before snippet gives us the usual corporate heading and it’s straightforward, for sure. Even in this context, I think the headline should do more than simply label the section it covers. What’s the purpose of an overview in the first place? It must whet the reader’s appetite for more.

I admit, the headline in the After snippet could be catchier and it lacks an action verb. If I were working at Informatica, I’d know who approves this stuff and have an idea of how far to push the limits of "corporate safeness."

Still, this headline performs far more than just labeling. Informatica chose to conclude its overview by referring to itself as "the data integration company." Sounds like an important phrase and maybe it’s a tag line they use elsewhere so, I decided we should get the reader onboard with this notion sooner rather than later.

After all, if your company were branded "the data integration company," you would occupy hallowed ground in the same way Kleenex, Xerox and WebEx do in their niches. Best of all, your competitors would hate you for it.

Treat Readers’ Eyes with Respect

The Before snippet compacts everything into a single paragraph. It’s already a long page with scrolling. No need to get claustrophobic. In fact, given how many headers follow this paragraph, it might have made sense to provide a sub-menu or some in-page links near the top so readers could go directly to their point of interest.

Unlike dead air on radio which can lose listeners instantly, white space on the page arranges information into manageable chunks and supports the reader’s effort to make sense of it. This is complex material – give the brain a chance to catch up with the eye.

Let Readers Catch their Breath – Write Shorter Sentences

The Before snippet immediately bombards readers with lengthy clauses like "delivers data integration software and services to solve the problem of", "data fragmentation across disparate systems" and "helping organizations gain greater business value from…"

Notice the After snippet uses more sentences and that they're shorter. By starting shorter sentences with verbs, we sharpen our focus on a single benefit. Given that few reasonably intelligent people can hold two or more bullet points in their heads at a time, we should avoid packing sentences with lengthy clauses.

Speak from Your Reader’s Point of View Using Action Sentences

Our last tune-up explains how focusing on action forces us to think from the reader’s point of view. In the Before snippet, none of the sentences begin with a verb. The After snippet starts all but one of its sentences with a verb. Doing so forces you, as a writer, to think, "What’s in it for me, the reader?"

Even if this is not a sales letter, it is sales literature and it should promote some call to action whether it’s reading more, entering data into a form, or navigating to another part of the site.

True, the After snippet does not prompt an explicit action but it does accomplish two things. First, it makes a concise yet powerful case for the branding its headline calls out. Second, it creates interest so the reader will read on.

Moreover, action sentences from your reader’s point of view give you license to use the second person voice. Addressing readers with "you" and "your" creates a hotline from your pen to their minds and maybe even their hearts.

Address Readers Using Second Person Voice without Triggering Sales Hype

For some reason, enterprise software companies labor under the pretense they must write in third person and passive voices or risk coming across as wild-eyed hucksters unworthy of further attention.

Thankfully, it’s easy to address your readers directly without losing credibility. The After snippet maintains a corporate tone while using a second person voice throughout.

One little secret to striking this balance – even if you leave out "you" and "your" in a sentence, so long as you start it with an action verb, you’ll achieve what I call, "implied second person voice." This allows for sparing use of "you" and "your."

Implied second person voice with occasional use of "you" and "your" will raise your credibility because readers find your literature easier to understand yet free from sales hype.

Write for Both Kinds of Readers – Scanners and Scrutinizers

Scanners skim the headlines and read a little body copy here and there. Scrutinizers read every line with rapt attention. Satisfying both types of readers makes sub-heads vital to your success.

For both types of readers, sub-heads act as "connective tissue." Scanners want to skim the headline and sub-heads and come away with a meaningful insight into your offer or value proposition. Scrutinizers want continuity as they complete a section of body copy and move on to the next sub-head.

On the Informatica overview page, following the opening paragraph is the sub-head, "Market Leaders Rely on Informatica."

From the scanner’s point of view, the page so far reads, "Informatica Corporate Overview" and "Market Leaders Rely on Informatica." Scanners will view this as lifeless because there are no action verbs or second person voice. Worse still, the two headers have no meaningful connection to each other. They’re nothing more than labels.

From the scrutinizer’s point of view, the Before snippet fares a little better. The last sentence asserts Informatica is the data integration company and then we have the sub-head, "Market Leaders Rely on Informatica." Not tight but not totally disjointed either.

If the After snippet continued on, I would re-write the next sub-head as "Join the Market Leaders Who Rely on Informatica."

Scanners would read, "Overview: Why Informatica is The Data Integration Company" followed by, "Join the Market Leaders Who Rely on Informatica." One head naturally leads into the next. The sub-head starts with an action verb. This "ratchets up" the intensity as we move along. Chances are better a scanner will think, "Hey, I better get on top of this before our competitors do."

Scrutinizers would read "This is why organizations of all sizes from every sector trust Informatica to be their ‘data integration company’" followed by, "Join the Market Leaders Who Rely on Informatica." The flow here is tight. For good measure, scrutinizers will read body copy invoking the word "trust" followed by the sub-head using, "rely." Two very emotion-laden verbs without triggering sales hype.

Evoke More Emotion with your Choice of Words

The Before snippet uses the term "platform-neutral" whereas the After snippet opts for "platform-independent." The term "neutral" is, well, neutral. "Independent" evokes feelings of empowerment. The latter has a far more positive connotation and it reflects better on Informatica.

Never Diminish Thyself

Avoid using your company’s name in the possessive form. In the Before snippet, we read, "Informatica's open, platform-neutral software…" This has the same effect as tilting a movie camera down on its subject – the subject looks diminished because the viewer can "look down" on it.

The After snippet reads, "Use the open, platform-independent solutions of Informatica to make it happen." By placing the item possessed first and following it with "of Informatica," the effect is equivalent to tilting a movie camera up at its subject – the subject looks powerful and important because the viewer must "look up" to it.

Wrap-up

Enterprise software companies need to take off their self-imposed straitjackets when presenting themselves. Sure, one could argue that Informatica, like many enterprise software companies, is doing just fine with stiff copy because its success is a combination of technical innovation, strong management leadership and savvy salespeople in the field.

Then again, clear, crisp copy can make everyone’s job easier: softer landings during lean times and accelerated sales when bull markets run. To me, investing in great copy sounds like buying a call option - you can’t lose any more than you spent to acquire it, and the upside is unlimited.

To your marketing success,

Eric "Rocket" Rosen
Clear Crisp Communications
Tel: 408.506.0719
Fax: 814.253.5142
Email: eric.rosen AT clearcrisp.com
Web: http://www.clearcrisp.com
Blog: http://copywritingtuneup.blogspot.com
ROCKET Response Copywriting Services
Polished Marketing Materials in 24, 48 or 72 Hours

          Pfizer to participate in Human Vaccines Project        

Pfizer has decided to play a part in the Human Vaccines Project, which seeks to develop vaccine protection against infectious diseases such as HIV and dengue fever, as well as cancer.

“Over the last decade, we’ve seen unprecedented technological advances in our understanding of the biology of diseases -- and new tools in designing vaccines including therapeutic vaccines,” Kathrin Jansen, head of vaccines research and development at Pfizer, said. “Yet, the translation from preclinical to clinical vaccine research has often been hampered by a lack of understanding of the desired human immune responses required to obtain optimal vaccine protection. With our strong heritage of translating scientific findings into the development of medicines and vaccines, Pfizer is proud to contribute to the consortium’s research efforts.”

Public and private research groups are teaming up on the project. The groups seek to apply the latest in medical technologies and knowledge to fight infectious diseases.

“We look forward to Pfizer’s contribution to the Human Vaccines Project as we launch an unprecedented public-private partnership in human immunology discovery, to decipher the human immunome and principles of protective immunity, to usher in a new era in global disease prevention,” Wayne Koff, founder of the Human Vaccines Project, said. “The human immune system holds the key to preventing and controlling a broad spectrum of infectious diseases, cancers, autoimmune diseases and allergies. By bringing together leading vaccine researchers, institutions and biopharmaceutical companies -- and harnessing recent technological advances in molecular and cellular biology and bioinformatics -- the project may potentially enable accelerated development of vaccines and immunotherapies for some of the most devastating diseases of our time.”


          FOCA Metadata Analysis Tool        
Written by Pranshu Bajpai

Foca is an easy-to-use GUI tool for Windows that automates the process of searching a website, grabbing documents and extracting information. Foca also helps structure and store the metadata it reveals. Here we explore the importance of Foca for penetration testers.


Figure 1: Foca ‘New Project’ Window


Penetration testers are well-versed in using every bit of information to construct sophisticated attacks in later phases. This information is collected in the ‘reconnaissance’ or ‘information gathering’ phase of the penetration test, and a variety of tools assist in it. One such tool is Foca.

Documents are commonly found on websites, created by internal users for a variety of purposes. Releasing such public documents is common practice, and few think twice before doing so. However, these documents contain important metadata such as the creator of the document, the date it was written and the software used to create it. To a black-hat hacker looking to compromise systems, this may provide crucial information about the internal users and the software deployed within the organization.

What is this ‘metadata’ and why would we be interested in it?
The one-line definition of metadata is "a set of data that describes and gives information about other data". So when a document is created, its metadata would include the name of the user who created it, the time it was created, the time it was last modified, the folder path and so on. As penetration testers we are interested in metadata because we like to collect all possible information before proceeding with the attack. A saying often attributed to Abraham Lincoln goes, "Give me six hours to chop down a tree and I will spend the first four sharpening the axe". Metadata analysis is part of the penetration tester’s act of sharpening the axe: it can reveal the internal users, their emails, their software and much more.
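To make this concrete, here is a minimal Python sketch (not part of Foca itself) showing how such metadata can be read from a .docx file: Office documents are ZIP archives, and the author fields live in docProps/core.xml. The file contents and names below are invented for illustration.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def read_core_properties(docx_bytes):
    """Return author fields from a .docx file's docProps/core.xml.

    Office documents (.docx, .xlsx, .pptx) are ZIP archives; the
    document metadata lives in docProps/core.xml.
    """
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    return {
        "creator": root.findtext("dc:creator", default="", namespaces=NS),
        "lastModifiedBy": root.findtext("cp:lastModifiedBy", default="",
                                        namespaces=NS),
    }

# Build a minimal stand-in .docx (a ZIP holding only core.xml) for the demo.
core_xml = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:creator>alice.smith</dc:creator>'
    '<cp:lastModifiedBy>bob.jones</cp:lastModifiedBy>'
    '</cp:coreProperties>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("docProps/core.xml", core_xml)

print(read_core_properties(buf.getvalue()))
# {'creator': 'alice.smith', 'lastModifiedBy': 'bob.jones'}
```

Internal usernames recovered this way (here, ‘alice.smith’ and ‘bob.jones’) are exactly the kind of detail an attacker profiles during reconnaissance.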

Gathering Metadata
As shown in Figure 1, Foca organizes work into projects, each relating to a particular domain. So if, as a pen tester, you frequently analyze metadata from several domains, it can all be stored in an orderly fashion. Foca lets you crawl Google, Bing and Exalead looking for publicly listed documents (Figure 2).


Figure 2: Foca searching for documents online as well as detecting insecure methods
 You can discover the following types of documents:
DOC
DOCX
PPT
PPTX
XLS
XLSX
SXW
SXI
ODT
PPSX
PPS
SXC

Once the documents are listed, you have to explicitly ‘Download All’ (Figure 3).


Figure 3: Downloading Documents to a Local Drive
 Once you have the documents on your local drive, you can ‘Extract All Metadata’ (Figure 4).

Figure 4: Extracting All Metadata from the downloaded documents
This metadata is stored under appropriate tabs in Foca. For example, the ‘Documents’ tab holds the list of all the documents collected, further classified into ‘doc’, ‘docx’, ‘pdf’ etc. After extracting metadata, you can see numbers next to ‘Users’, ‘Folders’, ‘Software’, ‘Emails’ and ‘Passwords’ (Figure 5). These numbers depend on how much metadata the documents have revealed. If the documents were part of a database, you would also find important information about it, such as the name of the database, the tables it contains and the columns in those tables.


Figure 5: Foca showing the ‘numbers’ related to Metadata collected


Figure 6: Metadata reveals Software being used internally
Such information can be employed during attacks. For example, users can be profiled and the corresponding names tried as usernames on login panels. Another example is discovering the exact software version in use internally and then trying to exploit a weakness in that version, either over the network or through social engineering (Figure 6).
At the same time, Foca employs fuzzing techniques to look for insecure methods (Figure 2).
Clearly, information that should stay within the organization is leaving it without the administrators’ knowledge. This may prove to be a critical security flaw; it’s just a matter of who understands the importance of this information and how to misuse it.
So Can Foca Tell Us Something About the Network?
Yes, and this is one of the best features in Foca. Based on the metadata in the documents, Foca attempts to map the network for you. This can be a huge bonus for pen testers: understanding the network is crucial, especially in black-box penetration tests.

Figure 7: Network Mapping using Foca
As seen in Figure 7, a lot of network information may be revealed by Foca. A skilled attacker can leverage this information to his advantage and cause a variety of security problems. For example, ‘DNS Snoop’ in Foca can be used to determine which websites the internal users visit and at what times.
So Is Foca Perfect for Metadata Analysis?
There are other metadata analyzers out there, such as Metagoofil, CeWL and libextractor. However, Foca stands out, mainly because of its very easy-to-use interface and the orderly way it organizes information. Pen testers work every day with a variety of command-line tools, and while they enjoy the smoothness of working in a shell, they still appreciate a stable GUI tool that automates things for them. Foca is exactly that.
However, Foca has not been released for Linux and works under Windows only, which may be a drawback for penetration testers because many of us prefer working on Linux. The creators of Foca joked about this at DEF CON 18: "Foca does not support Linux, whose symbol is a penguin. Foca (a seal) eats penguins."

Protection Against Such Inadvertent Information Compromise
Clearly, the public release of documents on websites is essential. The solution lies in making sure that such documents do not cough up critical information about systems, software and users, so documents should be analyzed internally before release on the web. Foca can import and analyze local documents as well. It is wise to first extract and remove the metadata contained in documents before publishing them, using a tool such as OOMetaExtractor. Also, a plugin called IIS MetaShield Protector can be installed on your server to clean each document of all metadata before the server serves it.
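As a sketch of that local scrubbing step (a hypothetical stand-in for what tools like OOMetaExtractor do, not their actual code), the following Python rewrites an Office ZIP with an emptied docProps/core.xml, so author names and edit history never leave the organization:

```python
import io
import zipfile

# A core.xml with no author, editor or timestamp fields.
EMPTY_CORE = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<cp:coreProperties xmlns:cp='
    '"http://schemas.openxmlformats.org/package/2006/metadata/core-properties"/>'
)

def strip_core_properties(docx_bytes):
    """Return a copy of an Office ZIP whose docProps/core.xml is
    replaced by an empty core-properties element."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "docProps/core.xml":
                data = EMPTY_CORE.encode("utf-8")
            dst.writestr(item.filename, data)  # copy everything else as-is
    return out.getvalue()

# Demo: build a minimal stand-in .docx, scrub it, confirm the author is gone.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("docProps/core.xml",
               '<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/'
               'package/2006/metadata/core-properties">'
               '<cp:lastModifiedBy>alice.smith</cp:lastModifiedBy>'
               '</cp:coreProperties>')
    z.writestr("word/document.xml", "<w:document/>")

clean = strip_core_properties(buf.getvalue())
with zipfile.ZipFile(io.BytesIO(clean)) as z:
    print(b"alice.smith" in z.read("docProps/core.xml"))   # False
    print("word/document.xml" in z.namelist())             # True
```

Note this only addresses core.xml; real documents also carry metadata in app.xml, revision history and embedded objects, which dedicated tools handle.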

Summary

Like many security tools, Foca can be used for good or bad; it depends on who extracts the required information first, the administrator or the attacker. Ideally an administrator would not only analyze documents locally before release, but also go a step further and implement a security policy within the organization to make sure such metadata content is minimized (or falsified). It is surprising how the power of the information contained in metadata has been belittled and ignored. One reason may be that there are more direct threats to security on which administrators would rather focus their attention than small bits of information in metadata. But remember: if hackers have the patience to go dumpster diving, they will surely go in for metadata analysis, and an administrator’s ignorance is the hacker’s bliss.

On the Web


●                     http://www.informatica64.com/ – Foca official website


          Our ongoing commitment to support computer science educators in Europe        

The need for employees with computer science (CS) and coding skills is steadily increasing in Europe, growing by 4 percent every year between 2006 and 2016, according to DigitalEurope. But educators are struggling to keep up with the demand, often because they lack the professional development, confidence and resources to teach their students successfully.

Because of these challenges, we’re working to increase the availability of quality computer science education and access to CS skills by empowering CS teachers globally. We’ve recently launched new support in Europe, the Middle East and Africa through CS4HS, a program to fund universities and nonprofits designing and delivering rigorous computer science professional development for teachers.

We’re excited to be working with 79 organizations worldwide, 28 of them in the EMEA region, that are committed to increasing the technical and teaching skills of educators and to building communities of ongoing learning. We believe these organizations will deliver high-quality teacher professional development programs with a deep impact in their local communities and strong potential to increase their reach.


Growing the community of computer science educators  

Over the past 10 years, CS4HS has contributed $10 million to professional development (PD) providers around the world to help develop and empower teachers. These include Catrobat, a non-profit initiative based at Graz University of Technology in Austria that created a free online course for students and teachers, and the University of Wolverhampton, which created a free MOOC to help teachers of computing teach programming under the new computing syllabuses in England.

We’re excited to support new and future CS educators around the world. Even though computer science is a relatively new discipline for most schools, the enthusiasm is growing and teachers have a critical role to play in fueling their students’ interest and participation. These grants will help universities and nonprofits reach educators with PD opportunities that enhance their CS and technical skills development, improve their confidence in the classroom, and provide leadership training so that they can be advocates for CS education in their communities.

2017 awardees in EMEA

Asociatia Techsoup Romania

Ideodromio, Cyprus

Università degli Studi di Milano, Dipartimento di Informatica, Italy

Lithuanian Computer Society

Dublin City University, Ireland

Adam Mickiewicz University, Poland

EduACT, Greece

Graz University of Technology, Austria

University of Ljubljana, Slovenia

Asociatia Tech Lounge, Romania

Association Rural Internet Access Points, Lithuania

University of Wolverhampton, UK

Universidad de Granada, Spain

University UMK Toruń, Poland

Hasselt University, Belgium

Jednota školských informatiků, Czech Republic

University of Lille - Science and Technology, France

University of Roehampton, UK

University of Urbino, Italy

ETH Zürich, Switzerland

Vattenhallen Science Center, Lund University, Sweden

University College of Applied Sciences, Palestine

Hapa Foundation, Ghana

Let’s Get Ready, Cameroon

Swaziland Foundation for STEM Education

Laikipia University, Kenya

Mobile4Senegal

Peo Ya Phetogo in partnership with University of the Western Cape & Mozilla Foundation, South Africa

To discover more about CS opportunities near you, explore our educator resources, student programs and resources, and tools.



          Friedrich Kittler's Technosublime        
by
Bruce Clarke
1999-12-30

In the 1970s a number of texts came into English translation bearing titles with a 1-2-3 punch, mixing exemplary authors with generic modes and methodological issues; for instance, Roland Barthes’s Sade, Fourier, Loyola and Image, Music, Text, containing the essays “Diderot, Brecht, Eisenstein” and “Writers, Intellectuals, Teachers,” and Michel Foucault’s Language, Counter-Memory, Practice with the essay “Nietzsche, Genealogy, History.” The title Gramophone, Film, Typewriter, a cognate translation of the German, echoes these theoretical signatures. Matt Kirschenbaum tries his hand at this in “Media, Genealogy, History,” his review of Bolter and Grusin.

In Gramophone, Film, Typewriter Kittler contrasts the restriction of Foucault’s discourse theory to textual archives with his own wider media band, in which phonographic and cinematic data streams decenter the channel of literary writing. But his commentators agree that Kittler’s “media discourse theory” follows from Foucault as the prime member of the triumvirate Foucault, Lacan, Derrida. Lacan runs a close second. Kittler writes: “Lacan was the first (and last) writer whose book titles only described positions in the media system. The writings were called Writings, the seminars, Seminar, the radio interview, Radiophonie, and the TV broadcast, Television ” (170). Gramophone, Film, Typewriter partakes of this same postsymbolic media literalism.

I write about Kittler from the standpoint of a scholar of British and American literature who dropped from the tree of Columbia’s core humanities curriculum to the seed-bed of canonical romanticism and modernism and the theory culture of the 1970s and 1980s, then passed through the forcing house of literature and science in the 1990s, to arrive at the threshold of contemporary media studies. In the process I seem to have become posthuman, but Kittler’s work reassures me that I had no choice in the matter: “media determine our situation” (xxxix). Kittler parlays high poststructuralism into a historical media theory that humbles the subject of humanistic hermeneutics by interpellation into the discrete material channels of communication. Media studies bids to become a hegemonic site within the new academic order of a wired culture. For Kittler, media determine our posthumanity and have been doing so in technological earnest at least since the phonograph broke the storage monopoly of writing.

As a kind of media theory of History, a requiem and good-riddance for the era of so-called Man, Gramophone, Film, Typewriter transmits the tenor of its own historical moment. The German edition appeared in 1986, the year after the opening of MIT’s Media Lab and the release of Talking Heads’ post-hermeneutic concert film and album Stop Making Sense. Other resonant events in American culture include the publication of William Gibson’s Neuromancer (1984), Donna Haraway’s Manifesto for Cyborgs (1985), and Octavia Butler’s Xenogenesis trilogy (1987-89). Memories and premonitions of mushroom clouds loomed over these three speculative and/or scholarly scenarios published during the final decade of the Cold War; each text imagines the form of a posthuman or post-nuclear world. Gramophone, Film, Typewriter posits its posthumanity on the premise that the Strategic Defense Initiative has already set off the fireworks, that the future is always already a prequel to Star Wars. The text begins with the observation that optical fiber networks are “immune…to the bomb. As is well known, nuclear blasts send an electromagnetic pulse (EMP) through the usual copper cables, which would infect all connected computers” (1), and the book ends with before-and-after photos of Hiroshima (262).

Many of Kittler’s sublime effects result from a kind of hyperbolic digitality, i.e., all-or-nothing assertions pressing seemingly local instances into global histories. For instance, Kittler is fond of audacious chronologies that parody the popular media’s demand for appearances of journalistic exactitude: “around 1880 poetry turned into literature” (14), or “around 1900, love’s wholeness disintegrates into the partial objects of particular drives” (70). One thinks of Virginia Woolf’s famous dictum: “in or about December, 1910, human character changed,” and, thanks to Kittler, perhaps now we know why. A related rhetorical scheme mediating the grand transformations of modernism is the from/to formation: “literature defects from erotics to stochastics, from red lips to white noise” (51), or as combined with an audacious chronology: “from imagination to data processing, from the arts to the particulars of information technology and physiology - that is the historic shift of 1900” (73). Again, and as the volume is coming to a conclusion with the arrival of Turing’s universal computer, “the hypothetical determinism of a Laplacian universe, with its humanist loopholes (1795), was replaced by the factual predictability of finite-state machines” (245).

Kittler wrote Gramophone, Film, Typewriter just as chaos theory was arriving to throw a wrench into such stark digital determinism, precisely through the operational finitude as well as non-linear iterations of “finite-state machines.” As John von Neumann pointed out in 1948 in “The General and Logical Theory of Automata,” digital computers could produce perfect results, “as long as the operation of each component produced only fluctuations within its preassigned tolerance limits” (294). But, von Neumann continued, even so, computational error is reintroduced by the lack of the infinite digits necessary to carry out all calculations with perfect precision. Kittler melodramatizes Turing’s work, it seems to me, because he is captivated by the towering image of an informatic colossus.

Such an all-determining and inescapable imago of media induces a productive critical paranoia. The media are always already watching us, putting their needles into our veins: “humans change their position - they turn from the agency of writing to become an inscription surface” (210). Neuromancer’s Wintermute is everywhere, or as Kittler phrases it, “data flows…are disappearing into black holes and…bidding us farewell on their way to nameless high commands” (xxxix). At the same time, he enables one to see the particular and pandemic pathologies of modern paranoia precisely as psychic effects driven by the panoptic reach of media technologies in their surveillance and punishment modes. Not for nothing is the apocalypse according to Schreber’s Memoirs a prophetic book of prominent proportions in Kittler’s media cosmos.

In Gramophone, Film, Typewriter the objects of science are subsumed into the will-to-power of media technology. By way of contrast, despite his coinage of “technoscience” to underscore the sociological inextricability of the two, Bruno Latour sorts science and technology into separate treatments and preserves their disciplinary and epistemological distinctions. Yet one should not see Kittler falling under Latour’s blanket indictment of (Baudrillardian) postmodernism: “Instead of moving on to empirical studies of the networks that give meaning to the work of purification it denounces, postmodernism rejects all empirical work as illusory and deceptively scientistic” (Latour 46). Kittler busts open the realm of the real to examine the nonsymbolic and nonimaginary residues of communication technology, all that which cannot be posted: “Bodies themselves generate noise. And the impossible real transpires” (Kittler 46). Where Latour finds the proliferating quasi-objects of mediation, Kittler finds the literal networks of communications media.

For the most part Kittler elides the history of physics concurrent with his media history - the cross-over from late-classical determinism to statistical mechanics, from thermodynamic entropy to information entropy. On the one hand, he scants the ether and the electromagnetic field theories which made possible many developments from analog to digital processing, and from pre-electrical storage technology (photography, phonography) to broadcast transmission (radio, television), electronic storage and manipulation (tape deck, video camera), and digital computation (microprocessor, fiber optic cable) technologies. But on the other hand, that lacuna has opened the door for major efforts among Kittler’s German and American scholarly associates, including the editors of Stanford’s Writing Science series, who have both midwived Kittler’s delivery into North American discourse and paralleled Kittler’s media emphasis with research projects that bring to science studies a thoroughgoing “materiality of communication.”

“Once the technological differentiation of optics, acoustics, and writing exploded Gutenberg’s writing monopoly around 1880, the fabrication of so-called Man became possible” (16). I take it that the “fabrication” in question here is not the discursive construction of the humanist subject but the simulation of its spiritual activities by media devices. One notes Kittler’s detour around physics in the continuation of this passage: so-called Man’s “essence escapes into apparatuses…. And with this differentiation - and not with steam engines and railroads - a clear division occurs between matter and information, the real and the symbolic” (16). Missing from this formulation is the mode of energy, which would correspond by structural default to the Lacanian register of the imaginary. Indeed, Kittler runs up against numerous phantasmagorias of energy, but elides them by metonymic reification in media receivers and inscription devices.

The phantasmagorias of energy I have in mind are those that emanated from the nineteenth-century wave theories connecting the physics of optics and acoustics through an analogy between vibratory media - the air and the luminiferous ether. As sanctioned by the first law of thermodynamics, i.e., the conservation and interconvertibility of energy, the optical imaginary of ether waves is easily displaced to sound waves propagated through the air. We see this concatenation and transposition of physical and technological media in a delightful short story by Salomo Friedlaender, “Goethe Speaks into the Phonograph” (1916), which Kittler republishes in its entirety.

Friedlaender’s comic narrator unveils the thoughts of Professor Abnossah Pschorr, Edisonian inventor-extraordinaire of media gadgetry: “When Goethe spoke, his voice produced vibrations…. These vibrations encounter obstacles and are reflected, resulting in a to and fro which becomes weaker in the passage of time but which does not actually cease” (60). Pschorr extends to the air trapped in Goethe’s study a hypothetical characteristic much discussed in the late nineteenth-century popularization of the ether, its cosmic storage capacity. For instance, in 1875 British thermodynamicists Balfour Stewart and P. G. Tait wrote that the luminiferous ether

may only be an arrangement in virtue of which our universe keeps up a memory of the past at the expense of the present…. A picture of the sun may be said to be travelling through space with an inconceivable velocity, and, in fact, continual photographs of all occurrences are thus produced and retained. A large portion of the energy of the universe may thus be said to be invested in such pictures (156).

While rehearsing the same imaginary accessing of physical (as opposed to technological) media archives, Kittler leaves unmentioned the contemporary vogue connecting the spirits of the dead to the storage and transmission capacities of the luminiferous ether. Kittler cites from another (unnamed) Friedlaender story the assertion that “all the waves of all bygone events are still oscillating in space…. All that happens falls into accidental, unintentional receivers. It is stored, photographed, and phonographed by nature itself,” and comments, “Loyally and deliriously, Friedlaender’s philosophy follows in the wake of media technology” (77). But it also follows from prior scientistic anticipations of new storage capacities projected onto the ether medium. In an 1884 discussion of ether as a surface that forms at the interface of the third and fourth dimensions of space, hyperspace theorist Charles Howard Hinton completed this technoscientific circuit by conceiving the ether medium itself as a cosmic phonograph:

For suppose the æther, instead of being perfectly smooth, to be corrugated, and to have all manner of definite marks and furrows. Then the earth, coming in its course round the sun on this corrugated surface would behave exactly like the phonograph behaves. In the case of the phonograph the indented metal sheet is moved past the metal point attached to the membrane. In the case of the earth it is the indented æther which remains still while the material earth slips along it. Corresponding to each of the marks in the æther there would be a movement of matter, and the consistency and laws of the movements of matter would depend on the predetermined disposition of the furrows and indentations of the solid surface along which it slips (196-97).

My point is that the multiplicity of the concept of “media” extends beyond its particular technological instantiations to include both scientific and spiritualistic registers. A history of media could concern itself as well with the luminiferous ether and the Anima Mundi, the subtle fluids and strange angels that intermingled with the departed souls and trick shots of phonography and cinema; but for the most part, Kittler displaces this business to premodernist media:

the invention of the Morse alphabet in 1837 was promptly followed by the tapping specters of spiritistic seances sending their messages from the realm of the dead. Promptly as well, photographic plates - even and especially those taken with the camera shutter closed - furnished reproductions of ghosts or specters (12).

The telegraph and daguerreotype remain outside Gramophone, Film, Typewriter’s primary historical field. Even here, however, the Kittler effect opens up research corridors by insisting on the material basis, and thus empirical examinability, of the media that mediate the cultural imaginary: “The realm of the dead is as extensive as the storage and transmission capabilities of a given culture” (13).

Beyond that I have nothing but admiration for this volume. Kittler’s fundamental derivation of Lacan’s real, imaginary, and symbolic from the data channels of phonograph, cinema, and typewriter is an astonishing theoretical event. It offers a comprehensive reading of psychoanalysis into technoscience that grows more convincing the more one gets acclimated to Kittler’s methods of channel processing across the cybernetic bridge from the nervous system and its “psychic apparatus” to the Aufschreibesysteme of his media discourse networks. In this reading, the hallucinatory powers and spiritual effects of literature derived from a storage-and-transmission monopoly that could only funnel and traduce the real and the imaginary into the narrow band of the symbolic. As the translators remark in their excellent Introduction: “in short, people were programmed to operate upon media in ways that enabled them to elide the materialities of communication” (xxii). It is both exhilarating and disquieting to submit to Kittler’s deprogramming. But the institutional regimes that sustained the privileges of literary discourse networks (and of us who still inhabit them) are increasingly caught up in the media transformations Kittler describes. The daemonic angel of our history is being driven by the electronic differentiation and digital reintegration of data flows.

At another level, Kittler passes on a wealth of useful engineering expertise: matters of time-axis manipulation from Edison to Jimi Hendrix; the historical mathematics of music and sound: “Overtones are frequencies…. Intervals and chords, by contrast, were ratios” (24); the non-negligible difference between a phonograph and a gramophone (the latter is restricted to playback, the former also records); the physical differences between acoustic and optical waves, such that “cuts stood at the beginning of visual data processing but entered acoustic data processing only at the end” (117-18); the reasons why the first mass-produced typewriters were developed for blind people by arms manufacturers; the pervasive loops between warfare and media, e.g., the revolving cylinder that unites typewriters, film-projectors, and machine-guns, and the collusion of the piano, the typewriter, and Turing’s universal computer; the enigmas of the Enigma machine.

And then, in the midst of this media mayhem, a canny persuasion - a literary core of archival gems. In addition to valuable translations of Friedlaender’s “Goethe Speaks into the Phonograph” and “Fata Morgana Machine” (which limns the eversion of virtual reality eighty years before Marcos Novak), this volume also contains complete texts of Jean-Marie Guyau’s “Memory and Phonograph” (1880); Rilke’s amazing meditation on the phonograph, “Primal Sound” (1919); Maurice Renard’s audio phantasms in “Death and the Shell” (1907); the sonnet “ ‘Radio Wave,’ which the factory carpenter Karl August Düppengiesser of Stolberg submitted to Radio Cologne in 1928”; Richard A. Bermann’s spoof of the sex war between male poets and female typists “Lyre and Typewriter” (1913); and Carl Schmitt’s facetious but telling “world history of inscription,” “The Buribunks: A Historico-Philosophical Meditation” (1918).

In sum, ranging over literature, music and opera from Wagner to acid rock, philosophy, cinema, psychoanalysis classical and structural, history, mathematics, communications technology, and computer science, Kittler’s broadband scholarly panoptics afford a sublime techno-discursive vista, and in particular a point of lucid observation on the ongoing relativization of literary production. Kittler transposes Kant’s mathematical sublime into the mechanical transcendence of communications technology over individual subjects, displacing human psychology into machine being, setting off repeated implosions by which so-called Man is apocalypsed into infinite media loops. His high-prophetic meld of Lacan’s laconism and Zarathustra’s hammer facilitates a neuromantic network of discursive intensities. Under the conditions of technological mediation, however, theory remains viable, or inevitable. Ineluctably funneled through the “bottleneck of the signifier” (4) but pieced out with a tremendous portfolio of period graphics, Kittler’s illuminated writings operate a machine aesthetic tooled to the posthumanist discursivities of his intellectual heroes, but going beyond them to place the stylus of technology on the groove of inscripted bodies.

—————————————————————

works cited

Butler, Octavia. Xenogenesis: Dawn, Adulthood Rites, Imago. New York: Warner, 1987-89.

Gibson, William. Neuromancer. New York: Ace, 1984.

Gumbrecht, Hans Ulrich, and K. Ludwig Pfeiffer, eds. Materialities of Communication. Trans. William Whobrey. Stanford: Stanford UP, 1994.

Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.” Socialist Review 80 (1985): 65-108.

Hinton, Charles Howard. “A Picture of Our Universe.” Scientific Romances, 1st series. London: Swan Sonnenschein, 1886; reprint 1st and 2nd series. New York: Arno Press, 1976. 1:161-204.

Johnston, John. “Friedrich Kittler: Media Theory After Poststructuralism.” Kittler 2-26.

Kittler, Friedrich. Literature, Media, Information Systems: Essays. Ed. John Johnston. Amsterdam: G+B Arts International, 1997.

Latour, Bruno. We Have Never Been Modern. Trans. Catherine Porter. Cambridge, MA: Harvard UP, 1993.

Lenoir, Timothy, ed. Inscribing Science: Scientific Texts and the Materiality of Communication. Stanford: Stanford UP, 1998.

Schreber, Daniel Paul. Memoirs of my Nervous Illness. Cambridge, MA: Harvard UP, 1988.

Stewart, Balfour, and P.G. Tait. The Unseen Universe or Physical Speculations on a Future State. New York: Macmillan, 1875.

Wellbery, David E. Foreword. Friedrich A. Kittler. Discourse Networks 1800/1900. Trans. Michael Metteer and Chris Cullens. Stanford: Stanford UP, 1990. vii-xxxiii.

Winthrop-Young, Geoffrey, and Michael Wutz. “Translators’ Introduction: Friedrich Kittler and Media Discourse Analysis.” In Kittler, Gramophone, Film, Typewriter, xi-xxxviii.

von Neumann, John. The General and Logical Theory of Automata. In Collected Works. Ed. A. H. Taub. 5 vols. New York: Pergamon Press, 1961-63. 5:288-328.

Woolf, Virginia. “Mr. Bennett and Mrs. Brown.” The Gender of Modernism. Ed. Bonnie Kime Scott. Bloomington: Indiana UP, 1990. 634-41.


          Downloading safely from eMule, avoiding fakes and spy servers!        
eMule - who doesn't know it! The open-source software for downloading (in a completely illegal way - software piracy) music, films, programs, video games and much more. Very often, though, the downloaded file turns out not to be what we wanted but a fake, so here is a guide to downloading safely from eMule: open eMule and click the orange gear wheel labelled 'Options'; click the tab labelled 'Server'; in the field 'Remove inactive servers' [...]
          Technology News        

telephony
Android takes the largest market share (photo: afp)
Smartphones accounted for 51.8% of total device sales

telephony
The Moto X is a customizable phone (photo: afp)
It is available on the company's website for 15 euros

social networks
The change affects the "tweet" publishing box

social networks
The company acknowledges in a court filing that users should have no "reasonable expectation" that their communications are confidential

social networks
A user checks Facebook on a mobile phone (photo: reuters)
Using Facebook undermines users' happiness
A University of Michigan study found that the more people used Facebook, the worse they felt afterwards

social networks
BreweryMap is a service that aims to build a database of breweries around the world

          The Maya Prophecies and Crop Circles - Extraordinary Connections        

Thanks to Marian Matei (the Dezvaluiri BLOG, www.antiiluzii.blogspot.com)

"The Maya calendar lies at the centre of a cultural phenomenon. For some, 2012 will be the end of the world; for others it carries the promise of a new beginning; and for others still, 2012 is the explanation for certain troubling realities - such as climate change - that seem to be spinning out of control for unknown reasons."

                                              The New York Times, 1 July 2007

As a final New Year's gift, I offer you this multi-award-winning mini-documentary - International UFO Congress EBE Awards: Best Documentary, Best Historical Documentary and People's Choice Award - made by Jaime Maussan, host of a UFO programme broadcast on Mexican television in a prime-time slot for the past 20 years, who discovered that the coded messages in the so-called "crop circles" of recent years bear the signature of the enigmatic Maya civilization, and also appear to point to the much-disputed year 2012, when Kukulcan - or Quetzalcoatl to the Aztecs - will return among us.

What are all these symbols trying to tell us - symbols that many people, and more and more scholars, are beginning to believe form a structured system of codes spanning vast stretches of history and even of space? In recent years, with the help of ever more advanced information technology and the tireless work of a growing number of brilliant researchers, many dots of the great enigma called antiquity have been connected, deciphering messages that seem to predict an exceptionally bright future for the planet: entry into a new Golden Age and confirmation of humanity's extraterrestrial origins.

Having correctly predicted the outburst of Comet 17P/Holmes in October 2007 - when its light output increased by a factor of 500,000 and it briefly became the largest object in the solar system, visible even to the naked eye - these invisible makers of crop circles were expected to turn their attention to astronomical events close to 21 December 2012.

Two such images seem to point directly to astronomical events due then: the one at Wayland's Smithy from 2006 - suggesting that the rays of a powerful explosion deep in space will reach Earth around March 2013 - and the one at West Kennett from 2006, suggesting that powerful bursts of energy from the centre of the galaxy will reach Earth around 14 December 2012, at new moon and with a comet inside our solar system.

Lately, the work of decoding the mysteries of these figures in the British fields seems to have taken a great leap forward; yet despite the obvious correlations with certain astronomical events, such as the comet mentioned above or the 1994 impact of Shoemaker-Levy 9 on Jupiter, the academic community remains as frozen as ever in an aggressively defiant ignorance of inquisitorial pedigree, the triad being completed by the servile mass media and, of course, the governments.

Unfortunately, the sceptics are not few, and they keep asserting and asking: "That is impossible! How could hundreds of people know these results only privately, without a single one of these important facts being broadcast in the major mass media, where everyone could study them, given that the information on the Internet is unreliable and contradictory?"

The naivety of the great majority of people is almost boundless if they still believe that an industry built entirely on bread and circuses - one that cares only for intimidation tactics, publicity for the most incompetent of beings, the politicians, and the latest scandal, the more abject and embarrassing the better - can display an integrity simply incompatible with its reason for being.

This film answers that distrust and indifference masterfully, through the research of a team made up of Winston Keech, Gary King and Terje Toftenes (maker of the provocative documentary The Day Before Disclosure, already posted on this blog), who filmed the appearance, out of nowhere, of a figure over 300 metres long in East Field, near Avebury, on the night of 7 July 2007.

How did the mass media react?

While a manele singer with the dentition of a retired shark can cause a sensation by, say, parading his physique through the French magazines, as if he were anything other than a contemptible nonentity of this world, a possibly epoch-making event - one that removes many of the doubts cast on the origin of crop circles by the hoaxers Doug Bower and Dave Chorley - earned a brief note in a local Wiltshire newspaper and an even smaller mention on a regional BBC website.

Without the commendable effort of that exceptional journalist Jaime Maussan, whose initiative took shape in this mini-documentary, this discovery might well have sunk into oblivion.

One perhaps unexpected aspect of the phenomenon is the "Maya connection": some of the figures that appear each year seem to take up old themes from Maya and Aztec mythology.

 

"When European scholars began to decode the ancient Maya hieroglyphs, one of the first successes was understanding the numerical system used by the Maya and their system of calendars, based on the apparent movements of the Moon, the Sun and the planet Venus. In short, mathematics and science provided the foundation for a new form of communication, just as SETI scientists predict will happen in the case of interstellar communications."

www.seti.org, "Decoding E.T.: in search of a cosmic Rosetta stone"

The solar-Venusian calendars used long ago by both Mesoamerican civilizations, the Maya and the Aztec - together with their famous Long Count calendar - have recently appeared in the English fields, near Silbury Hill in 2004 and near Woolstone Hill in 2005, remarkably at a significant moment, right at the end of a 5,125-year Maya cycle.

The Long Count ends on 21 December 2012, marking the close of the fifth solar age, begun on 13 August 3114 BCE, while the solar-Venusian calendar ends on 28 March 2013, marking the close of the 52-year cycle begun on 10 April 1961 and, with it, the entry into the sixth solar cycle of more than 5,000 years.
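As an aside, the 52-year figure above is easy to check with plain date arithmetic: the Maya 52-year round is 52 × 365 = 18,980 days, and counting that many days forward from 10 April 1961 does land on 28 March 2013. A minimal sketch in Python (the variable names are mine, not from the post):

```python
from datetime import date, timedelta

# The Maya "calendar round" realigns the 365-day haab with the
# 260-day tzolkin every 52 haab years: 52 * 365 = 18,980 days.
CALENDAR_ROUND_DAYS = 52 * 365

start = date(1961, 4, 10)   # start date claimed in the post
end = start + timedelta(days=CALENDAR_ROUND_DAYS)
print(end)                  # 2013-03-28
```

Note that 52 haab years are about 13 days shorter than 52 Gregorian years, which is why the end date falls in late March rather than on 10 April.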

Watch the documentary here.


          The twin circles at Wickham Green        

On Friday, 30 July 2010, the appearance of two pictograms was reported at Wickham Green (north and south of the M4 motorway), near Hungerford, Berkshire, England - identical in structure but different in their informational message.

crop circles 1

Those who decode the messages in the fields are divided: some say the pictograms have a religious theme, others that they are messages encoded in computer languages. The former hold that it is a replica of the image on the controversial Shroud of Turin - which many researchers claim is a fake - since overlaying the pictograms in a certain way yields a humanoid face resembling that of the Saviour Jesus Christ.

crop circles 7

My own view is that the claim that the Wickham Green pictograms represent a humanoid portrait is somewhat forced, since the authors of the crop circles have the technology and the ability to produce portraits in a much clearer manner. I recall here the "Face" at the Chilbolton Radio Telescope, Hampshire, of 14 August 2001, and the "Alien head" pictogram at Crabwood, near Winchester, Hampshire, of 15 August 2002.

crop circles 13 - the "Face" at the Chilbolton Radio Telescope, Hampshire, 14 August 2001

crop circles 14 - the "Alien head" at Crabwood, near Winchester, Hampshire, 15 August 2002

Photo gallery:

[images: crop circles 2, 3, 4, 5, 12, 8, 9]


          Crisismappers Pre-Conference Training 2014        
Crisismappers are converging on New York City this week for the 6th Annual International Conference of Crisis Mapping. The term "crisismapping" is fairly loose, as the global community spans maps, data informatics, humanitarian technology and research. We are a collective of people who use maps, data and technology for humanitarian aid and international development. ...
          sos mon ordi        
The Quebec computer specialist Pierre Boucher puts his knowledge to work in a series of radio segments that can be listened to on demand on CKAC; computing enthusiasts will be delighted to hear him answer listeners' questions, and they will find all sorts of tips about the world of computers. As Pierre Boucher himself puts it: there are no problems in computing, only minor inconveniences:
www.sosmonordi.com
The site is divided into 3 sections: e-mail - forums - utilities
          Interns, trainees and IT/ICT design students who need to complete an internship or graduation project        
RG R o g u s Consultores - San José - For STUDENTS about to finish their studies who need to complete their professional internship; tasks are assigned according to the study plan, and after graduation, if they make significant contributions, they may be offered a freelance position on the project they developed. Also ...
          Welcome!        
Welcome to the site dedicated to the National Informatics Olympiad! Here you will find daily-updated information about the olympiad, which will take place in Slobozia, Ialomiţa county, from 10 to 14 April 2014. For suggestions or questions, do not hesitate to comment or to send us an e-mail at admin@gsme.ro
          Seeking a partner with experience in real estate and sales        
A soon-to-open IT and real estate company is looking for a partner who enjoys sales and direct contact with clients, intends to become a permanent partner in the company, and has an entrepreneurial spirit; preferably a law graduate. We already have an office and a business plan, and we have already started working. If you are interested, call us and we will schedule a meeting.
          Law graduate (degree not completed)        
We are looking for a partner to develop IT and real estate business; someone who enjoys sales and dealing with clients, has a vocation for service, and is interested in developing this new company with us and becoming a partner in it. If you are interested, call us and we will be glad to talk.
          A bit of History        
For those wondering, here's the history of JPype :

I always have a lot of projects going on. And in many cases, while I would prefer to use Python to implement them, requirements and/or convenience often steer me toward Java. Let's face it: when it comes to community mindshare, Python is no slouch, but Java is definitely the 500-pound gorilla.

But I really wanted to use Python, so I looked around to see how easy it was to mix the two. Jython (JPython at the time) was not an option because of general slowness and lack of feature support. I failed to successfully build the only Python/Java integration library I could find. So I decided to build my own. That was back in May of 2004.

The initial versions (0.1 to 0.4) were more or less of prototype quality. The C++ code was extensive, with lots of Python extension types and lots of problems making Java classes behave like Python classes. Java-specific code and Python-specific code were hopelessly locked together.

0.5 was a complete rewrite, with an eye towards separating the bridging code. Although the amount of C++ code didn't shrink, this version saw the introduction of real, dynamically created Python classes. No more trying to make extension types behave like regular Python classes. This was almost perfect.

Major limitations include the inability to raise/except with straight Java exception classes (you need to use the PYEXC member instead), and the inability to cleanly shut down and restart a JVM.

JPype got its first real test when Chas Emerick of Snowtide Informatics (www.snowtide.com) contacted me about polishing JPype for use in one of their products. I can honestly say the partnership has greatly benefited JPype, with all the improvements made then folded back into the code.

The release of 0.5 was followed by a lengthy pause in development, lack of time and interest in other issues being the major reasons. Now the time has come to resume work towards that almost mythical 1.0 release. 0.6 will be out sometime in the coming months. The details, however, will have to be the subject of another post ...

Check back for more info later on.
          Calligrams        
What is a calligram?
         A calligram is a text, generally poetic, in which the arrangement of the words, the typography or the calligraphy is used to try to represent the content of the poem.
        Calligrams are poems in which the layout of the verses suggests a graphic shape.
        Calligrams belong to the kind of poetry meant to be looked at: in a calligram, the poem draws an object related to the things it speaks of.
 For example, if the poem is about a butterfly, it is written by giving the text the shape of a butterfly, although sometimes there are simple visual poems written in a shape or drawing that is unrelated to the poem's subject.

Here are some other examples:






HOW TO CREATE A CALLIGRAM
1. To create a calligram, start from an idea: a word, an expression, an object that must first be turned into an image and then into poetry.
Although image-editing and word-processing programs make it possible to produce the most complex graphic shapes, it makes sense to start with a hand-made version of the calligram, and only at a second stage to think about producing it electronically.

2. The starting point will therefore be a drawing on paper representing the original idea. The poem is then written following its outline, or filling in its silhouette, so that the verses do not cross the edges set by the drawing.

3. The final step is to erase the pencil strokes used to fix the outlines of the drawing, leaving visible only the words and verses that make up the calligram.
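For the electronic stage mentioned in step 1, the outline-filling idea can be prototyped in a few lines of code: the words of the poem are poured row by row into a simple shape. A minimal sketch in Python (the function name and the triangle shape are illustrative assumptions, not part of the lesson):

```python
def calligram_triangle(poem: str, rows: int = 5) -> str:
    """Flow a poem's words into a centred triangle: one word on the
    first row, two on the second, and so on, so the text itself
    draws a simple shape."""
    words = poem.split()
    lines, i = [], 0
    for r in range(1, rows + 1):
        chunk = words[i:i + r]
        if not chunk:                  # ran out of words early
            break
        lines.append(" ".join(chunk))
        i += r
    width = max(len(line) for line in lines)
    # Centre every row on the widest one, trimming trailing spaces.
    return "\n".join(line.center(width).rstrip() for line in lines)

print(calligram_triangle(
    "a butterfly rises light among the golden garden flowers", rows=4))
```

Filling an arbitrary hand-drawn outline (step 2) would need a real layout engine, but the same principle applies: the shape constrains where each verse may end.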


You can visit the following link:
calligrams  here

                                                  NOTICE:


DON'T FORGET TO PREPARE FOR THE CALLIGRAM CONTEST AT OUR INSTITUTION, FOYER DE CHARITÉ SANTA ROSA.









           Automatic localization of the optic disc in retinal fundus images using multiple features         
Qureshi, Touseef Ahmad and Amin, Hassan and Mahfooz, Hussain and Qureshi, Rashid Jalal and Al-Diri, Bashir (2013) Automatic localization of the optic disc in retinal fundus images using multiple features. In: Bioinformatics & Bioengineering (BIBE), 2012 IEEE, 11 - 13 November 2012, Larnaca, Cyprus.
           An online one class support vector machine based person-specific fall detection system for monitoring an elderly individual in a room environment         
Yu, Miao and Yu, Yuanzhang and Rhuma, Adel and Naqvi, Syed Mohsen Raza and Wang, Liang and Chambers, Jonathon A. (2013) An online one class support vector machine based person-specific fall detection system for monitoring an elderly individual in a room environment. IEEE Journal of Biomedical and Health Informatics, 17 (6). pp. 1002-1014. ISSN 2168-2194
          Referendum: a guide to the swindles you will be voting on        

As announced in my previous post, here are a few lines of "debunking", with some links attached in case anyone wants to dig deeper ahead of the referendum.

Let's start with a summary guide to the abrogative referendums.
In my opinion it isn't very well done, but it can be useful for getting a vague glimmer of what is actually being voted on, because it is at least fairly impartial.
Now let's look at the individual referendums.

Water and services

Point 4 of art. 23-bis, which a Yes vote repeals, states that "the networks are public". By voting Yes you erase this principle, which therefore cannot be reintroduced.
Partial privatizations will be reintroduced anyway, because they follow from arts. 101-106 of the Treaty of Lisbon, etc. etc.

So by voting Yes, YOU PRIVATIZE.

Moreover, you erase all the safeguards that the current rules put in place.

Some links:

Nuclear power

Bear in mind that you are not actually going to vote "on nuclear power"...
After the ruling of the Court of Cassation, in fact, the original question was completely overturned.
So what are you voting on now? I would like to know too.
Actually, no, I do know!

In practice, you repeal the possibility for the Council of Ministers to adopt the "National Energy Strategy, which identifies the priorities and the measures needed in energy production" through "the diversification of energy sources and of the geographic areas of supply".

So by voting Yes you repeal the possibility of having a national energy policy.

Believing you are voting against nuclear power, then, you vote in favour of the status quo (60+% of electricity produced from gas, 20+% from oil, coal, etc.) and you also vote against renewables.
Thanks go to Eni, Sorgenia (that is, De Benedetti, Repubblica and all those who told you the lies and fairy tales) and friends who, thanks to this rewording of the question, have received a favour ten times bigger than the one in the original question.

Some links:

Legitimate impediment

After the Constitutional Court's ruling of 13 January 2011, the original law was almost completely emptied of its original content.
Compared with the original disgrace, the judges (and no longer the defendants) can now decide whether or not to grant the "legitimate impediment". Maximum time limits are also set for the "endless postponements" (6 months).
Much the same thing that happens when we "ordinary mortals" ask for a hearing to be postponed because we have an important commitment.

Moreover:

The postponement of a hearing for "legitimate impediment" does not affect the running of the statute of limitations for the offence, which remains suspended for the entire duration of the postponement. The limitation period resumes from the day the cause of the suspension ceases (art. 1, paragraph 5).

So by voting Yes you repeal... what?

Nothing, I would say.

You do, however, hand Di Pietro a bit of money in electoral reimbursements, should the referendum reach the quorum. These days that always comes in handy.

Some links:

  • I would say the Wikipedia entry is enough. There you will find the whole legislative history and further links, where you can verify what I have written.
CONCLUSIONS

Why on earth would they tell us all these lies about such important issues in order to push us to vote?
There could be many reasons... One comes to mind, though: given that for every signature collected, if the quorum is reached, the referendum promoters pocket €0.52 (the old 1,000 lire)...

If it is true that for the water question alone they collected ONE MILLION SIGNATURES (something they are, incidentally, very proud of), it means they take home €520,000.
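For what it's worth, the figure checks out; a one-line sanity check in Python (the variable names are mine), keeping the amount in cents to avoid floating-point rounding:

```python
signatures = 1_000_000        # signatures reportedly collected for the water question
cents_per_signature = 52      # EUR 0.52 per signature, kept as integer cents
total_cents = signatures * cents_per_signature
print(f"{total_cents // 100:,} EUR")   # 520,000 EUR
```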

Not bad at all.


          Since I'm an idiot, while I'm at it, I'll keep writing nonsense.        
Since I keep being insulted over free-wheeling interpretations of randomly chosen points in what I write, I would say that in all fairness I am entitled to report the idiocies I received in reply, again about the story of the contaminated Torvis milk.
I thought I had already closed the conversation unambiguously here, but if someone wants to be right, they MUST be right, and that's that.
Let's see what "friend A" writes (naturally "friend B" could do nothing but click "like"). Then I comment (I'm the one writing in black).
I'll answer just one thing, since the shouters swallow nothing but air and crap.
Do you study medicine?
Do you know how that disease works, the one you spewed so freely from your jaws?
Well, I have a girlfriend in her 6th year of medical school, about to graduate... Creutzfeldt-Jakob disease (name copied from Wikipedia... these days you always have to cite your sources) is a bacterial disease transmitted through the prion proteins present in the marrow of the meat of infected animals, and that is why the sale and consumption of the fiorentina steak was banned in Italy for a period.
You can find these things in BERGAMINI - NEUROLOGIA.
It is an extremely rare disease, but the people it strikes have a life expectancy of 12-18 months, and at the time no cure was found. There were deaths - not many, but there were - so much so that the British state set aside 75 million pounds as compensation for the families who lost someone to this disease and for those infected by it (severe mental impairment).
http://archiviostorico.corriere.it/2001/febbraio/15/Londra_milioni_morti_Bse_co_0_0102158772.shtml
If you think these figures are wrong, go to medical school, graduate, and then you can have your say.
I may have cited Wikipedia, but these are things I talk about with my girlfriend, and since we have been together for 3 years, in these 3 years I have had the chance to talk about everything and more.

Point 1: Intellectual dishonesty

Our dear "friend B", in an earlier post, had pulled "out of the magic hat" (though now it has become "the disease [I] spewed so freely from [my] jaws") the "mad cow" issue, with this sentence:
"When the mad cow case broke out, I remember there was a meeting between politicians and complicit journalists who confidently asserted the wholesomeness and harmlessness of the meat, until people started dying everywhere in Europe..."
...to which yours truly added, in reply:
"PS: how many are 'the people who died everywhere in Europe' of 'mad cow'? Where did the psychosis start? Short memory."
I would say that what was written above, invoking the fabled "girlfriend who studies medicine", is completely beside the point (and does not answer my questions).
Zero people have died of mad cow disease. Mad cow (BSE) and vCJD/nvCJD are NOT the same thing (even though there is almost certainly a link between the two).
Writing that "people started dying everywhere in Europe" [of mad cow] is lovely from the standpoint of "counter-information marketing", but it oozes ignorance - the same ignorance as nearly all the journalists who covered the topic "because it was fashionable".
Then, oddly, neither of the two answered my question: "where did the psychosis start?".
From the counter-informers arrayed against the multinationals?
No, gentlemen: it is called economic speculation, and it led to a 10-year embargo on exports of British beef. Without economic interests, you can forget the big papers giving such prominence to a phenomenon that was numerically marginal. Your beloved Corriere, Repubblica and the like included.
Not by chance, the only thing done in Italy after the embargo was to ban the fiorentina and say "eat Italian meat, it's safe". BUT... weren't a little over a hundred cases confirmed here too?
I'll stop here.
No doubt there will already be some point to lift out of context and "adjust" so as to claim I don't understand a damn thing.
Do you want me to also give you the medical opinion on the damage that dioxin or the 3 killers cause to the HUMAN organism, the last link in the food chain?
I'll answer for you: NO! I am thoroughly sick of your attitude.
Live happily, inform, counter-inform, trash people... do whatever you want. I will go on citing and posting the DATA, of whatever kind, whenever it is presented to me.

Point 2: Attitude and analytical reasoning

As I have repeated several times, the data supplied by your friend is rubbish, as is the way he had the nerve to present the information.
If you write that "the milk of Friuli Venezia Giulia is contaminated", you MUST attach the analyses you carried out, or be certain that they were carried out and that they confirm your thesis.
Otherwise, present the news with a different slant.
Which was not done.
The moment the analyses are done and there is a comparison with other milk on the market, I will be the first to work at spreading the news.
I then raised another objection, given to me by a person who lives in the area, who (I'm tired of copy-pasting it, since you don't read it anyway) maintains that the 3,000 hectares mentioned in the article hold cows that produce milk for another group.
Did one of you geniuses write to me: "look, you were given wrong information, because [link, documentation]"?
No.
You kept answering with something else entirely.
Let me explain it with a logical/computational analogy.
Where I come from, if I say A and you disagree, you should answer NOT A, giving your reasons. If you answer with B C D E F, concluding with "you can't read", as you both did, you show that you were unable to sustain an analytical argument and that your arguments are emotional.
A conversation of that kind does not increase the participants' knowledge; it just wears me down - and you succeeded brilliantly at that, so my compliments.
I have no need to trash anyone; it would be cruel. The quality and scope of your reasoning speaks for itself.

What's funny is that you say you live "in the area", but it seems to me (correct me if I'm wrong) that you have never worked inside CAFFARO, which sits right next to the grazing land (1 or 2 km means RIGHT NEXT TO IT from the point of view of the spread of certain particles; the difference is irrelevant) and you don't know what is inside...
I, the one who cites Wikipedia, unlike you spent 3 months in there, and I know how the containers of sulphated paraffins were washed, what water was used for the chlor-alkali process, where the runoff from washing the trucks (washing the tankers) ended up... from all the washing in general...
but really, enough. My cat would listen more than you are willing to listen.

Point 3: Know-it-all-ism

Right: what I wrote above about who owns the pastures simply doesn't deserve attention.
You worked at Torvis, you worked (for a whole 3 months, wow) at Caffaro; evidently you know everything.
Paraphrasing "friend B", the question arises spontaneously: "What's the matter, did things perhaps not work out for you at these two companies? That would explain such entrenchment in your positions!"

But I'll leave that kind of rhetoric to you.
I'm glad you have a cat that listens more than I am willing to listen; I feel sorry for him, though, having to endure these avalanches without head or tail every time he has the ill-fated idea of contradicting you when you are right "because you're right".
Poor creature.

          Nuchange Informatics Hiring For Freshers : Associate Software Engineer @ Bangalore        
Nuchange Informatics [www.nuchange.com] Hiring For Freshers : Associate Software Engineer @ Bangalore Job Description: Associate Software Engineers get challenging and innovative coding and implementation assignments in a fast-paced environment. They work in collaboration with Senior Developers, interacting directly with customers and thus acquiring end-to-end project exposure to quickly graduate to the Software Engineer role. ...
          "Informatica" Off-Campus For Freshers : BE/ BTech/ MTech/ MCA - 2016 & 2017 Pass outs : Associate Software Engineer : Last Date : 30 Jun 2017        
Informatica [www.informatica.com] Informatica Off-Campus Recruitment Drive For Freshers : BE/ BTech/ MTech/ MCA - 2016 & 2017 Pass outs : Associate Software Engineer @ Bangalore Job Description: Job Title: Associate Software Engineer Eligibility: BE/ B.Tech/ ME/ M.Tech (CS/ IT), MCA from the 2016 and 2017 batches with an aggregate of 70% throughout academics. No ...
          Job Vacancy: Backend Developer        
Bachelor's degree in informatics technology, information systems or any related field
Experience in a similar position or an individual project is preferred
Fresh graduates are welcome to apply
Skills: PHP/CI, MySQL, HTML, Javascript, OOP
Willing to work in Yogyakarta

          Job Vacancy: Frontend Developer        
Bachelor's degree in Informatics Technology / Information Systems / any related field. Experience in a similar position or an individual project is preferred. Fresh graduates are welcome to apply. Skills: JavaScript, CSS, AngularJS, PHP, jQuery, HTML, GIT, API. Willing to work ...

           A cross-platform approach to the treatment of amblyopia         
Wei, H. and Zhao, Y. and Dong, F. and Saleh, G. and Ye, X. and Clapworthy, G. (2013) A cross-platform approach to the treatment of amblyopia. In: 13th IEEE International Conference on BioInformatics and BioEngineering, IEEE BIBE 2013, 10 - 13 November 2013, Chania, Greece.
          Photo - PHOTOFORUM: the photography bar where you can stop for a chat (steve_ventu)        
steve_ventu writes in the Photo category: I opened this blog last summer without any great ambitions. I'm an IT professional with a great passion for photography and, looking for a way to make the two coexist, I tried opening this p
           Generic active appearance models revisited         
Tzimiropoulos, Georgios and Alabort-I-Medina, Joan and Zafeiriou, Stefanos and Pantic, Maja (2013) Generic active appearance models revisited. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7726 L (PART 3). pp. 650-663. ISSN 0302-9743
          Job Vacancy: Business Analyst for Product Development        
Bachelor's degree in Information Systems, Informatics Engineering, Computer Science or Engineering, with a minimum of 2 years of experience in product development. Strong passion for education business development and management. Familiar with analytics tools: Google Analytics, Inspectlet and Appsee. Proficient ...

           Front line of OA in humanities and social sciences         
Eve, Martin Paul (2013) Front line of OA in humanities and social sciences. In: The 2nd SPARC Japan Seminar 2013 "Latest Developments in Open Access ― Humanities and Social Sciences ―", 23rd August 2013, National Institute for Informatics, Tokyo, Japan.
           Building recognition using local oriented features         
Li, J. and Allinson, Nigel (2013) Building recognition using local oriented features. IEEE Transactions on Industrial Informatics, 9 (3). pp. 1697-1704. ISSN 1551-3203
           Neural circuit models and neuropathological oscillations         
Coyle, Damien and Sen Bhattacharya, Basabdatta and Zou, X. and Wong-Lin, K. and Abuhassan, K. and Maguire, L. (2013) Neural circuit models and neuropathological oscillations. In: Handbook of bio- and neuro-informatics. Springer Verlag. ISBN 9783642305733
           A psychological perspective on virtual communities supporting terrorist & extremist ideologies as a tool for recruitment         
Bowman-Grieve, Lorraine (2013) A psychological perspective on virtual communities supporting terrorist & extremist ideologies as a tool for recruitment. Security Informatics, 2 (9). ISSN UNSPECIFIED
          Job Vacancy: Software Project Manager        
A minimum of 3 years of project management experience in the IT industry. Bachelor's degree in Information Technology, Computer Science or Computer Engineering. Knowledge of and experience with website & mobile app programming. Must have excellent client management skills in order to ...

          Job Vacancy: System Engineer        
Bachelor's degree in Informatics Engineering, Information Systems, Computer Science or equivalent, with a minimum of 2 years of working experience as a system administrator or in a similar position. Strong background in Linux administration. Experience working with configuration management tools like Chef and Puppet ...

          How is California preparing for the effects of climate change?        
On the next Your Call, we'll rebroadcast a conversation we had about the impacts of climate change on California. How do we need to adapt our natural and built environments and our policies? The California Air Resources Board has adopted the nation's first cap-and-trade regulations. Will these air pollution controls work? What other policies are needed to adapt to climate change effects such as rising sea level? How is climate change affecting your area? It's Your Call with Rose Aguilar, and you.

Guests:
Healy Hamilton, former director of the Center for Applied Biodiversity Informatics at the California Academy of Sciences

Steve Goldbeck, chief deputy director of the SF Bay Conservation and Development Commission

Charlie Knox, public works and community development director for the City of Benicia

Click to Listen: How is California preparing for the effects of climate change?
          Efficient Identification of Protein-Coding Variants in a Model Organism Through Exome Sequencing, New Webinar Hosted by Xtalks        

During a live webinar on September 17, 2014, Mick Watson, Head of Bioinformatics, Edinburgh Genomics, University of Edinburgh, will discuss how he developed probe sets to capture domestic pig (Sus scrofa) exonic sequences based upon the current Ensembl pig gene annotation supplemented with mapped expressed sequence tags (ESTs). The broadcast begins at 12pm EDT.

(PRWeb August 22, 2014)

Read the full story at http://www.prweb.com/releases/2014/08/prweb12113624.htm


          Global Foundation For Human Aging Research Contributes $50,000 To Support The Wellomics™ Bioinformatics Effect Positioning System For Formulating Effective Natural Products        


          Taxes!!        
Despite all the postponements, in the end we had to give in and sacrifice a weekend to filing our AMERICAN taxes!!

How does filing taxes work here?! A mystery...
Several forms and a few instruction booklets arrive at your home (sometime between January and March). Meanwhile, the company you work for floods you with emails (about 2 a week) reminding you that you must file your tax return by April 16! Pay/File... Pay/File... Pay/Pay...

At first you don't pay attention; then, out of the blue, as the days go by, strange rumors start circulating: someone hasn't received form 1042-S, others don't have the 1099-HC from their health insurance... or the Italy-USA treaty article X on tax exemption doesn't apply, or doing the tax forms yourself is suicide, or the software only works on PC and not on Mac and not with Chrome... and the calculation software costs $25 but isn't compatible with the Hospital's form, so the $45 one is the better deal... etc... and suddenly it's PHOBIA!!

One of the most famous do-it-yourself authors of American tax forms (Al Capone)... #he did it wrong!
So, in an attempt to avoid jail and fulfill our duties as working non-American citizens (better known as aliens), today a colleague and I devoted ourselves to filling out our taxes!

Ready... GO!! P.S. Juve-Napoli in the background didn't help...


Here is the process, step by step:

- send an email to the hospital's IT system, which generates a password for the (free) internal software;

- with this password you can register on a site that gives access to the software that calculates how much tax you owe or how much you should get back! All with a very large counter that recalculates the owe/receive balance live every time you advance through the forms: ANXIETY!!

- there are about 30 pages to fill in (20 for federal taxes and 10 for state taxes) asking a lot of things that, to be honest, we didn't even fully understand.... at first we always read the interactive help, but then we realized it was rather misleading (a series of links to other links pointing to laws incomprehensible to us, and even to Google Translate!).

Solution: to move along quickly, we figured the best method was trial and error: we would try writing something and then take turns advancing through the forms. If the value stayed at $0.00, we could proceed!! Italian style!!

After about 2 hours and 20 minutes, this was our result:
- FORM CA6: done (so it seems)!
- FORM 1040NR-EZ: done (so it seems)!
- FORM 8843: done (so it seems)!
- Schedule DI: done (so it seems)!
- Additional Information for Treaty Claim: done (so it seems)!
- Form MA-1NR-PY: done (so it seems)!
- Schedule NRC: done (so it seems)!
- Schedule X&Y: done (so it seems)!
- Form PV: done (so it seems)!


.... and finally the total: nothing to pay and nothing to get back!!! Applause!!




Happy with the result, but a little regretful, because some colleagues say they got a $400 refund... MYSTERY!! Now all that's left to figure out is: who the hell are all these forms supposed to be mailed to?! We still have time...

****** ******** ********* ********* ********* ********* ******** ********* ********* *****

To debunk a few myths about tax filing, here is what I have understood about this whole mega-circus:

- everything is done online, and the software generates forms that must be printed and sent by regular mail;

- the rumor about paid software is true; there are about 2 million of them, and the reason is simple: there is no CAF here (the Italian tax-assistance centers), and an agency charges at least $250-300 to fill out your tax forms... then of course the price goes up (a lot) depending on assets, exemptions, refunds, etc... etc. That's why even software that costs $100 is a great deal!!

- I have deduced that filling out the forms without software is harder than making a $500,000 half-court shot (there are no videos of people self-filling American tax forms on YouTube);

- all the software is paid except the basic one provided by your employer, which is only good if you are broke like us, i.e., if you own no house, have no mortgage, are not paying installments on a TV/car/computer, and have no wife and kids as dependents.

- if you are not broke, or if you are a real American citizen, you can get refunds on everything: rent, medical expenses, school expenses, transportation... etc... so it pays to hire a competent person!!

****** ******** ********* ********* ********* ********* ******** ********* ********* *****

In any case, if we got it wrong, we will probably all end up in the same cell... for the same "stupid" mistake!! Misery loves company...
           Use of bioinformatics and PCR in the search for ABC transporter homology among various bacteria         
Jauhangeer, B.R., Wren, M.W.D., MacDonald, R.A.C., Perry, David and Greenwell, Pamela (2004) Use of bioinformatics and PCR in the search for ABC transporter homology among various bacteria. British Journal of Biomedical Science, 61 (4). pp. 182-185. ISSN 0967-4845
          #SAEM16 panels        
SAEM's Annual Meeting is in New Orleans this year. While a lot has changed since San Diego, I'm fortunate to again be participating in several didactic sessions this week. The program is available online - links to slides are forthcoming. 
  • Tuesday @ 1:45pm or so in Napoleon Ballroom C2 (3rd floor): As part of the Social Media Bootcamp, I'll be talking with Megan Ranney about using Social Media for research - slides

  • Thursday @ 8am in Napoleon Ballroom B2 (3rd floor): DS-22: I'll speak about conducting EM research using social media tools, in a panel with Megan Ranney & Austin Kilaru - slides & references

  • Thursday @ 9am in Napoleon Ballroom B2 (3rd floor): DS-28: Nidhi Garg moderates a panel featuring me, Esther Choo and Megan Ranney on disseminating research through Social Media - slides & references
If you're interested in any of these topics, and at SAEM, you probably also want to attend the Social Media committee meeting Wednesday at 1pm in Evergreen (4th floor). Also on Friday morning Ken Milne talks about knowledge translation through social media, in DS-58 (Grand Ballroom E, 5th floor).  In the same room, right after, Rob Cooney and others talk about social media as an adjunct to resident conference (DS-62).

So, four social media-related didactic sessions, plus a bootcamp. Meanwhile, I can't help but notice the typical informatics panels (some of which I'd participated in, last year) aren't present this year. I don't even see an Informatics Interest Group meeting. Not sure if anything can be read into this shift, but at the very least there's an opportunity to reintroduce an important topic to SAEM, next year. 

          A competitive business with the best IT infrastructure!        
We deliver development and growth objectives for IT infrastructures, designed together with our client companies and based on innovative, customized models. In this way we can steer your business toward the future, proactively facilitating changes in the surrounding landscape. Every objective set ...
          Program update        
It's an honor to be included among the high-quality EM blogs and podcasts in Brent Thoma's article, in this month's Annals (second line in the figure). But it's also a reminder that blogborygmi.com content has become sparse. More of my on-the-spot EM (and informatics) opinions are posted at EPMonthly.com, specifically the Crash Cart series.
          #SAEM15 panels        
I'm very happy to be in San Diego for SAEM's Annual Meeting - and fortunate to be participating in a few didactic sessions this week. Here are links to the program, slides and references.

Tuesday  1-5:30pm - Nautilus 3: Social Media Bootcamp - led by Brett Rosen - slides

Wednesday 1:30pm - DS-18 Point Loma Ballroom A: I'll speak on clinical decision support projects for residents, as part of Jeff Nielson's panel called "Emergency Informatics Research: Interesting, Approachable Projects for the Resident or Career Scientist" along with Jason Shapiro and Adam Landman - slides - references

Wednesday 2:30 pm - DS-19 Point Loma Ballroom A: I'll speak on research opportunities in Informatics Education, as part of Ryan Radecki's panel "From Clicks and Complaints to an Informatics Curriculum" along with Jim McClay - slides

Friday 4pm - DS-95 -Harbor Island Ballroom 1: I'll speak about conducting EM research using social media tools, in a panel with Megan Ranney & Austin Kilaru - slides
          All-around IT support? Sistema 3 is here!        
The holidays are ending for everyone, and it really is time to get back on track at work! But after weeks away from the office, you may well need IT support to get everything running perfectly again. And precisely such support ...
          Order Sets & the Art of Medicine        

When I was part of Jeff Nielson's illustrious Informatics Research panel at SAEM in Dallas this past spring (we were recently invited back for San Diego next year), I spoke on the topic of simple clinical decision support projects, particularly evidence-based order sets. I also talked about incorporating clinical calculators into orders, so trainees could enter discrete patient data into the EHR and see whether the test they were considering was appropriate.

These are feasible research projects that can have measurable impacts in utilization or even care, don't require big budgets, and can be done in a resident-friendly timeframe. 

There was a question from the audience. Someone wanted to know if order sets and clinical calculators were antithetical to the idea of resident education - that organizing tests and meds by complaint, and building calculators into the EHR, made it too easy to be a doctor. Might we consider abandoning order sets and focusing on memorizing doses and appropriate indications for tests? By focusing on these things, were we failing to train doctors in the Art of Medicine? 

I was surprised by the question. Perhaps it's because I'm in a bubble - surrounded by colleagues who know as much as (or more than) I do about patient safety, bedside teaching, EHR usability, and evidence-based guidelines for care.

I don't remember exactly how I responded. I said something about how order sets and clinical calculators are here to stay, unquestionably reduce errors, improve efficiency and encourage appropriate resource utilization (when implemented well), and that the only remaining challenge is making them as current and easy to use as possible.

That was a start, but I should have also pointed the audience member to the Checklist Manifesto, which covers the evidence, obstacles and psychology behind getting doctors to put their egos aside, be humble, and make sure everything worth doing gets done. After all, there was probably a time when pilots complained about losing the artistry of flying, but the public cared about their planes not crashing. Similarly, in an era where we are trying to get 100% compliance on core measures, when we're asked to do more, and see more, with less time and less support, it's imperative we make the EHR work for us as best it can.

The Art of Medicine may have once involved regaling patients and staff with feats of memory; now it seems more appropriately about forming a fast rapport with patients, and explaining Bayesian algorithms for risk stratification. Let computers do what they're good at - lists and calculators - and let doctors have meaningful conversations with patients. This seems like the new state of the Art. 

          #SAEM14 panel discussions on social media scholarship & clinical decision support        
I was very pleased this year to participate on two panels at SAEM in Dallas. 

On Thursday, I joined Michelle Lin and (remotely) Rob Cooney for the panel led by Jason Nomura, called "From Twitter to Tenure - Use of Social Media to Advance Your Academic Career" (search for DS067 in the program).

Jason has posted our session on his blog and on YouTube.

On Saturday I joined Adam Landman and Jason Shapiro in a didactic session led by Jeff Nielson, called "Emergency Informatics Research: Interesting, Approachable Projects for the Resident or Career Scientist" (search for DS095 in the program). I ended up citing a lot of enlightening papers on clinical decision support; these references are now available, and I may post a link to the talk or presentation as well.
          The most beautiful games for little girls        
I wouldn't let my little girl use the laptop at all at 6 years old, but at the last parents' meeting the teacher explained to us that this is the new generation: it is completely normal for them to be on the internet, and above all to be better at it than we are.
Encouraged by the teacher's words, and by the fact that they have computer classes with a specialist teacher from the preparatory grade onwards, I started letting my little girl spend up to an hour on the laptop playing children's games. The coolest and most attractive site for her is this one: http://www.jocbarbie.ro . To my surprise she learned many useful things, and from this experience I learned not to limit my child, but to let her open up to what is new.


          Privacy Lectures @ University of Iowa        
I'm giving two lectures at the University of Iowa on the 21st and 22nd of April as part of the Iowa Informatics Showcase Symposium.

The two lectures are:


The Iowa Informatics Showcase Symposium will focus on new directions in informatics research and involve talks from external and internal scholars. It will also include an informatics fair with a poster session, and booths highlighting research centers, core facilities, centers and institutes. Saturday Workshops will be conducted as part of the symposium with topics including software basics, GIS, mapping and visualization, statistical packages, and others.
          Why you won't find your true self under the shell you've hidden in        

Every time the world around you puts pressure on you, in the form of expectations, demands and "what you should do", you start shrinking little by little, to the point where you no longer have any idea who you really are. Those people have no qualms about trampling on your soul, because they received the same treatment themselves. I still remember how, in 10th grade, I was yanked out of the computer-science class and moved to chemistry, because ... Read the full article...



RE: StarOffice 6 User Guide in Spanish

Reply to "StarOffice 6 User Guide in Spanish"

Hi, I am a computer science teacher. If you could send me that StarOffice guide, it would be very useful for my purposes.

Thanks.

Posted on January 27, 2009 by Gustavo Antonio Pagliari

          Response to To Increase Trust, Change the Social Design Behind Aggregated Biodiversity Data        

Nico Franz and Beckett W. Sterner recently published a preprint entitled "To Increase Trust, Change the Social Design Behind Aggregated Biodiversity Data" on bioRxiv (http://dx.doi.org/10.1101/157214). Below is the abstract:

Growing concerns about the quality of aggregated biodiversity data are lowering trust in large-scale data networks. Aggregators frequently respond to quality concerns by recommending that biologists work with original data providers to correct errors "at the source". We show that this strategy falls systematically short of a full diagnosis of the underlying causes of distrust. In particular, trust in an aggregator is not just a feature of the data signal quality provided by the aggregator, but also a consequence of the social design of the aggregation process and the resulting power balance between data contributors and aggregators. The latter have created an accountability gap by downplaying the authorship and significance of the taxonomic hierarchies – frequently called "backbones" – they generate, and which are in effect novel classification theories that operate at the core of the data-structuring process. The Darwin Core standard for sharing occurrence records plays an underappreciated role in maintaining the accountability gap, because this standard lacks the syntactic structure needed to preserve the taxonomic coherence of data packages submitted for aggregation, leading to inferences that no individual source would support. Since high-quality data packages can mirror competing and conflicting classifications, i.e., unsettled systematic research, this plurality must be accommodated in the design of biodiversity data integration. Looking forward, a key directive is to develop new technical pathways and social incentives for experts to contribute directly to the validation of taxonomically coherent data packages as part of a greater, trustworthy aggregation process.

Below I respond to some specific points that annoyed me about this article, at the end I try and sketch out a more constructive response. Let me stress that although I am the current Chair of the GBIF Science Committee, the views expressed here are entirely my own.

Trust and social relations

Trust is a complex and context-sensitive concept...First, trust is a dependence relation between a person or organization and another person or organization. The first agent depends on the second one to do something important for it. An individual molecular phylogeneticist, for example, may rely on GenBank (Clark et al. 2016) to maintain an up-to-date collection of DNA sequences, because developing such a resource on her own would be cost prohibitive and redundant. Second, a relation of dependence is elevated to being one of trust when the first agent cannot control or validate the second agent's actions. This might be because the first agent lacks the knowledge or skills to perform the relevant task, or because it would be too costly to check.

Trust is indeed complex. I found this part of the article fascinating, but incomplete. The social network GBIF operates in is much larger than simply taxonomic experts and GBIF: there are relationships with data providers, other initiatives, a broad user community, government agencies that approve its continued funding, and so on. Some of the decisions GBIF makes need to be seen in this broader context.

For example, the article challenges GBIF for responding to errors in the data by saying that these should be "corrected at source". This is a political statement, given that data providers are anxious not to cede complete control of their data to aggregators. Hence the model that GBIF users see errors, those errors get passed back to the source (the mechanisms for this are mostly non-existent), the source fixes them, then the aggregator re-harvests. This model makes assumptions about whether sources are either willing or able to fix these errors that I think are not really true. But the point is that this is less about not taking responsibility than about avoiding treading on toes by taking too much responsibility. Personally I think GBIF should take responsibility for fixing a lot of these errors, because it is GBIF whose reputation suffers (as demonstrated by Franz and Sterner's article).

Scalability

A third step is to refrain from defending backbones as the only pragmatic option for aggregators (Franz 2016). The default argument points to the vast scale of global aggregation while suggesting that only backbones can operate at that scale now. The argument appears valid on the surface, i.e., the scale is immense and resources are limited. Yet using scale as an obstacle is only effective if experts were immediately (and unreasonably) demanding a fully functional, all data-encompassing alternative. If on the other hand experts are looking for token actions towards changing the social model, then an aggregator's pursuit of smaller-scale solutions is more important than succeeding with the 'moonshot'.

Scalability is everything. GBIF is heading towards a billion occurrence records and several million taxa (particularly as more and more taxa from DNA barcoding are added). I'm not saying that tractability trounces trust, but it is a major consideration. Anybody advocating a change has got to think about how these changes will work at scale.

I'm conscious that this argument could easily be used to swat away any suggestion ("nice idea, but won't scale") and hence be a reason to avoid change. I myself often wish GBIF would do things differently, and run into this problem. One way around it is to make use of the fact that GBIF has some really good APIs, so if you want GBIF to do something different you can build a proof of concept to show what could be done. If that is sufficiently compelling, then the case for trying to scale it up is going to be much easier to make.
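As a concrete illustration of that proof-of-concept route, here is a minimal sketch in Python against GBIF's public name-matching endpoint (`https://api.gbif.org/v1/species/match`, a real documented API); the canned example record and the parsing helper below are my own illustrative assumptions about the typical response shape, not a definitive implementation:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# GBIF's public name-matching endpoint.
GBIF_MATCH = "https://api.gbif.org/v1/species/match"

def match_name(name):
    """Look up a scientific name against the GBIF backbone (live network call)."""
    with urlopen(GBIF_MATCH + "?" + urlencode({"name": name})) as response:
        return json.load(response)

def backbone_lineage(match):
    """Pull the backbone classification (kingdom..genus) out of a match response."""
    ranks = ["kingdom", "phylum", "class", "order", "family", "genus"]
    return [(rank, match[rank]) for rank in ranks if rank in match]

# A canned response of the typical shape, so the sketch runs without a network
# call; the field names mirror what the live endpoint returns.
sample = {
    "scientificName": "Puma concolor (Linnaeus, 1771)",
    "kingdom": "Animalia", "phylum": "Chordata", "class": "Mammalia",
    "order": "Carnivora", "family": "Felidae", "genus": "Puma",
}
print(backbone_lineage(sample))
```

A toy like this is the seed of the kind of compelling demo described above: point it at a different backend, or post-process the backbone differently, and you have something to show GBIF rather than just tell them.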

Multiple classifications

As a social model, the notion of backbones (Bisby 2000) was misguided from the beginning. They disenfranchise systematists who are by necessity consensus-breakers, and distort the coherence of biodiversity data packages that reflect regionally endorsed taxonomic views. Henceforth, backbone-based designs should be regarded as an impediment to trustworthy aggregation, to be replaced as quickly and comprehensively as possible. We realize that just saying this will not make backbones disappear. However, accepting this conclusion counts as a step towards regaining accountability.

This strikes me as hyperbole. "They disenfranchise systematists who are by necessity consensus-breakers". Really? Having backbones in no way prevents people doing systematic research, challenging existing classifications, or developing new ones (which, if they are any good, will become the new consensus).

We suggest that aggregators must either author these classification theories in the same ways that experts author systematic monographs, or stop generating and imposing them onto incoming data sources. The former strategy is likely more viable in the short term, but the latter is the best long-term model for accrediting individual expert contributions. Instead of creating hierarchies they would rather not 'own' anyway, aggregators would merely provide services and incentives for ingesting, citing, and aligning expert-sourced taxonomies (Franz et al. 2016a).

Backbones are authored in the sense that they are the product of people and code. GBIF's is pretty transparent (code and some data on GitHub, complete with a list of problems). Playing Devil's advocate, maybe the problem here is the notion of authorship. If you read a paper with hundreds of authors, why does that give you any greater sense of accountability? Is each author going to accept responsibility for (or be able to talk cogently about) every aspect of that paper? If aggregators such as GBIF and GenBank didn't provide a single, simple way to browse the data taxonomically, I'd expect it would be the first thing users would complain about. There are multiple communities GBIF must support, including users who care not at all about the details of classification and phylogeny.

Having said that, obviously these backbone classifications are often problematic and typically lag behind current phylogenetic research. And I accept that they can impose a certain view on how you can query data. GenBank for a long time did not recognise the Ecdysozoa (nematodes plus arthropods) despite the evidence for that group being almost entirely molecular. Some of my research has been inspired by the problem of customising a backbone classification to better reflect more modern views (doi:10.1186/1471-2105-6-208).

If handling multiple classifications is an obstacle to people using or contributing data to GBIF, then that is clearly something that deserves attention. I'm a little sceptical, in that I think this is similar to the issue of being able to look at multiple versions of a document or GenBank sequence. Everyone says it's important to have; I suspect very few people ever use that functionality. But a way forward might be to construct a meaningful example (in other words a live demo, not a diagram with a few plant varieties).
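For what it's worth, here is one way such an example might start: a toy sketch (entirely hypothetical data, my own illustration) in which each source keeps its own parent-of opinions, so the same name can be traversed under competing classifications rather than under one imposed backbone:

```python
# Each source contributes its own parent-of opinions; no single backbone is
# imposed. Hypothetical data echoing the Ecdysozoa example above: the two
# sources disagree on where Nematoda and Arthropoda sit.
classifications = {
    "source_A": {  # a classification that does not recognise Ecdysozoa
        "Nematoda": "Animalia",
        "Arthropoda": "Animalia",
    },
    "source_B": {  # a molecular classification that does
        "Nematoda": "Ecdysozoa",
        "Arthropoda": "Ecdysozoa",
        "Ecdysozoa": "Animalia",
    },
}

def lineage(taxon, source):
    """Walk parent links upward within one source's classification."""
    tree = classifications[source]
    chain = [taxon]
    while chain[-1] in tree:
        chain.append(tree[chain[-1]])
    return chain

print(lineage("Nematoda", "source_A"))  # ['Nematoda', 'Animalia']
print(lineage("Nematoda", "source_B"))  # ['Nematoda', 'Ecdysozoa', 'Animalia']
```

The design point is that queries name their source explicitly, so conflicting expert views coexist instead of being flattened into one hierarchy; a real demo would of course need to handle synonyms, ranks and scale.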

Ways forward

We view this diagnosis as a call to action for both the systematics and the aggregator communities to reengage with each other. For instance, the leadership constellation and informatics research agenda of entities such as GBIF or Biodiversity Information Standards (TDWG 2017) should strongly coincide with the mission to promote early-stage systematist careers. That this is not the case now is unfortunate for aggregators, who are thereby losing credibility. It is also a failure of the systematics community to advocate effectively for its role in the biodiversity informatics domain. Shifting the power balance back to experts is therefore a shared interest.

Having vented, let me step back a little and try and extract what I think the key issue is here. Issues such as error correction, backbones, multiple classifications are important, but I guess the real issue here is the relationship between experts such as taxonomists and systematists, and large-scale aggregators (note that GBIF serves a community that is bigger than just these researchers). Franz and Sterner write:

...aggregators also systematically compromise established conventions of sharing and recognizing taxonomic work. Taxonomic experts play a critical role in licensing the formation of high-quality biodiversity data packages. Systems of accountability that undermine or downplay this role are bound to lower both expert participation and trust in the aggregation process.

I think this is perhaps the key point. Currently aggregation tends to aggregate data and not provenance. Pretty much every taxonomic name has at one point or other been published by somebody. For various reasons (including the crappy way most nomenclature databases cite the scientific literature) by the time these names are assembled into a classification by GBIF the names have virtually no connection to the primary literature, which also means that who contributed the research that led to that name being minted (and the research itself) is lost. Arguably GBIF is missing an opportunity to make taxonomic and phylogenetic research more visible and discoverable (I'd argue this is a better approach than Quixotic efforts to get all biologists to always cite the primary taxonomic literature).

Franz and Sterner's article is a well-argued and sophisticated assessment of a relationship that isn't working the way it could. But to talk in terms of "power balance" strikes me as miscasting the debate. Would it not be better to try and think about aligning goals (assuming that is possible)? What do experts want to achieve? What do they need to achieve those goals? Is it things such as access to specimens, data, literature, sequences? Visibility for their research? Demonstrable impact? Credit? What are the impediments? What, if anything, can GBIF and other aggregators do to help? In what way can facilitating the work of experts help GBIF?

In my own "early-stage systematist career" I had a conversation with Mark Hafner about the Louisiana State University Museum providing tissue samples for molecular sequencing, essentially a "project in a box". Although Mark was complaining about the lack of credit for this (a familiar theme) the thing which struck me was how wonderful it would be to have such a service - here's everything you need to do your work, go do some science. What if GBIF could do the same? Are you interested in this taxonomic group, well here's the complete sum of what we know so far. Specimens, literature, DNA sequences, taxonomic names, the works. Wouldn't that be useful?

Franz and Sterner call for "both the systematics and the aggregator communities to reengage with each other". I would echo this. I think that the sometimes dysfunctional relationship between experts and aggregators is partly due to the failure to build a community of researchers around GBIF and its activities. The focus of GBIF's relationship with the scientific community has been to have a committee of advisers, which is a rather traditional and limited approach ("you're a scientist, tell us what scientists want"). It might be better served if it provided a forum for researchers to interact with GBIF, data providers, and each other.

I started this blog (iPhylo) years ago to vent my frustrations about TreeBASE. At the time I was fond of a quote from a philosopher of science that I was reading, to the effect that we only criticise those things that we care about. I take Franz and Sterner's article to indicate that they care about GBIF quite a bit ;). I'm looking forward to more critical discussion about how we can reconcile the needs of experts and aggregators as we seek to make global biodiversity data both open and useful.


          This is what phylodiversity looks like        

Following on from earlier posts exploring how to map DNA barcodes and putting barcodes into GBIF it's time to think about taking advantage of what makes barcodes different from typical occurrence data. At present GBIF displays data as dots on a map (as do I in http://iphylo.org/~rpage/bold-map/). But barcodes come with a lot more information than that. I'm interested in exploring how we might measure and visualise biodiversity using just sequences.

Based on a talk by Zachary Tong (Going Organic - Genomic sequencing in Elasticsearch) I've started to play with n-gram searches on DNA barcodes using Elasticsearch, an open source search engine. The idea is that we break the DNA sequence into every possible "word" of length n (also called a k-mer or k-tuple, where k = n).

For example, for n = 5, the sequence GTATCGGTAACGAACTT would look like this:


GTATCGGTAACGAACTT

GTATC
TATCG
ATCGG
TCGGT
CGGTA
GGTAA
GTAAC
TAACG
AACGA
ACGAA
CGAAC
GAACT
AACTT

The sequence GTATCGGTAACGAACTT comes from Hajibabaei and Singer (2009) who discussed "Googling" DNA sequences using search engines (see also Kuksa and Pavlovic, 2009). If we index sequences in this way then we can do BLAST-like searches very quickly using Elasticsearch. This means it's feasible to take a DNA barcode and ask "what sequences look like this?" and return an answer quickly enough for a user not to get bored waiting.
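A toy version of this n-gram indexing and search can be sketched in Python (this is only a stand-in for what Elasticsearch's n-gram tokeniser does at scale; all function names here are my own):

```python
from collections import defaultdict

def kmers(seq, k=5):
    """Break a sequence into every overlapping word of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_index(sequences, k=5):
    """Inverted index: k-mer -> set of sequence ids containing it."""
    index = defaultdict(set)
    for name, seq in sequences.items():
        for word in set(kmers(seq, k)):
            index[word].add(name)
    return index

def search(index, query, k=5):
    """Rank indexed sequences by how many k-mers they share with the query."""
    hits = defaultdict(int)
    for word in kmers(query, k):
        for name in index.get(word, ()):
            hits[name] += 1
    return sorted(hits, key=hits.get, reverse=True)

index = build_index({"seq1": "GTATCGGTAACGAACTT", "seq2": "CCCCCCCCCC"})
print(search(index, "GGTAACGA"))  # only seq1 shares any 5-mers with the query
```

The point of the sketch is simply that chunking into k-mers turns a BLAST-like similarity search into cheap index lookups and hit counting.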

Another nice feature of Elasticsearch is that it supports geospatial queries, so we can ask for, say, all the barcodes in a particular region. Having got such a list, what we really want is not a list of sequences but a phylogenetic tree. Traditionally this can be a time-consuming operation: we have to take the sequences, align them, then input that alignment into a tree building algorithm. Or do we?

There's growing interest in "alignment-free" phylogenetics, a phrase I'd heard but not really followed up. Yang and Zhang (2008) described an approach where every sequence is encoded as a vector of all possible k-tuples. For DNA sequences with k = 5 there are 4^5 = 1024 possible combinations of the bases A, C, G, and T, so a sequence is represented as a vector with 1024 elements, each one the frequency of the corresponding 5-tuple. The "distance" between two sequences is the mathematical distance between these vectors for the two sequences. Hence we no longer need to align the sequences being compared, we simply chunk them into all "words" of 5 bases in length, and compare the frequencies of the 1024 different possible "words".
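The encoding is straightforward to sketch in Python (a minimal version using a plain Euclidean distance between frequency vectors; the function names are mine, not Yang and Zhang's):

```python
from itertools import product
from math import sqrt

# all 4^5 = 1024 possible 5-tuples of the bases A, C, G, T
WORDS = ["".join(p) for p in product("ACGT", repeat=5)]

def profile(seq, k=5):
    """Frequency vector of every possible k-tuple in the sequence."""
    counts = dict.fromkeys(WORDS, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in WORDS]

def distance(seq_a, seq_b):
    """Distance between two k-tuple frequency vectors; no alignment needed."""
    return sqrt(sum((a - b) ** 2
                    for a, b in zip(profile(seq_a), profile(seq_b))))

print(distance("GTATCGGTAACGAACTT", "GTATCGGTAACGAACTT"))  # 0.0
```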

In their study Yang and Zhang (2008) found that:

We compared tuples of different sizes and found that tuple size 5 combines both performance speed and accuracy; tuples of shorter lengths contain less information and include more randomness; tuples of longer lengths contain more information and less randomness, but the vector size expands exponentially and gets too large and computationally inefficient.

So we can use the same word size for both Elasticsearch indexing and for computing the distance matrix. We still need to create a tree, for which we could use something quick like neighbour-joining (NJ). This method is sufficiently quick to be available in Javascript and hence can be computed by a web browser (e.g., biosustain/neighbor-joining).
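For completeness, here is a bare-bones neighbour-joining sketch in Python (the demo itself uses the JavaScript library mentioned above). It returns the topology only; branch lengths, which a real phylodiversity calculation needs, are omitted for brevity:

```python
def neighbour_joining(names, D):
    """Minimal neighbour-joining on a symmetric distance matrix (list of lists).
    Returns the tree topology as nested tuples; branch lengths omitted."""
    nodes = list(names)
    D = [row[:] for row in D]
    while len(nodes) > 2:
        n = len(nodes)
        r = [sum(row) for row in D]
        best = None
        for i in range(n):
            for j in range(i + 1, n):
                q = (n - 2) * D[i][j] - r[i] - r[j]  # NJ criterion
                if best is None or q < best:
                    best, bi, bj = q, i, j
        joined = (nodes[bi], nodes[bj])
        # distances from the new internal node to every remaining node
        d_new = [(D[bi][k] + D[bj][k] - D[bi][bj]) / 2
                 for k in range(n) if k not in (bi, bj)]
        nodes = [x for k, x in enumerate(nodes) if k not in (bi, bj)] + [joined]
        D = [[D[a][b] for b in range(n) if b not in (bi, bj)]
             for a in range(n) if a not in (bi, bj)]
        for row, d in zip(D, d_new):
            row.append(d)
        D.append(d_new + [0.0])
    return (nodes[0], nodes[1])

tree = neighbour_joining(
    ["A", "B", "C", "D"],
    [[0, 2, 4, 4],
     [2, 0, 4, 4],
     [4, 4, 0, 2],
     [4, 4, 2, 0]])
print(tree)  # (('A', 'B'), ('C', 'D'))
```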

Putting this all together, I've built a rough-and-ready demo that takes some DNA barcodes and puts them on a map. You can then draw a box on the map, and the demo retrieves the DNA barcodes in that area, computes a distance matrix using 5-tuples, then builds a NJ tree, all on the fly in your web browser.

Phylodiversity on the fly from Roderic Page on Vimeo.

This is all very crude, and I need to explore scalability (at the moment I limit the results to the first 200 DNA sequences found), but it's encouraging. I like the idea that, in principle, we could go to any part of the globe, ask "what's there?" and get back a phylogenetic tree for the DNA barcodes in that area.

This also means that we could start exploring phylogenetic diversity using DNA barcodes, as Faith & Baker (2006) wanted a decade ago:

...PD has been advocated as a way to make the best-possible use of the wealth of new data expected from large-scale DNA “barcoding” programs. This prospect raises interesting bio-informatics issues (discussed below), including how to link multiple sources of evidence for phylogenetic inference, and how to create a web-based linking of PD assessments to the barcode–of-life database (BoLD).

The phylogenetic diversity of an area is essentially the length of the tree of DNA barcodes, so if we build a tree we have a measure of diversity. Note that this contrasts with other approaches, such as Miraldo et al.'s "An Anthropocene map of genetic diversity" which measured genetic diversity within species but not between (!).

Practical issues

There are a bunch of practical issues to work through, such as how scalable it is to compute phylogenies using Javascript on the fly. For example, could we do something like generate a one degree by one degree grid of the Earth, take all the barcodes in each cell and compute a phylogeny for each cell? Could we do this in CouchDB? What about sampling, should we be taking a finite, random sample of sequences so that we try and avoid sampling bias?

There are also data management issues. I'm exploring downloading DNA barcodes, creating a Darwin Core Archive file using the Global Genome Biodiversity Network (GGBN) data standard, then converting the Darwin Core Archive into JSON and sending that to Elasticsearch. The reason for the intermediate step of creating the archive is so that we can edit the data, add missing geospatial information, etc. I envisage having a set of archives, hosted say on GitHub. These archives could also be directly imported into GBIF, ready for the time that GBIF can handle genomic data.

References

  • Faith, D. P., & Baker, A. M. (2006). Phylogenetic diversity (PD) and biodiversity conservation: some bioinformatics challenges. Evol Bioinform Online, 2, 121–128. PMC2674678
  • Hajibabaei, M., & Singer, G. A. (2009). Googling DNA sequences on the World Wide Web. BMC Bioinformatics. Springer Nature. https://doi.org/10.1186/1471-2105-10-s14-s4
  • Kuksa, P., & Pavlovic, V. (2009). Efficient alignment-free DNA barcode analytics. BMC Bioinformatics. Springer Nature. https://doi.org/10.1186/1471-2105-10-s14-s9
  • Miraldo, A., Li, S., Borregaard, M. K., Florez-Rodriguez, A., Gopalakrishnan, S., Rizvanovic, M., … Nogues-Bravo, D. (2016, September 29). An Anthropocene map of genetic diversity. Science. American Association for the Advancement of Science (AAAS). https://doi.org/10.1126/science.aaf4381
  • Yang, K., & Zhang, L. (2008, January 10). Performance comparison between k-tuple distance and four model-based distances in phylogenetic tree reconstruction. Nucleic Acids Research. Oxford University Press (OUP). https://doi.org/10.1093/nar/gkn075

          Guest post: It's 2016 and your data aren't UTF-8 encoded?        

The following is a guest post by Bob Mesibov.

According to w3techs, seven out of every eight websites in the Alexa top 10 million are UTF-8 encoded. This is good news for us screenscrapers, because it means that when we scrape data into a UTF-8 encoded document, the chances are good that all the characters will be correctly encoded and displayed.

It's not quite good news for two reasons.

In the first place, one out of eight websites is encoded with some feeble default like ISO-8859-1, which supports even fewer characters than the closely related windows-1252. Those sites will lose some widely-used punctuation when read as UTF-8, unless the webpage has been carefully composed with the HTML equivalents of those characters. You're usually safe (but see below) with big online sources like Atlas of Living Australia (ALA), APNI, CoL, EoL, GBIF, IPNI, IRMNG, NCBI Taxonomy, The Plant List and WoRMS, because these declare a UTF-8 charset in a meta tag in webpage heads. (IPNI's home page is actually in ISO-8859-1, but its search results are served as UTF-8 encoded XML.)

But a second problem is that just because a webpage declares itself to be UTF-8, that doesn't mean every character on the page sings from the Unicode songbook. Very odd characters may have been pulled from a database and written onto the page as-is. In ALA I recently found an ancient rune — the High Octet Preset control character (HOP, hex 81):

http://biocache.ala.org.au/occurrences/6191ca90-873b-44f8-848d-befc29ad7513
http://biocache.ala.org.au/occurrences/5077df1f-b70a-465b-b22b-c8587a9fb626

HOP replaces ü on these pages and is invisible in your browser, but a screenscrape will capture the HOP and put SchHOPrhoff in your UTF-8 document.
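The HOP byte is easy to examine in Python. Note that hex 81 happens to be ü in the old DOS code page 437, which is one plausible (assumed, on my part) origin of the mangled data:

```python
raw = b"Sch\x81rhoff"  # scraped bytes with HOP (hex 81) where u-umlaut should be

# 0x81 is not valid as a lead byte in UTF-8, and windows-1252 leaves it undefined
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("invalid UTF-8 at byte", err.start)  # invalid UTF-8 at byte 3

# latin-1 decodes it silently to the invisible HOP control character
print(repr(raw.decode("latin-1")))  # 'Sch\x81rhoff'

# in DOS code page 437, 0x81 is the single-byte encoding of 'ü'
print(raw.decode("cp437"))  # Schürhoff
```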

Another example of ALA's fidelity to its sources is its coding of the degree symbol, which is a single-byte character (hex b0) in windows-1252, e.g. in Excel spreadsheets, but a two-byte character (hex c2 b0) in Unicode. In this record, for example:

http://biocache.ala.org.au/occurrences/5e3a2e05-1e80-4e1c-9394-ed6b37441b20

the lat/lon was supplied (says ALA) as 37Â°56'9.10"S 145Â° 0'43.74"E. Or was it? The lat/lon could have started out as 37°56'9.10"S 145°0'43.74"E in UTF-8. Somewhere along the line the lat/lon was converted to windows-1252 and the Â° characters were generated, resulting in geospatial gibberish.
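This double encoding is easy to reproduce in Python:

```python
coord = "37°56'9.10\"S"

# UTF-8 encodes the degree sign as the two bytes c2 b0
assert "°".encode("utf-8") == b"\xc2\xb0"

# reading those same bytes as windows-1252 turns each byte into its own
# character, so every degree sign sprouts a spurious 'Â'
mangled = coord.encode("utf-8").decode("windows-1252")
print(mangled)  # 37Â°56'9.10"S
```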

When a program fails to understand a character's encoding, it usually replaces the mystery character with a ?. A question mark is a perfectly valid character in commonly used encodings, which means the interpretation failure gets propagated through all future re-uses of the text, both on the Web and in data dumps. For example,

http://biocache.ala.org.au/occurrences/dfbbc42d-a422-47a2-9c1d-3d8e137687e4

gives N?crophores for Nécrophores. The history of that particular character failure has been lost downstream, as is the case for myriads of other question marks in online biodiversity data.
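The mechanism is simple to reproduce: encode to a charset that lacks the character using a lossy error handler, and a perfectly legal question mark is written out in place of the original:

```python
name = "Nécrophores"

# 'é' does not exist in ASCII; errors="replace" substitutes a literal '?'
dumped = name.encode("ascii", errors="replace")
print(dumped.decode("ascii"))  # N?crophores

# the result is valid in every common encoding, so nothing downstream
# can tell that a character was ever lost
```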

In my experience, the situation is much worse in data dumps from online sources. It's a challenge to find a dump without question marks acting as replacement characters. Many of these question marks appear in author and place names. Taxonomists with eastern European names seem to fare particularly badly, sometimes with more than one character variant appearing in the same record, as in the Australian Faunal Directory (AFD) offering of Wêgrzynowicz, W?grzynowicz and Węgrzynowicz for the same coleopterist. Question marks also frequently replace punctuation, such as n-dashes, smart quotes and apostrophes (e.g. O?Brien (CoL) and L?Échange and d?Urville (AFD)).

Character encoding issues create major headaches for data users. It would be a great service to biodiversity informatics if data managers compiled their data in UTF-8 encoding or took the time to convert to UTF-8 and fix any resulting errors before publishing to the Web or uploading to aggregators.
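Checking a dump before publishing is straightforward; here is a minimal sketch (the function name is my own invention):

```python
def utf8_problem_offsets(data):
    """Return the byte offsets in `data` that are not valid UTF-8."""
    offsets, pos = [], 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break
        except UnicodeDecodeError as err:
            offsets.append(pos + err.start)
            pos += err.start + 1
    return offsets

# a HOP byte and a windows-1252 'é' lurking in otherwise clean text
print(utf8_problem_offsets(b"Sch\x81rhoff, N\xe9crophores"))  # [3, 12]
```

Run against a file opened in binary mode, this pinpoints exactly which bytes need fixing before the data go to the Web or an aggregator.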

This may be a big ask, given that at least one data manager I've talked to had no idea how characters were encoded in the institution's database. But as ALA's Miles Nicholls wrote back in 2009, "Note that data should always be shared using UTF-8 as the character encoding". Biodiversity informatics is a global discipline and UTF-8 is the global standard for encoding.

Readers needing some background on character encoding will find this and especially this helpful, and a very useful tool to check for encoding problems in small blocks of text is here.


          GBIF 2016 Ebbe Nielsen Challenge entries        

The GBIF 2016 Ebbe Nielsen Challenge has received 15 submissions. You can view them here. Unlike last year, where the topic was completely open, for the second challenge we've narrowed the focus to "Analysing and addressing gaps and biases in primary biodiversity data". As with last year, judging is limited to the jury (of which I'm a member); however, anyone interested in biodiversity informatics can browse the submissions. Although you can't leave comments directly on the submissions within the GBIF Challenge pages, each submission also appears on the portfolio page of the person/organisation that created the entry, so you can leave comments there (follow the link at the bottom of the page for each submission to see it on the portfolio page).


          Guest post: Absorbing task or deranged quest: an attempt to track all genus names ever published        

This guest post by Tony Rees describes his quest to track all genus names ever published (plus a subset of the species…).

A “holy grail” for biodiversity informatics is a suitably quality controlled, human- and machine-queryable list of all the world’s species, preferably arranged in a suitable taxonomic hierarchy such as kingdom-phylum-class-order-family-genus or other. To make it truly comprehensive we need fossils as well as extant taxa (dinosaurs as well as dinoflagellates) and to cover all groups from viruses to vertebrates (possibly prions as well, which are, well, virus-like). Linnaeus had some pretty good attempts in his day, and in the internet age the challenge has been taken up by a succession of projects such as the “NODC Taxonomic Code” (a precursor to ITIS, the Integrated Taxonomic Information System - currently 722,000 scientific names), the Species 2000 consortium, and the combined ITIS+SP2000 product “Catalogue of Life”, now in its 16th annual edition, with current holdings of 1,635,250 living and 5,719 extinct valid (“accepted”) species, plus an additional 1,460,644 synonyms (information from http://www.catalogueoflife.org/annual-checklist/2016/info/ac). This looks pretty good until one realises that as well as the estimated “target” of 1.9 million valid extant species there are probably a further 200,000-300,000 described fossils, all with maybe as many synonyms again, making a grand total of at least 5 million published species names to acquire into a central “quality assured” system, a task which will take some time yet.

Ten years ago, in 2006, the author participated in a regular meeting of the steering committee for OBIS, the Ocean Biogeographic Information System which, like GBIF, aggregates species distribution data (for marine species in this context) from multiple providers into a single central search point. OBIS was using the Catalogue of Life (CoL) as its “taxonomic backbone” (method for organising its data holdings) and, again like GBIF, had come up against the problem of what to do with names not recognised in the then-latest edition of CoL, which was at the time less than 50% complete (information on 884,552 species). A solution occurred to me: since genus names are maybe only 10% as numerous as species names, and every species name includes its containing genus as the first portion of its binomial name (thanks, Linnaeus!), an all-genera index might be a tractable task (within a reasonable time frame) where an all-species index was not, and still be useful for allocating incoming “not previously seen” species names to an appropriate position in the taxonomic hierarchy. OBIS, in particular, also wished to know whether species (or more exactly, their parent genera) were marine (to be displayed) or nonmarine (hidden), and similarly whether they were extant or fossil. Sensing a challenge, I offered to produce such a list, estimating in my mind that it might require 3 months of full-time work, or the equivalent 6 months of part-time effort, to complete and supply back to OBIS for their use.

To cut a long story short… the project, which I christened the Interim Register of Marine and Nonmarine Genera or IRMNG (originally at CSIRO in Australia, now hosted on its own domain “www.irmng.org” and located at VLIZ in Belgium) has successfully acquired over 480,000 published genus names, including valid names, synonyms and a subset of published misspellings, all allocated to families (most) or higher ranks (remainder) in an internally coherent taxonomic structure, most with marine/nonmarine and extant/fossil flags, all with the source from which I acquired them, sources for the flags, and more; also for perhaps 50% of genera, lists of associated species from wherever it has been convenient to acquire them (Catalogue of Life 2006 being a major source, but many others also used). My estimated 6 months has turned into 10 years and counting, but I do figure that the bulk of the basic “names acquisition” has been done for all groups (my estimate: over 95% complete) and it is rare (although not completely unknown) for me to come across genus names not yet held, at least for the period 1753-2014 which is the present coverage of IRMNG; present effort is therefore concentrated on correcting internal errors and inconsistencies, and upgrading the taxonomic placement (to family) for the around 100,000 names where this is not yet held (also establishing the valid name/synonym status of a similar number of presently “unresolved” generic names).

With the move of the system from Australia to VLIZ, completed within the last couple of months, there is the facility to utilise all of the software and features developed at VLIZ that currently run WoRMS, the World Register of Marine Species, and its many associated subsidiary databases, as well as (potentially) to look at forming a distributed editing network for IRMNG in the future, as is already the case for WoRMS, presuming that others see value in maintaining IRMNG as a useful resource, e.g. for taxonomic name resolution, detection of potential homonyms both within and across kingdoms, and generally acting as a hierarchical view of “all life” to at least genus level. A recently implemented addition to IRMNG is to hold ION identifiers (also used in BioNames) for the subset of names where ION holds the original publication details, enabling “deep links” to both ION and BioNames wherein the original publication can often be displayed, as previously described elsewhere in this blog. Similar identifiers for plants are not yet held in the system but could be (for example Index Fungorum identifiers for fungi), in cases where the linked system adds value by giving original publication details and onward links to the primary literature.

All in all I feel that the exercise has been of value not only to OBIS (the original “client”) but also to other informatics systems such as GBIF, Encyclopedia of Life, Atlas of Living Australia, Open Tree of Life and others, which have all taken advantage of IRMNG data to add to their systems, either for the marine/nonmarine and extant/fossil flags or as an addition to their primary taxonomic backbones, or both. In addition it has allowed me, the founding IRMNG compiler, to “scratch the taxonomic itch” and finally flesh out what is meant by statements that a certain group contains x families or y genera, and what these actually might be. Finally, many users of the system via its web interface have commented over time on how useful it is to be able to input “any” name, known or unknown, with a good chance that IRMNG can tell them something about the genus (or the possible genus options, in the case of homonyms) and, in many cases, the species as well, and discriminate extant from fossil taxon names, something not yet offered to any significant extent by the current Catalogue of Life.

Readers of iPhylo are encouraged to try IRMNG as a “taxonomic name resolution service” by visiting www.irmng.org and are, of course, welcome to contact me with suggestions of missed names (concentrating at genus level at the present time) or any other ideas for improvement (which I can then submit for consideration to the team at VLIZ who now provide the technical support for the system).


          Media, Genealogy, History        
by
Matthew G. Kirschenbaum
1999-03-15

Remediation is an important book. Its co-authors, Jay David Bolter and Richard Grusin, seem self-conscious of this from the outset. The book’s subtitle, for example, suggests their intent to contend for the mantle of Marshall McLuhan, who all but invented media studies with Understanding Media (1964), published twenty years prior to the mass-market release of the Apple Macintosh and thirty years prior to the popular advent of the World Wide Web. There has also, I think, been advance anticipation for Remediation among the still relatively small coterie of scholars engaged in serious cultural studies of computing and information technology. Bolter and Grusin both teach in Georgia Tech’s School of Literature, Communication, and Culture, the academic department which perhaps more than any other has attempted a wholesale make-over of its institutional identity in order to create an interdisciplinary focal point for the critical study of new media. Grusin in fact chairs LCC, and Bolter, who holds an endowed professorship at Tech, is a highly-regarded authority for his work on the hypertext authoring system StorySpace and for an earlier study, Writing Space: The Computer, Hypertext, and the History of Writing (1992), to which Remediation is a sequel of sorts. [Bolter’s book is reviewed by Anne Burdick in ebr, eds.] The book therefore asks to be read and received as something of an event, an extended statement from two senior scholars who have been more deeply engaged than most in defining and institutionalizing new media studies.

Much of Remediation’s importance is lodged in the title word itself. New media studies has been subjected to a blizzard of neologisms and new terminologies - many of them over-earnest at best - as scholars have struggled to invent a critical vocabulary adequate to discuss hypertexts and myriad other artifacts of digital culture with the same degree of cogency found in a field such as film studies. Bolter and Grusin clearly want “remediation” (the word) to stick, and the volume’s rhetorical momentum is often driven by simple declarative clauses like “[b]y remediation we mean…” and “[b]y remediation we do not mean…” Though the cumulative weight of these phrasings helps remind readers that they are in the presence of two critics in full command of their subject matter, the repetitive stress on “remediation” also produces some odd moments, such as this one from the preface:

It was in May 1996, in a meeting in his office with Sandra Beaudin that RG was reported to have coined the term remediation as a way to complicate the notion of “repurposing” that Beaudin was working with for her class project. But, as most origin stories go, it was not until well after the fact, when Beaudin reported the coinage to JB, who later reminded RG that he had coined the term, that the concept of “remediation” could be said to have emerged. Indeed, although the term remediation was coined in RG’s office, neither of us really knew what it meant until we had worked out together the double logic of immediacy and hypermediacy. (viii)

[ Bolter’s more recent collaboration with Diane Gromala, Windows and Mirrors (2003) applies the concept of immediacy/hypermediacy to graphic design. See Jan Baetens’ ebr review ]

This is writing that itself bears the mark of multiple mediations, from the willfully passive construction of its syntax (“that RG was reported to have coined…”) to the flutter of the keyword remediation from an italicized presentation to scare quotes and back again. I dwell on such details not to be clever, but rather because those visible stress-marks, and the placement of this vignette in the volume’s preface (where it is labeled, tongue-in-cheek, as an “origin story”) both underscore the extent to which language itself is about to be recycled and repurposed in the project that follows. For remediation is not in fact a neologism or a new coinage but rather a paleonym, a word already in use that is recast in wider or different terms: remediation is a word commonly encountered in business, educational, and environmental contexts to denote remedy or reform. Bolter and Grusin do acknowledge this later in the book by discussing remediation’s usage by educators (59), but “remediation” (the word’s) status as a paleonym itself becomes questionable when we realize that Bolter and Grusin clearly expect Remediation (the book) to perform exactly this kind of reformative work - most broadly as a corrective to the prevailing notion of the “new” in new media.

For all of this anxiety surrounding its presentation and pedigree, remediation in Bolter and Grusin’s hands is a simple (but not simplistic) concept, and therein lies its appeal:

[W]e call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media. What might seem at first to be an esoteric practice is so widespread that we can identify a spectrum of different ways in which digital media remediate their predecessors, a spectrum depending on the degree of perceived competition or rivalry between the new media and the old. (45)

This is, as Bolter and Grusin acknowledge, an insight also shared by McLuhan, who famously declared that the first content of any new medium must be a prior medium. But whereas McLuhan once divided the media sphere into “hot” and “cool” media based on the degree of participation they required (non-participatory media were, somewhat paradoxically, “hot and explosive” in McLuhan’s lexicon, while interactive media were termed “cool”), Bolter and Grusin parse various media forms against what they term the logics of immediacy and hypermediacy.

Immediacy denotes media that aspire to a condition of transparency by attempting to efface all traces of material artifice from the viewer’s perception. Immersive virtual reality, photo realistic computer graphics, and film (in the mainstream Hollywood paradigm) are all examples of media forms that obey the logic of immediacy - the expectation is that the viewer will forget that he or she is watching a movie or manipulating a data glove and be “drawn into” the environment or scene that is depicted for them. Hypermediated phenomena, by contrast, are fascinated by their own status as media constructs and thus call attention to their strategies of mediation and representation. Video games, television, the World Wide Web, and most multimedia applications subscribe to the logic of hypermediacy. And, as Bolter and Grusin are quick to claim, “our two seemingly contradictory logics not only coexist in digital media today but are mutually dependent” (6). This co-dependency inaugurates what they refer to as the “double logic of remediation,” which finds expression as follows: “Each act of mediation depends on other acts of mediation. Media are continually commenting on, reproducing, and replacing each other, and this process is integral to media. Media need each other in order to function as media at all” (55).

Once articulated, the ideas behind remediation are quickly grasped and readers may find themselves seeing (I stress because Bolter and Grusin’s critical orientation is overwhelmingly visual) remediations everywhere. It also becomes clear, as Bolter and Grusin themselves suggest, that remediation is the formal analogue of the marketing strategy commonly known as repurposing, whereby a Hollywood film (say) will spawn a vast array of product tie-ins, from video games to action figures to fast-food packages and clothing accessories. This practice raises a daunting set of questions for those concerned with matters of textual theory, for if we grant that a film (or an action figure) can be a text, we are then obliged to re-evaluate much of what we think we know about textual authority and textual transmission in this late age of mechanical reproduction - by what formal, material, or generic logic could we define the ontological horizon of the repurposed text known as “Star Wars?” Likewise, when one refers to “Wired,” is one speaking of just the printed newsstand version of the magazine or is one speaking of the multivalent media property that now cultivates a variety of vertically integrated distribution networks, including: an imprint for printed books about cyberculture, HardWired; an online forum and Web portal, HotWired; separate Web presences for the magazine itself as well as affiliated online ventures (which include WiredNews, LiveWired, and Suck); and two search engines, HotBot and NewsBot. That recognition of this broader media identity is central to any discussion of Wired the magazine is dramatized by the fact that as of this writing the URL http://www.wired.com deflects visitors from the site of the magazine proper to the aforementioned WiredNews - which only then offers a subordinate link to the Web presence for the newsstand version of Wired (which is itself of course an electronic remediation of the printed content).
In retrospect, it seems odd that Bolter and Grusin do not make more of Wired, both because of the complex media ecology outlined above and because in it we have an artifact of print culture that, largely on the basis of graphic design and strong marketing, has remediated the experience of “cyberspace” so successfully that the word “wired” itself has become a popular synecdoche for the Information Age.

Some extended case studies of that sort (MTV would have been another natural) might have added much to the book, but instead its middle section is taken up by more generic surveys of various media forms - computer games, photo realistic graphics, film, television, virtual reality, the World Wide Web, and others - and these are a mixed lot. The chapters on computer games, graphics, television, and film are generally strong. Bolter and Grusin have an enviable feel for the subtle relationships that obtain between media forms, and they are at their best during moments such as a discussion of Myst when they argue convincingly that the game - frequently remarked upon for the “realism” of its graphics - succeeds not via the logic of immediacy, but rather by remediating the immediacy of Hollywood film; they press the point home by observing that there are in fact hundreds of examples of video games adapted from mainstream films (98). Their argument about virtual reality’s lineage in film is equally suggestive: “One way to understand virtual reality, therefore, is as a remediation of the subjective style of film, an exercise in identification through offering a visual point of view… In their treatments [ Brainstorm, Lawnmower Man, Johnny Mnemonic, Disclosure, Strange Days ] Hollywood writers grasped instantly (as did William Gibson in his novel Neuromancer) that virtual reality is about the definition of the self and the relationship of the body to the world” (165-166). What is compelling here is not so much the notion that virtual reality is about “the definition of self and the relationship of the body to the world,” but rather the confidence with which Bolter and Grusin are able to identify a specific filmic technique - the subjective camera, prominent in all the titles mentioned above - and align it with the popular rhetoric surrounding virtual reality, thereby foregrounding the artificial imperatives of both media forms.

But at times the middle chapters also seem sparsely developed. That same chapter on virtual reality, for example, is only seven pages long (including illustrations), and it includes no discussion of any functional VR systems beyond mention of research by Georgia Tech’s Larry Hodges. Likewise, the only electronic artist to receive any individual treatment in the chapter on digital arts is Jeffrey Shaw, who is perhaps best known for an installation piece entitled The Legible City, now a decade old. At other times, elements of the historical record which it would have been desirable to have on hand are simply missing. A discussion of the video game Pong, for example, offers the tantalizing suggestion that its fundamentally graphical orientation, compared to contemporary UNIX and DOS command line interfaces, “suggested new formal and cultural purposes for digital technology” (90). Yet we are not given any specific date for Pong’s first release, or for the releases of its many subsequent versions and variations (which it would have been interesting to track across different platforms); nor do we learn who first programmed the game, or where, or why. Absences of this kind detract from the usefulness of the middle sections as basic references for students of new media.

Given the scope of the attempted coverage in Remediation’s middle sections - where the topics range from Renaissance painting and animated film to telepresent computing and “mediated spaces” (e.g., Disneyland) - lapses of the kind I note above are perhaps inevitable. And indeed, very early on in the book Bolter and Grusin offer a familiar kind of disclaimer: “We cannot hope to explore the genealogy of remediation in detail. What concerns us is remediation in our current media in North America, and here we can analyze specific texts, images, and uses” (21). But this emphasis on the “specific” is itself a scholarly move that, as Alan Liu and others have demonstrated, bears with it deep implications for any critical project conducted under the broad sign of cultural criticism, a point to which I will return below.

But some remaining features of the book deserve notice first: Remediation is lovingly illustrated, and Bolter and Grusin deserve credit for the care with which the images were selected and reproduced. The juxtaposition of the front page of USA Today’s printed edition with the home page of USA Today on the web (40-41), or the comparison of stills from a 1980 CNN air check with a more contemporary broadcast format from CNN in 1997 (190-191), does as much to underscore the essential rightness of the core remediation concept as any number of expository passages in the text. The first and third sections of the book also include reference pointers to relevant passages from the survey of media forms in the middle section - these are “the printed equivalent of hyperlinks” (14), and some readers may find them occasionally convenient. Remediation’s third and final section examines logics of remediation in relation to contemporary conceptions of the self (readers who have already done their homework with Sandy Stone or Sherry Turkle may find themselves skimming these pages). The bibliography, with about 175 entries, is useful. And finally, there is the obligatory glossary; it will mark a significant milestone in the maturity of new media studies as a discipline when one can publish a book in the field without feeling the need to define for the lay reader “virtual reality” or “MOO” (or “media,” for that matter: “Plural of medium” [274]).

Near the end of the book, Bolter and Grusin offer an account of the media coverage of Princess Diana’s funeral procession: “Because the funeral itself occurred for American audiences in the middle of the night, CBS decided to run a videotape of the whole ceremony later in the morning. At that same time, however, the procession was still carrying Diana’s body to its final resting place. The producers of the broadcast thus faced the problem of providing two image streams to their viewers” (269). The solution CBS adopted was to divide the screen into two separate windows, one displaying the funeral ceremony and the other the procession. Bolter and Grusin point out that this move marks a shift from the desire for immediacy and “authenticity” of experience that normally governs live TV to a logic of hypermediacy that places the emphasis on the media apparatus itself; but the more interesting point, I think, is that this particular broadcast solution was viable because CBS could count on its audience having already been exposed to bifurcated screen-spaces through the assimilation of the computer desktop and its attendant interface conventions into the cultural mainstream. Bracketing technical considerations, it seems reasonable to argue that CBS could not have opted for the two-window solution in an earlier era of television because the visual environment would have simply been too alien from their viewers’ expectations. Bolter and Grusin go on to note that, “other and perhaps better examples (both of hypermediacy and remediation) will no doubt appear, as each new event tops the previous ones in its excitement or the audacity of its claims to immediacy” (270). 
Had this closing chapter been written today, Bolter and Grusin would have almost certainly chosen as their example the multi-window displays that facilitated the so-called “surreal” split-screen television coverage of the House Judiciary Committee’s impeachment hearings and Operation Desert Fox (the American and British air strikes on Iraq) in December of 1998.

That the conflicting logics of immediacy (in the desire for live “eyewitness” coverage of two major news events transpiring simultaneously) and hypermediacy (in the spectacle of video feeds from Washington and Baghdad both on the screen at the same time, each in a separate content window, the display filled out by a lurid background “wallpaper” graphic) manifested themselves so dramatically in one of the most notable media events of recent memory surely confirms the usefulness of remediation as a critical armature for contemporary media studies. But it is worth noting that Bolter and Grusin explicitly describe their technique in Remediation as genealogical (“a genealogy of affiliations, not a linear history” [55]), and therefore I’d like to close this review with some additional words about genealogy, and its suitability to new media studies by contrast with other varieties of historicism.

Genealogy as a critical mode comes to us from Foucault; it is most closely associated with his later books such as Discipline and Punish and the three volumes of the History of Sexuality. Genealogy is distinct from Foucault’s other famous method, archeology, deployed most fully in works like The Order of Things and The Birth of the Clinic. Foucault’s most sustained articulation of genealogy is to be found in a 1971 essay entitled “Nietzsche, Genealogy, History,” whose opening lines are these: “Genealogy is gray, meticulous, and patiently documentary. It operates on a field of entangled and confused parchments, on documents that have been scratched over and recopied many times” (76). A few pages later, we read:

Genealogy does not resemble the evolution of a species and does not map the destiny of a people. On the contrary, to follow the complex course of descent is to maintain passing events in their proper dispersion; it is to identify the accidents, the minute deviations - or conversely, the complete reversals - the errors, the false appraisals, and the faulty calculations that gave birth to those things that continue to exist and have value for us; it is to discover that truth or being does not lie at the root of what we know and what we are, but the exteriority of accidents. (81)

Bolter and Grusin acknowledge this same essay, and indeed quote from it in their first footnote. Yet it seems questionable how much the “genealogy” of Remediation really resembles what Foucault imagined by the term. True, Bolter and Grusin’s narrative of media forms is not linear (or rather, it is not chronological), but their narrative is also “documentary” only in the most casual sense and it operates at a level of detail far removed from Foucault’s trademark archival research. Indeed, of the many books published on topics related to new media studies in recent years, none of them, it seems to me, has yet matched the level of documentary (archival) research evident in a work such as Michael A. Cusumano and David B. Yoffie’s Competing on Internet Time: Lessons from Netscape and its Battle with Microsoft (1998). A typical passage from Cusumano and Yoffie (who are business professors) reads like this:

In August 1994, the Seattle-based start-up Spry became the first company to market a commercial version of Mosaic. At least half a dozen non-NCSA-based browsers were also available or in the works. In addition to Netscape’s Navigator, competitors also included Cello, developed at Cornell; BookLink’s InterNet Works; the MCC consortium’s MacWeb; O’Reilly and Associates Viola; and Frontier Technologies’s WinTapestry. By early 1995, PC Magazine declared that 10 Web browsers were “essentially complete”[…] In April 1995, Internet World counted 24 browsers, and by the end of the year CNET had found 28 browsers worthy of review. Very few of those products had any appreciable market share. (95-96)

How soon we forget. Cello, WinTapestry, even Mosaic. Where are they now? Whole generations of software technologies (compressed within the week- and month-long micro-cycles of “Internet Time”) are already lost to us. But surely this level of detail - conspicuous in the InterCapped names of bygone products and technologies, punctuated by the antiquarian version numbers of specific hardware and software implementations - ought to be a key element of any historical method, genealogical or otherwise, that critics working in new media studies bring to bear.

Let me suggest that the start-up work of theorizing digital culture has by now largely been done, and that serious and sustained attention to archival and documentary sources is the next step for new media studies if it is to continue to mature as a field. Friedrich Kittler’s Discourse Networks 1800/1900 already does some of this work. And we could also do worse than Internet Time for a summation of the pace of scholarship in new media studies to date, with fresh books (books: the medium signifies) on matters cyber, virtual, or hyper appearing almost weekly. But where in all this are the careful analyses of the white papers and technical reports (for example) that must lie behind the changing broadcast strategies Bolter and Grusin point to at CNN? Where are the interviews with the network’s executives and with their media consultants and market analysts? Rather than speculate broadly about computer graphics or theories of digital reproduction, why not perform a detailed case study of one particular data format, such as JPEG or GIF (both of which have a fascinating history) or a particular software implementation such as QuickTime, which has been enormously influential in multimedia development as it has evolved through multiple versions and generations? Certainly there are practical constraints that might militate against such projects: would Apple unlock its technical reports and developers’ notes on QuickTime for a scholar writing a book? It is hard to know, but: Netscape did it for Cusumano and Yoffie.

A few more thoughts in this vein. Compared to other scholarly fields, new media studies has thus far operated within relatively limited horizons of historicism. Historical perspective in books on digital culture generally takes one of two forms: it is either broadly comparative or it is transparently narrative. Bolter’s earlier book, Writing Space, is a classic example of the former mode, contextualizing hypertext (very usefully) within a much longer history of writing. Sandy Stone’s pages describing the final days of the Atari Lab in The War of Desire and Technology at the Close of the Mechanical Age are an example of the latter narrative mode, as is the writing in such pop-history books as Simon and Schuster’s Where Wizards Stay Up Late: The Origins of the Internet. But both the comparative and the narrative modes encourage a relatively casual kind of historiographic writing. N. Katherine Hayles’ just-published How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, which I am reading now, is perhaps the beginning of something new, offering a more rigorous kind of historical inquiry. But Hayles still does not approach the level of self-reflexivity evident in a work like James Chandler’s England in 1819: The Politics of Literary Culture and the Case of Romantic Historicism, published last year by Chicago, in which Chandler historicizes history itself as a peculiarly Romantic category of knowledge, while simultaneously undertaking a meticulous investigation of the events of a single pivotal year in the development of British Romanticism. A brief passage from the preface, to suggest the flavor of the volume:

Within part 1, the first section, “Writing Historicism, Then and Now,” tries to establish a way of talking about “dated-specificity” in literary-cultural studies that makes patent the repetition between the “spirit of the age” discourse of British Romanticism and the contemporary discourse of the “return to history” in the Anglo-American Academy. The second section…moves from the notion of historical culture implicit in that “dated specificity” to consider the representation practices that such a notion of culture presupposes or demands… Then, having established how one might understand England in 1819 as a historical case, its literature as a historicizing casuistry, I turn…to explicate a series of works, all produced or consumed in that year, as cases in respect to that larger frame of reference. (xvi-xvii)

Chandler is ultimately ambivalent about the academy’s current insistence on “dated specificity” (including the sort I have been calling for above), as is his fellow-Romanticist Alan Liu in “Local Transcendence: Cultural Criticism, Postmodernism, and the Romanticism of Detail,” a seminal essay which ought to be required reading for anyone working in a field of cultural study, including media studies. Liu makes the telling point that recent critical-historical modes, from Foucauldian genealogy to cultural anthropology and the literary New Historicism, all thrive on an unexamined rhetoric that consecrates what he terms the “virtuosity of the detail” (80), a rhetoric which Liu is then able to convincingly align with the most familiar tenets of Romantic “local” transcendence, such that: “insignificance becomes the trope of transcendent meaning” (93).

Liu’s critique is too complex and finely-developed to go into here any further, but it underscores a fundamental crisis in new media studies today: the field, having really flourished only since the early nineties, has on the one hand not yet had occasion to undertake the kind of detailed case histories I advocate above; yet case studies (their “dated specificity”) are, on the other hand, already themselves being historicized as of a particular institutional moment. There is, for example, something to be learned from the curious genealogy of the font family known - fateful name - as Localizer (see FontFont). Released in 1996, the Localizer font mimics late-seventies LCD technology in an era when state-of-the-art digital typesetting permits perfect anti-aliasing. (Localizer is of course a classic remediation. Its design notes read in part: “we thought this would be the future, then it wasn’t, but it didn’t matter after all, so here it is.”) Layers and layers of media history are perhaps held in delicate high-res suspension among such exteriorities of accidents. Yet at present, new media studies apparently lacks the deep historical self-reflexivity necessary to undertake a genealogy of the Localizer font that would not also appear naive in the face of a critique such as Liu’s.

All of this is not to be taken as a criticism of Remediation itself, for Bolter and Grusin would surely (and fairly) object that a book engaging the particular issues I have been raising here was simply not the book they set out to write. Nonetheless, the probable success of a book such as Remediation only intensifies the realization that new media studies now faces disciplinary challenges that go far beyond building a critical vocabulary and syntax. I will go on record as saying that in order for new media studies to move beyond its current 1.0 generation of scholarly discourse - a discourse which is still largely, though not exclusively, descriptive and explanatory (all those glossaries!) - the field must make a broad-based commitment to serious archival research. Of course the archive is more likely to be found at venues such as Xerox PARC or IBM or Microsoft or Apple - or in a Palo Alto garage - than at the library and rare book room. But case studies of specific hardware and software implementations, and of the micro-events in the commercial and institutional environments in which those implementations are developed and deployed are absolutely essential if we are to begin achieving deeper understandings of the impact of new media on the culture at large. (An example of one such “micro-event”: March 31, 1998. Netscape Communications Corporation posts the source code for its 5.0 generation of browsers on its public Web site in an attempt to recapture market-share from Microsoft. This, I submit, is the real stuff of which new media history is being made.) Those case studies can - should - be theoretically informed, building on the groundwork of a book such as Remediation.

There is no task more important for new media studies than demystifying the unequivocally material processes of development now at work in the high-tech industry. Doing that work, and doing it right, will take time - archive time, not Internet Time.

>>—> Jan Baetens responds.

——————————————————————

works cited

Bolter, Jay David and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 1998. [Note: All citations in this review are from a pre-press review copy of Remediation, provided by the MIT Press.]

Chandler, James. England in 1819: The Politics of Literary Culture and the Case of Romantic Historicism. Chicago: University of Chicago Press, 1998.

Cusumano, Michael A. and David B. Yoffie. Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft. New York: The Free Press, 1998.

Foucault, Michel. “Nietzsche, Genealogy, History.” The Foucault Reader. Ed. Paul Rabinow. New York: Pantheon Books, 1984. 76-100.

Hafner, Katie and Matthew Lyon. Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon and Schuster, 1996.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.

Kittler, Friedrich. Discourse Networks 1800/1900. Trans. Michael Metteer. Stanford, CA: Stanford University Press, 1990.

Liu, Alan. “Local Transcendence: Cultural Criticism, Postmodernism, and the Romanticism of Detail.” Representations 32 (Fall 1990): 75-113.

McLuhan, H. Marshall. Understanding Media: The Extensions of Man. Cambridge, MA: MIT Press, 1964, 1994.

Stone, Allucquere Rosanne. The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: The MIT Press, 1995.


Metadiversity: On the Unavailability of Alternatives to Information
by
David Golumbia
2003-08-30

Despite its apparent global variety, the Internet is more linguistically uniform than it is linguistically diverse. Almost all Internet traffic is conducted in one of the world’s 100 or so dominant languages, and the great majority takes place in the top 10 or so languages, with English being especially dominant due, among other reasons, to its use in the Internet’s coding infrastructure. Unwritten and nonstandardized languages, which make up the majority of the world’s approximately 6,700 languages, are hardly accounted for in the structure of Internet communication. On the worldwide distribution of languages see Grimes, Ethnologue. The emphasis in today’s Internet development on informatic models and on structured information reveals a bias characteristic not only of modern technological practices but also of modern standardized languages. This bias may limit the Internet’s effectiveness in being deployed for non-informatic uses of language, which have themselves been significantly underplayed in Western technological development and its theory.

1. Informatics

Much cultural analysis of the Internet focuses on information - loosely, what is typically thought of as “content.” That is, the analytic object is what the user sees most prominently on the page, the words he or she types into a chat interface, the articles displayed and/or the aesthetic possibilities of website creation, and the means for transmitting, storing, and replicating them. See, for example, Landow, Hypertext and Hyper/Text/Theory, Lunenfeld, Digital Dialectic, and Bolter and Grusin, Remediation, all of which problematize the informatic focus while more or less endorsing it. Lessig, Code, and Poster, The Mode of Information are the best recent attempts to think critically about the informatic infrastructure. Turkle, Second Self remains a touchstone in thinking critically about the cultural-psychological consequences of the computing environment. Also see the references in Mann, “What Is Communication.” We refer to the advent of the Internet as an “Information Revolution” and to the computing infrastructure as “Information Technology” (IT). All of this suggests that information was somehow what was in need of technological change and that the inefficient transfer of information was an obvious social problem requiring a revolution. But for the human users of the Internet, information is realized, nearly exclusively, via printed language. So in addition to being part of the computer revolution, the Internet needs also to be seen in the wider frames of human languages and language technologies, where the question of the informatic nature of language is much more highly vexed than the IT revolution would make it appear.

Rather than IT, when we talk about what may be socially transformative about the Internet, we focus just as often on social connection and community. So although the Internet is seen

principally as a valuable reservoir of information, its main contribution may one day be seen as a catalyst for the formation of communities. Since communities bound by common interests existed long before computers, it is not as if we have now entered the next stage in the evolution of society (the ‘information age’). Rather, computer meshworks have created a bridge to a stable state of social life which existed before massification and continues to coexist alongside it. (DeLanda, A Thousand Years of Nonlinear History, 254)

Yet Manuel De Landa himself points out that it is standardized languages in general and most of all standardized written English as a medium for technical communication that open the possibility of the Internet itself. “English became the language of computers, both in the sense that formal computer languages that use standard words as mnemonic devices (such as Pascal or Fortran) use English as a source and in the sense that technical discussions about computers tend to be conducted in English (again, not surprisingly, since Britain and the United States played key roles in the development of the technology)” (253).

De Landa sees, rightly at least in a limited sense, that the Internet is becoming a place where it can be possible for “pride of the standard [to be] seen as a foreign emotion, where a continuum of neo-Englishes flourishes, protected from the hierarchical weight of ‘received pronunciations’ and official criteria of correctness” (253-4). But the boundaries of this continuum are narrow precisely because it is neo-Englishes rather than a diversity of world languages that flourish. It is no accident of history that the programming and markup languages that structure the Internet are almost exclusively written in standardized fragments of English, especially as English has been revisioned into the sub-languages of logic and mathematics. I discuss this at greater length in Golumbia, “Computational Object.” Also see Lyotard, Postmodern Condition. It is, rather, characteristic of these historical developments and of their constitutive relation to modern identity itself. It appears, at best, premature to suggest that systems constructed within such highly formalized, abstracted and, in an important sense, fictional structures could be responsible to the texture of human language - a texture whose variety we have scarcely begun to apprehend. Reddy, “The Conduit Metaphor,” remains the single best articulation of the distance between the formalized communicative object and actual linguistic practice; also see Lakoff and Johnson, Metaphors We Live By and Philosophy in the Flesh, and Mann, “What Is Communication.” (But which is at the same time familiar enough that we all understand the degree to which computers continue to fail to do anything very close to producing or understanding spontaneous human language.) For despite the appearance created in no small part by programming languages themselves, human languages need not be abstracted, one-to-one, univocally interpretable, or structured much like systems of propositional logic. 
In fact, these characteristics are rare across the languages we do find in human history and contemporary (but not, in this case, necessarily modern) social life. See Golumbia, “History of ‘Language.’” Rather than a medium for transmitting and sharing human language, then, we must be prepared to see the proliferation of computer networks as part of an ongoing effort to shift the basis of language use toward one appropriate for an informatic economy. As discussed in Golumbia, “Hypercapital.” It is the constitutive power of this phenomenon to which we must learn to be especially attentive.

2. Hypertext

There is a curious lack of fit between the phenomenon called hypertext examined as an abstract or theoretical object, and hypertext as it is used on the Internet. As the term has been advanced in academic writing, hypertext refers to what might be thought of as a multidimensional intra-document linking system that helps us to “abandon conceptual systems founded upon ideas of center, margin, hierarchy, and linearity and replace them by ones of multilinearity, nodes, links, and networks” (Landow, Hypertext, 2). Taking as paradigmatic a particular kind of interactive narrative, including the works of Michael Joyce and the program Storyspace, these theories stress the ways in which “hypertext… provides an infinitely re-centerable system whose provisional point of focus depends upon the reader, who becomes a truly active reader in yet another sense” (Hypertext, 11).

To be sure, these distributive, informational networks do exist, but it is also fair to say that they are not the rule in terms of contemporary uses of hypertext. As the Web has matured, another and perhaps much more obvious usage of hypertext dominates, in which stability, centering, order, and logic are not necessarily resisted but may in fact be reinforced. Today’s web pages use hypertextual linking primarily to drive navigation in and among complete, stable, “sticky” application interfaces. This is what drives both standard and personalized portal pages. A personalized news page on a portal site such as Yahoo!, for example, consists of headlines in many areas of world and local news, divided into categories and subcategories that are intensely logical, that are in fact derived from a culturally-preconstructed taxonomy from which dissent is difficult to conceptualize, let alone practice. So the fact that some kinds of interesting and potentially transformative constructions are possible within a given medium should not distract us from understanding how the medium is actually being used, especially when these uses are very large-scale and very directly implicated in the production of contemporary subjectivities.

On our Web, HTML and hypertext are used to create rich, absorbing navigational experiences that instruct the user to stay where they are, with only occasional side glances to alternate information sources. Organizations focus workers’ daily experiences around wide-area websites, confirming exactly the identitarian structures that hypertext might be thought to resist. Every student, teacher, office worker, engineer, professor is compelled to have a relation to these stable, compelling, relentlessly logical interactive presences, in which documents are not so much intercut with each other as presented in orderly, menu-based groups.

In fact, it is odd that, instead of HTML, we speak of hypertext when we try to locate the salient analytic object in digital textuality. On reflection, HTML really does define what happens on the Web to an astonishingly large degree, and HTML is far more defined and linear than the word “hypertext” would suggest. HTML is typically used to structure the page, and the user’s experience of the page, so as to lead the user in a particular direction with particular goals in mind. That these goals are so often commercial and so often transaction-oriented seems to expose, to literalize, the most profound aspects of the Marxist critique of ideology in language. HTML surrounds “written” electronic language with a literal meta-language, whose goal is overt and unavoidable: to structure explicitly the page’s functions.

While the ability of HTML to create links between documents and parts of documents is critical to the Web, it is also merely one of a large set of programmatic features available to the web page writer, all of whose purpose is to help create structure. To some degree this is content-neutral; obviously no particular paragraph of writing is barred from being surrounded with <p> and </p> tags. But the entire set of HTML tags is deliberately built up from a system whose purpose is to structure information for cataloging and retrieval: to mark each and every piece of linguistic source data with some kind of markup tag that allows post-processing and redelivery. In this way language is constrained within the informatic paradigm on the Internet to a surprising degree.
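The point can be made concrete with a small sketch. The HTML fragment and class below are invented for illustration (they come from no real website); what they show is simply that once prose is wrapped in markup, a machine can catalog and retrieve every tagged piece of it, using nothing more than Python's standard html.parser:

```python
from html.parser import HTMLParser

# A fragment of "written" language wrapped in markup: every span of
# prose is labeled so that software can catalog and retrieve it.
DOC = """
<div class="article">
  <h1>Metadiversity</h1>
  <p>Almost all Internet traffic is conducted in a handful of languages.</p>
  <p>Markup makes each piece of prose addressable for post-processing.</p>
</div>
"""

class ParagraphCollector(HTMLParser):
    """Collects the text content of every <p> element."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        # Only text inside a <p> element counts as retrievable "content."
        if self.in_p:
            self.paragraphs[-1] += data

collector = ParagraphCollector()
collector.feed(DOC)
print(collector.paragraphs)
```

The prose itself is untouched, but it has been made into indexed data: the parser never reads the sentences, only their labels.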

3. Structured Information

HTML (HyperText Markup Language) is typically thought of as a kind of design tool, and of course it is. But HTML is also a tool for structuring information: for applying general metadata to all the elements in a presentation set. “Structured information is information that is analyzed…. Only when information has been divided up by such an analysis, and the parts and relationships have been identified, can computers process it in meaningful ways” (DeRose, “Structured Information,” 1). HTML was in fact written originally by Tim Berners-Lee as a kind of simplified version of a language in which contents are explicitly tagged with meaningful metadata, called SGML for Standard Generalized Markup Language. See the World Wide Web Consortium’s (W3C) web pages on HTML, e.g., http://www.w3.org/MarkUp. SGML was developed for engineering and military documentation, in which it is assumed that every piece of information needs to be indexed for rapid retrieval and cross-matching. Robins and Webster, Times of the Technoculture, provides an excellent overview of some of the direct military interests involved in the information revolution; also see De Landa, War, and Poster, Mode of Information.

Today HTML is used to apply structure to the general linguistic environment of the Internet. The primary structuring use of even the specific function known as hyperlinking is not that of connecting disparate documents or alternate paths through multidimensional content. Rather, linking is used for menus and other navigational elements. The big tabs at the top of the Amazon.Com page that allow the user to choose among Books, Video, and Lawn Tools are the meat of hypertext. The categories themselves are not arbitrary, but instead are generated out of much more highly-structured data environments (databases). See Poster, Mode of Information, especially Chapter Three, “Foucault and Databases: Participatory Surveillance.” These tabs can even be thought of as a kind of exposure of the metadata environment of the website. In a commercial Web operation like Amazon.Com, this activity is inherently interactive with the user’s patterns of spending, such that the entire structure of the hypertextual experience is laid in place by explicit logical programming rules, which operate ideally out of the realm of conscious comprehension. You don’t know why the website seems to reflect categories that occasionally grab your interest or reviews of books that you have been wondering about.

The inherent structuring of HTML has been built on in recent technology by the advent of increasingly powerful dynamic web page generation language standards (such as Java Server Pages, Active Server Pages, and ColdFusion pages - each of which can be identified by the extensions .jsp, .asp, and .cfm, respectively, in web page URLs). These technologies allow the incorporation of database content directly into what look like static HTML documents. They are very literally the language out of which the Web is largely delivered, for academic journals no less than e-commerce sites. Because these meta-rules are applied within the text of the apparent display language, they further blur the distinction that allows us to think of source code as metalinguistic and web page content as ordinary language - content.

Currently, the W3C has nearly finished the articulation of XHTML, a set of standards that allow all HTML content to be rewritten within XML-based contexts. XML stands for eXtensible Markup Language, and represents an explicit attempt to replicate the meta-linguistic tagging properties of SGML widely throughout the Internet (XML is actually a simplified form of SGML, although it has been extended beyond this original base). The standard pocketbook definition (literally) says that XML is a “meta-language that allows you to create and format your own document markups…. Thus, it is important to realize that there are no ‘correct’ tags for an XML document, except those you define yourself” (Eckstein, XML, 1).
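Eckstein's point that there are "no correct tags" except those you define can be shown directly: two invented schemas below mark up the same fact under entirely different tag vocabularies, and a standard XML parser handles both without preferring either. Python's stdlib xml.etree is used; the element names are arbitrary.

```python
import xml.etree.ElementTree as ET

# The same fact, tagged under two self-defined vocabularies.
doc_a = "<catalog><disc><artist>Yes</artist></disc></catalog>"
doc_b = "<music><album><band>Yes</band></album></music>"

root_a = ET.fromstring(doc_a)
root_b = ET.fromstring(doc_b)

# Neither set of tags is more "correct"; each parses by its own structure.
print(root_a.find("./disc/artist").text)  # Yes
print(root_b.find("./album/band").text)   # Yes
```

Interoperability between such documents comes not from XML itself but from agreement on a shared tag set, which is why the standards committees discussed below matter so much.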

That is, XML is a set of standards for expressing metadata in any form chosen by the programmer. Any viable set of categories should inherently be able to be realized in an XML implementation. In practice, of course, XML documents, especially their large-scale programmatic elements, are written exclusively in English (although the standard allows content to be written in any language, and some levels of tagging are certainly written today using European languages). More importantly, XML is rarely used by individuals or even community groups to create ad-hoc data structures; to the contrary, XML is most widely used by businesses to structure content for electronic commerce, and also for more directly technological applications. In these applications a standards committee drawn from members of prominent businesses and institutions within the appropriate domain is convened. The committee issues successive standards, which dictate exactly how content issued within the industry should be marked up. The neutral standards-based web page known as XML.org promotes itself as “The XML Industry Portal,” and offers pointers to standards for using XML within social domains as widely dispersed as Data Mining, Defense Aerospace and Distributed Management. See http://www.xml.org. The Oasis-Open project at http://www.oasis-open.org is currently the locus for the promotion of Structured Information on the Internet. In fact, not surprisingly, SGML itself has survived in no small part due to its applicability in military engineering projects, where parts, features and functions are categorized to an exorbitant level.

In practice, then, the proliferation of XML and XML-like markup strategies suggests a remarkable degree of institutionally-controlled standardization. By incorporating display standards like XHTML into current web pages, developers can ensure the thorough categorization of every aspect of Web content. Rather than a page, the screen breaks down into more or less discrete units, served up in interaction with masses of data and statistical sampling that are by definition not available for the user to examine or understand. Instead, through such probability- and category-driven conceptions of “personality,” subjectivity itself is presented whole, pre-analyzed, organized, almost always around a central metaphorical goal, usually an economic one. For examples see Birbeck, Duckett, and Gudmundsson, Professional XML, and Fitzgerald, Building B2B Applications. The user is free to choose whether she is interested in Sports or Finance, Hockey or Baseball, the Detroit Red Wings or the Seattle Seahawks. But she is hardly free to reassemble the page according to different logics, different filtering approaches, applying critical logic or any sort of interpretive strategy to the AP Newswire or Dow Jones news feed. This informatic goal instances itself in every aspect of the web page presentation, cultural-cognitive streambeds in which the water of thought almost unavoidably runs. It is not clear that our society has effective mechanisms for evaluating the repackaging of our language environment in this way, in the sense of allowing a large group of technicians and non-technicians to consider deeply its motivations and consequences.
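The "probability- and category-driven" assembly of a personalized page described above can be sketched as a simple rule over a user's recorded choices: the page's slots are filled with the categories the user has clicked most. The clickstream data and slot count here are invented.

```python
from collections import Counter

# Hypothetical record of a user's category choices.
clickstream = ["Hockey", "Hockey", "Finance", "Hockey", "Finance", "Baseball"]

def assemble_page(clicks, slots=2):
    """Fill the page's content slots with the user's most-clicked categories.
    The user chooses among categories, but never sees or alters this rule."""
    return [cat for cat, _ in Counter(clicks).most_common(slots)]

print(assemble_page(clickstream))  # ['Hockey', 'Finance']
```

Even this toy version shows the asymmetry the passage emphasizes: the user's freedom is confined to choosing within the categories, while the filtering logic itself stays out of view.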

4. Metadiversity

Metadiversity is a term that fails to mean what we need it to. The term has been introduced by information scientists and conservation biologists to indicate the need for metadata resources about biological diversity, no doubt a critical requirement. But the term metadiversity suggests something else - a diversity of meta-level approaches, or even more directly, a diversity of approaches, of schemes, of general structuring patterns. Seen from the perspective of linguistic history, the linguistic environment of the Internet seems to offer not a plethora of schemes but a paucity of them, clustered around business-oriented and even military-based informatic uses. The language technology developed for the Web is primarily meant to make it easy to complete a transaction, close a deal, accept a payment; it is less clearly meant to facilitate open and full speech, let alone to foster a true diversity of approaches to language.

The history of language is rich with examples of structural alternatives to our current environment. These examples include phenomena found in what are today known as “polysynthetic” and other primarily oral languages. Such languages display grammatical and lexical differences from English, from European languages, and even from some modern non-European languages like the dominant languages of Asia. The languages stand in ambiguous relation to the kind of form/content split that has ground its way thoroughly into Western language practice, so much so that no less a linguist than Roy Harris can suggest that the triumph of computers represents the triumph of a “mechanistic conception of language” (Language Machine, 161). This is not some isolated ideology that can be contained within the technical study of linguistics (whose participation in the system of disciplinary boundaries is already highly problematic), though its presence in linguistics is clear and unambiguous. It extends outward in every way to the culture at large, providing models of subjectivity for a great percentage of those who provide so-called intellectual capital for international business. The ideology precisely provides form for subjectivity, suggesting to many normative individuals that existence itself is subject to binary thinking and unitary pursuit of goals.

In the most curious way, this ideology reveals its power through a kind of strong misreading. Just as the term metadiversity is in effect encapsulated against its most direct lexical content, so the apparent homology between modern information networks and biological systems is misrendered, resulting in a highly teleological area of research known loosely as bioinformatics. Thinking broadly of the effects various telematic changes have had on the development of modern consciousness, Gayatri Spivak writes that “the great narrative of Development is not dead. The cultural politics of books like Global Village and Postmodern Condition and the well-meaning raps upon raps upon the global electronic future that we often hear is to provide the narrative of development(globalization)-democratization (U.S. Mission) an alibi” (Critique, 371). The marriage of the deep biological/machine metaphor and the development narrative produces a desire to make information live, to replace and translate the units of biological information (genes) with those of an artificial, formal linguistic system - one which somehow manages always to work in accordance with the needs of transnational capital.

We see the marks of this deep ideology everywhere in culture, where it almost unfailingly works to support the processes of globalist, nationalist development (even where it merely comes down to the more local politics of academic disciplines) and against the claims of marginal, deterritorialized, often de-languaged minority groups. See Grenoble and Whaley, Endangered Languages, and Skutnabb-Kangas, Linguistic Genocide. The deep metaphors at the heart of Chomsky’s writings have lately pushed closer to the surface, so that he now thinks of language in terms of “perfection” and “optimality.” “The language faculty might be unique among cognitive systems, or even in the organic world, in that it satisfies minimalist assumptions. Furthermore, the morphological parameters could be unique in character, and the computational system CHL biologically isolated” (Minimalist Program, 221). This bio-computer, unique in nature (but ubiquitous in modern thought and fiction), must be characterizable in terms of algebraic or otherwise formal rules, which take their form not from human language but from the logical abstractions on which computers are built. It is no surprise that Chomsky’s writing has lately started to use as core terms, in addition to abstract words such as Move and Derivation, terms which sound derived directly from programming languages. The Minimalist Program invokes Select (226ff.), Merge (226ff.), Spell-Out (229ff.), and perhaps most tellingly, Crash (230ff.), which happens “at LF [Logical Form], violating FI [Full Interpretation]” (230) - all terms with wide applicability and use in various domains of computer science and programming languages. (From this small historical distance, it now seems hard to construe as accident that just as the use and development of the computer really takes off at MIT, so does the theory that language should be understood primarily as the stuff that computers understand - symbols manipulated by a logical processor. 
This is made clearest in Huck and Goldsmith, Ideology and Linguistic Theory, and Harris, Linguistics Wars, though it requires some interpretation of either of these works to arrive at the point I am making here. Also see Harris, Language Machine, Lyotard, Postmodern Condition, and Turkle, Second Self. It is also no accident that much of this research was directly funded by the military for the express purpose of getting machines to understand speech, presumably for intelligence purposes. See Harris, Linguistics Wars, and De Landa, War, but also see the footnotes and endnotes of many of the early works of generative grammar in which military funding is explicitly mentioned. It is, for example, an odd note of linguistico-political history that Chomsky’s principal mid-sixties work, Aspects of the Theory of Syntax, “was made possible in part by support extended the Massachusetts Institute of Technology, Research Laboratory of Electronics, by the JOINT SERVICES ELECTRONICS PROGRAM (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract No. DA36-039-AMC-03200(E)…” (Aspects, iv).)

Within the field now called bioinformatics, misapplication of the bio-computer metaphoric cluster runs rampant, often mapped very precisely onto the direct-forward telos of capital. Most familiarly, the term refers to the collection of genetic data in computerized databases - where it already bleeds over into the ambition to read the human genome like a book, like a set of explicit and language-like instructions, again construing language explicitly as an information-transfer mechanism. Eugene Thacker discusses this aspect of the phenomenon briefly in his “Bioinformatics.” Perhaps the genes truly are like human language - in which case they would appear full of systemic possibilities, none of which are realized in similar or equipotent or equally meaningful ways. (Or maybe genes really are informatic, in which case the reverse cautions might also apply.) What would seem plainest on a dispassionate consideration of intellectual history is that there are probably all sorts of ways of processing genetic material that will not be at all obvious or literal. This leads implacably to the conclusion that, because we seem unable to consider what we are doing prior to operating, we are no doubt even now rewriting scripts whose meanings we scarcely know.

Would that this were the only place in which the bio-computer ideology drives us forward. But in fact other programs, also referred to as bioinformatic, grow not unfettered but with the explicit prodding of military and capitalist interests. These programs include efforts to create “living” programs, code that repairs itself, genetic algorithms, “artificial life,” and many others. See, for example, Brown, Bioinformatics, Holland, Adaptation in Artificial Systems, and Vose, Simple Genetic Algorithm. Of course many of these programs prove to be nearly as science-fictional as they sound, over time, but the fact that they exist as serious human propositions at all seems to me quite startling, and quite characteristic of the lack of metadiversity in our linguistic environment. In every case the motivation and the justification proceed hand-in-hand from remarkable, in-built assumptions about the inherent good in exploring basic natural phenomena via simulation and mimicry. I am not suggesting that such research is wrong, although I do hope it is less transgressive than it seems to want to appear. But it seems to me that an alternate perspective, derived from a cultural politics of the biological and linguistic-cultural environments, suggests that these research programs are profoundly ideological extensions of the public mind, rather than dispassionate considerations of possible roles for sophisticated linguistic tools in the human environment.
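For readers unfamiliar with the "genetic algorithms" named above, a minimal sketch may clarify what the bio-computer metaphor looks like in code: candidate bit-strings are selected by "fitness," crossed over, and mutated toward a goal. Every parameter here is arbitrary and illustrative; this is the textbook idea, not any cited system.

```python
import random

random.seed(0)
TARGET_LEN = 12  # fitness is the count of 1-bits; the optimum is all ones

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    """Splice two 'parent' strings at a random cut point."""
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    """Flip each bit with small probability, mimicking genetic mutation."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Evolve a random population under selection pressure.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # keep the fitter half (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

Note how thoroughly the biological vocabulary (fitness, crossover, mutation) has been translated into formal operations on symbols: the metaphor does all the work of making the procedure seem "alive."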

From such a perspective, in fact, what is striking about our world is not the attainments of our one linguistic society but the multiple, variant approaches to social reality encoded in the many thousands of human languages and cultures over time. As emblematic as the Internet is, it can be no more representative of the language environment than are the many linguistic technologies that have been systematically pressed out of modern awareness - and the fact that it is so heavily promoted by institutions of authority should, despite all the Internet’s attractions, give us pause. Reflecting on the natural world it seems hard to understand how human beings could come to any other conclusion but that part of our responsibility is to preserve so that we might understand more deeply the many natural processes that have proven themselves to be, so far, largely beyond our ken. Instead, capital insists on the vivisection - or just outright destruction - of these biological and environmental alternatives. Less well-known is the plight of linguistic variety itself, the pressure exerted by English and standardization and the networked reliance on programming and markup languages on those existing remnants of the world’s lost languages. See Crystal, Language Death, Grenoble and Whaley, Endangered Languages, Maffi, Biocultural Diversity, and Skutnabb-Kangas, Linguistic Genocide. These languages must not be thought of as simple “formal variants,” alternate ways of approaching the same underlying material (which a computational perspective might seem to suggest). Instead, they are true examples of metadiversity - systems or quasi-systems that encode not just methods of approaching social relations but of the history of the self, the constitution of identity and otherness. 
Thus recent evolutionary theory has begun to point, for example, to social structuring processes as linguistically generative, perhaps more so than the putative features of Universal Grammar - see, e.g., Dunbar, Gossip, and Goody, Social Intelligence and Interaction.

With respect to our linguistic environment, even a dispassionate and so-called scientific perspective, no less a cultural materialist one, suggests that what is most vital to us is our multiplicity of structural alternatives, the heterogeneity of social interpretations whose variance itself is part of what allows society to be flexible, accommodative, meaningful. This is exactly what is suggested in Abram, Spell of the Sensuous, and Maffi, Biocultural Diversity - quite literally that linguistic diversity constitutes a critical feature of the natural environment and even that the environment requires linguistic diversity to sustain biodiversity. We see again and again the record of apparently significant cultural histories characterized as myth, while one central set of metaphors derived from the success of the physical sciences continues to dominate investigation of not just the body but of human culture itself, which is to say language. See Lakoff and Johnson, Metaphors We Live By, and Philosophy in the Flesh.

5. Futures

Perhaps the promise of the Internet lies in the marks within it, even today, of mechanisms leading toward the creation and revitalization of alternate and variable kinds of languages and language-like formations, to some degree beyond and outside of information and communication. A critical part of such formations is the raw assembling of communicative groups - newsgroups, chat rooms, website-based communities, and other devices wherein electronic communication is fundamentally multithreaded. Previous innovations in communication have generally been structured on broadcast (one-to-many) communications, such as print publishing, television, and radio, where a single, generally powerful entity is able essentially to create many copies of its own communications and then to distribute these widely among a population literate in the given medium. Another set of communicative technologies enables one-to-one interactions (the chief examples being letter writing, the telegraph, and telephony). The Internet does encourage various and to some extent innovative kinds of both one-to-one and broadcast communications. Even more than these, however, its promise seems to reside in its ability to facilitate many-to-many communicative formations - something like the myriad forms of small-group and peer communication characteristic of human social groups.

In both the one-to-one and many-to-many registers we find true arenas for linguistic innovation. One reason there has been such proliferation of language in our world (prior to the work of standardized languages like English) is that both intimate and social communication, when unconstrained by institutional pressures that are especially characteristic of broadcast communicative praxes, provide especially fertile ground for experimentation and performative adoption of linguistic and cultural strategies. This seems to me in line, to at least some degree, with the approach toward identity and cultural politics found, for example, in Butler, “Performative Acts” and Gender Trouble, and Spivak, “Acting Bits/Identity Talk” and Critique of Postcolonial Reason. Outside modern institutionalized standards, language is often perceived less as a set of static elements and rules to be applied according to pre-existing constraints, and more as a cognitive medium for live innovation, deconstruction, creation, interaction. See Golumbia, “History of ‘Language’,” and Harris, Language Machine. One reason for the proliferation of languages the world over may be that linguistic diversity correlates somewhat directly with a kind of local adaptiveness - providing both for certain kinds of local cultural homogeneity but also for a great deal of areal cultural diversity. See Abram, Spell of the Sensuous, and Maffi, Biocultural Diversity. On local cultural homogeneity, see Sapir, Language. On areal diffusion and its influence on linguistic history see Dixon, Rise and Fall of Languages. Derrida, Monolingualism of the Other, offers some provocative reflections on the consequences of monolinguality.

There exists a relatively clear historical line from the monolingual policies and technologies that have been advocated especially by the West to the current relative monolinguality of the Web. On the earlier parts of this history see, for example, Ong, Interfaces, and Orality and Literacy, and, in another register, Anderson, Imagined Communities. On the consequences of the abrupt imposition of such technologies on human societies more generally, see Mander, Absence of the Sacred. At the same time many of the phenomena decried by critics of the Web - the bad spelling caused by typing emails quickly, poor editing of “fan”-created Web pages, apparently vague “emoticons” - demonstrate the power of noncanonical language to rise above the constraints on which standardization insists, usually for the purposes of social interaction, often far above or beyond meaning per se. In addition to the social approach suggested in Dunbar, Gossip, also see the work of more recent language ideology theorists such as Kroskrity, Regimes of Language, and Schieffelin, Woolard, and Kroskrity, Language Ideologies. So does the Web’s ability to draw into interaction communities from many different language groups, including groups whose languages have not been part of the standardization process but who nevertheless wish to use the network to speak in other registers. See Crystal, Language and the Internet. To some extent, then, what seems on the surface least political about the Web may be what is most important: providing raw bandwidth to those whose voices and languages have been pushed away by standardization. (However, the relative difficulty of sustaining broadcast media technologies in nonstandard languages such as low-power radio and television stations lends some caution to this view.)

This is not exactly to argue that we should resist technological innovation altogether (though see Mander, Absence of the Sacred and Abram, Spell of the Sensuous for surprisingly compelling statements in this direction). It is to say that, in the realm of linguistic technology, it may well be the case that the stuff of spoken language itself provides a kind of bare technological matter that can help us to restructure social life in significant ways. A more effective Internet may need to be not merely written, but verbal and visual; it may need to accommodate better the full range of human sight, sound and gesture, to allow us to push beyond the linguistic constraints print and standardization have unwittingly placed on us. It may also be interesting to see if it is possible to encourage the development of new, non-roman-script linguistic representations (such as emoticons) which lack strongly standardized underpinnings. If, in fact, some kind of change in language technology is needed to create a more flexible and diverse society (as the IT revolution seems to suggest on its face), we might look just as fruitfully to the innovations produced over tens of generations by thoughtful speakers of human languages, as we do to the more short-term innovations produced in the name of the general reduction of social language to informatic technologies.

Works Cited

Abram, David. The Spell of the Sensuous: Perception and Language in a More-than-Human World. New York: Pantheon Books, 1996.

Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Revised and Expanded Edition, London: Verso, 1991.

Birbeck, Mark, Jon Duckett, Oli Gauti Gudmundsson, et al. Professional XML. Chicago: Wrox Press, 2001.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 1999.

Brown, Stuart M. Bioinformatics: A Biologist’s Guide to Biocomputing and the Internet. Natick, MA: Eaton, 2000.

Butler, Judith. “Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory.” Theatre Journal 40:4 (December 1988). 519-531.

—. Gender Trouble: Feminism and the Subversion of Identity. New York and London: Routledge, 1990.

Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge, MA and London: The MIT Press, 1965.

—. The Minimalist Program. Cambridge, MA and London: The MIT Press, 1995.

Crystal, David. Language and the Internet. New York: Cambridge University Press, 2001.

—. Language Death. New York: Cambridge University Press, 2000.

De Landa, Manuel. A Thousand Years of Nonlinear History. New York: Swerve Editions/Zone Books, 1997.

—. War in the Age of Intelligent Machines. New York: Swerve Editions/Zone Books/MIT Press, 1991.

Derrida, Jacques. Monolingualism of the Other; or, The Prosthesis of Origin. Trans. Patrick Mensah. Stanford, CA: Stanford University Press, 1998.

DeRose, Steven J. “Structured Information: Navigation, Access, and Control.” Paper presented at the Berkeley Finding Aid Conference, Berkeley, CA, April 4-6, 1995. http://sunsite.berkeley.edu/FindingAids/EAD/derose.html.

Dixon, R. M. W. The Rise and Fall of Languages. Cambridge and New York: Cambridge University Press, 1997.

Dunbar, Robin I.M. Grooming, Gossip, and the Evolution of Language. Cambridge, MA: Harvard University Press, 1996.

Eckstein, Robert. XML Pocket Reference. Sebastopol, CA: O’Reilly, 1999.

Fitzgerald, Michael. Building B2B Applications with XML: A Resource Guide. New York: John Wiley & Sons, 2001.

Golumbia, David. “The Computational Object: A Poststructuralist Approach.” Computers and the Humanities (under review).

—. “Hypercapital.” Postmodern Culture 7:1 (September 1996). http://www.mindspring.com/~dgolumbi/docs/hycap/hypercapital.html.

—. “Toward a History of ‘Language’: Ong and Derrida.” Oxford Literary Review 21 (1999). 73-90.

Goody, Esther N., ed. Social Intelligence and Interaction: Expressions and Implications of the Social Bias in Human Intelligence. Cambridge: Cambridge University Press, 1995.

Grenoble, Lenore A., and Lindsay J. Whaley, eds. Endangered Languages: Current Issues and Future Prospects. Cambridge and New York: Cambridge University Press, 1998.

Grimes, Barbara F., ed. Ethnologue. 14th Edition. CD-ROM. Dallas, TX: SIL International, 2000.

Harris, Randy Allen. The Linguistics Wars. New York and Oxford: Oxford University Press, 1993.

Harris, Roy. The Language Machine. Ithaca, NY: Cornell University Press, 1987.

Holland, John H. Adaptation in Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: The MIT Press, 1992.

Huck, Geoffrey J., and John A. Goldsmith. Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates. London and New York: Routledge, 1995.

Kroskrity, Paul V., ed. Regimes of Language: Ideologies, Polities, and Identities. Santa Fe, NM: School of American Research Press, 2000.

Lakoff, George. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago and London: University of Chicago Press, 1987.

— and Johnson, Mark. Metaphors We Live By. Chicago and London: University of Chicago Press, 1980.

— and —. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books, 1999.

Landow, George P. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore, MD: Johns Hopkins University Press, 1992.

—, ed. Hyper/Text/Theory. Baltimore and London: Johns Hopkins University Press, 1994.

Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.

Lunenfeld, Peter, ed. The Digital Dialectic: New Essays on New Media. Cambridge, MA: The MIT Press, 1999.

Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Trans. Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press, 1984.

Maffi, Luisa, ed. On Biocultural Diversity: Linking Language, Knowledge and the Environment. Washington, DC: Smithsonian Institute Press, 2001.

Mander, Jerry. In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations. San Francisco, CA: Sierra Club Books, 1992.

Mann, William. “What Is Communication? A Summary.” Posting to FUNKNET list (February 17, 2001). Archived at http://listserv.linguistlist.org/cgi-bin/wa?A2=ind0102&L=funknet&P=R391.

National Federation of Abstracting and Information Services (NFAIS). Proceedings of the Symposium on Metadiversity, 1998. Philadelphia, PA: NFAIS, 1998.

Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and Culture. Ithaca, NY: Cornell University Press, 1977.

—. Orality and Literacy: The Technologizing of the Word. London and New York: Routledge, 1988.

Poster, Mark. The Mode of Information: Poststructuralism and Social Context. Chicago: University of Chicago Press, 1990.

Reddy, Michael J. “The Conduit Metaphor: A Case of Frame Conflict in Our Language about Language.” In Andrew Ortony, ed., Metaphor and Thought. Cambridge: Cambridge University Press, 1979. 284-324.

Robins, Kevin, and Frank Webster. Times of the Technoculture: From the Information Society to the Virtual Life. London and New York: Routledge, 1999.

Sapir, Edward. Language: An Introduction to the Study of Speech. London: Granada, 1921 (Reprinted, 1978).

Schieffelin, Bambi B., Kathryn A. Woolard, and Paul V. Kroskrity, eds. Language Ideologies: Practice and Theory. Oxford: Oxford University Press, 1998.

Skutnabb-Kangas, Tove. Linguistic Genocide in Education – or Worldwide Diversity and Human Rights? Mahwah, NJ and London: Lawrence Erlbaum Associates, 2000.

Spivak, Gayatri Chakravorty. “Acting Bits/Identity Talk.” Critical Inquiry 18:4 (Summer 1992). 770-803.

—. A Critique of Postcolonial Reason: Toward a History of the Vanishing Present. Cambridge, MA: Harvard University Press, 1999.

Thacker, Eugene. “Bioinformatics: Materiality and Data between Information Theory and Genetic Research.” CTheory Article 63 (October 28, 1998).

Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.

Vose, Michael D. The Simple Genetic Algorithm: Foundations and Theory. Cambridge, MA: The MIT Press, 1999.


Definition of Telematics

The History of Telematics
In Indonesian the field is known as telematika. The word comes from the French term télématique, which refers to the meeting of communication network systems with information technology. The term denotes the essential character of cyberspace as an electronic system born of the development and convergence of telecommunications, media, and informatics.
Telematika is thus an adoption from a foreign language. The term also refers to the ongoing convergence of telecommunications, media, and informatics technologies, which originally developed separately; this convergence came to be understood as a digital electronic system, or “the net.” According to Indonesia’s Framework Policy for the Development and Utilization of Telematics (Kerangka Kebijakan Pengembangan dan Pendayagunaan Telematika di Indonesia), “telematics technology” abbreviates telecommunications, media, and informatics technology. In line with the government’s usage, telematika is read as an abbreviation of:
“tele” = telecommunications
“ma” = multimedia
“tika” = informatics
According to usage within the Indonesian telematics society (MASTEL), the term means the blending (convergence) of information technology (computer technology) and telecommunications technology, including radio and television broadcasting and multimedia. As it has developed, telematics exploits the speed and range of electromagnetic transmission, so that large quantities of information can be transmitted, as needed, across the whole world and even into space, virtually instantaneously. Electromagnetic transmission travels at (nearly) 300,000 km per second, so a message arrives essentially as soon as it is sent, allowing people to converse directly - interactive communication.
Berdasarkan pendapat-pendapat tersebut, maka dapat disarikan pemahaman tentang telematika sebagai berikut:
1. Telematika adalah sarana komunikasi jarak jauh melalui media elektromagnetik.
2. Kemampuannya adalah mentransmisikan sejumlah besar informasi dalam sekejap, dengan jangkauan seluruh dunia, dan dalam berbagai cara, yaitu dengan perantaan suara (telepon, musik), huruf, gambar dan data atau kombinasi-kombinasinya. Teknologi digital memungkinkan hal tersebut terjadi.
3. Jasa telematika ada yang diselenggarakan untuk umum (online, internet), dan ada pula untuk keperluan kelompok tertentu atau dinas khusus (intranet).
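The "instantaneous" worldwide reach mentioned above, with electromagnetic transmission at roughly 300,000 km/s, is easy to quantify: even antipodal distances cost only tens of milliseconds of one-way propagation delay. A minimal sketch (the distances below are illustrative assumptions, and routing/switching delays are ignored):

```python
# One-way propagation delay at the (approximate) speed quoted in the text.
C_KM_PER_S = 300_000  # km/s, roughly the speed of electromagnetic waves

def one_way_latency_ms(distance_km: float) -> float:
    """Milliseconds for a signal to cover distance_km, ignoring equipment delay."""
    return distance_km / C_KM_PER_S * 1000

for name, km in [("across a city", 30),
                 ("across a continent", 4_000),
                 ("halfway around the Earth", 20_000)]:
    print(f"{name:26s} ~{one_way_latency_ms(km):7.2f} ms")
```

Even the halfway-around-the-Earth case is about 67 ms, which is why telematics feels instantaneous at human scale.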
The term telematics was first used in 1978 by Simon Nora and Alain Minc in their book L'informatisation de la Société. Derived from the French "télématique", it is a blend of two words: telecommunications and informatics.
Telecommunication itself means the technique of sending messages from one place to another, usually in both directions. "Telecommunication" covers all forms of long-distance communication, including radio, telegraph/telex, television, telephone, fax, and data communication over computer networks. Informatics, in turn, covers the structure, properties, and interaction of the systems used to collect data, process and store the results, and present them as information.
So the term telematics itself refers more to the industry concerned with the use of computers in telecommunications systems. Telematics covers dial-up Internet access as well as every kind of network that relies on a telecommunications system to carry data. The Internet itself is one example of telematics.
In general, the term telematics is also applied to Global Positioning System (GPS) technology as an integral part of computers and mobile communication technology.
More specifically, the term telematics is used for the field of road vehicles and traffic (road-vehicle and vehicle telematics).
"Frame" in information technology has several meanings, depending on the context of use, including the following:
a. Publication graphics
In publication graphics, a frame is a box area for inserting images and text. Its purpose is to make it easy to place objects in particular parts of a layout.
b. Websites
On a website, a frame can be understood as the part of a page whose content does not change, while the content portion changes as visitors navigate. To put it plainly, a web page is divided into several parts: a fixed part and a content part. The fixed part of the page, which does not change as visitors browse, is the frame. Frames are used to simplify the placement of content on a site.
c. Video / animation
In film and animation, a frame is the basic unit of the sequence that makes up a film or an animation. The frame is also commonly used as a unit of measurement: in animation, for example, timing is counted in frames per second (the number of frames shown each second).
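The frames-per-second counting described above is simple arithmetic: frame count, frame rate, and clip duration determine one another via frames = fps × seconds. A minimal sketch:

```python
# Frame arithmetic: frames = fps * seconds, so any two of the three
# quantities determine the third.
def frame_count(fps: float, seconds: float) -> int:
    """Number of frames in a clip of the given duration."""
    return round(fps * seconds)

def duration_seconds(frames: int, fps: float) -> float:
    """Playback duration of a given number of frames."""
    return frames / fps

print(frame_count(24, 10))         # a 10-second film clip at 24 fps -> 240 frames
print(duration_seconds(1500, 25))  # 1500 frames at 25 fps -> 60.0 seconds
```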
The primary tool for exploring the data, information, and knowledge available on the Internet is the search engine. A search engine is a program, accessed over the Internet, that helps computer users find the things they want to know.

          Sales Agent, Ho.Re.Ca. sector - Digitmode Srl - Milano, Lombardia        
*DIGIT* *MODE*, a company expanding strongly in IT consulting and in the delivery of major high-tech projects
From Indeed - Fri, 16 Jun 2017 13:03:45 GMT - View all jobs in Milano, Lombardia
          AGENTS WANTED - Al.Ma.com S.r.l. - Roma, Lazio        
Almacom, an Italian IT company based in the province of Brescia, maker and distributor of Almabox, an interactive game for the play areas of
From Indeed - Wed, 03 May 2017 11:52:50 GMT - View all jobs in Roma, Lazio
          GATE Exam Best Coaching Centres In Chennai        
Some of the best coaching institutes for the GATE examination, based on student feedback.
Note: this feedback is as of Feb 2016. Visit or enquire with at least 3 or 4 coaching centres, interact with the current students, and decide where to join.

1. The GATE academy
Address: No 1/16, second floor, pinjala subramanium, near KPN travels India Private Limited, Chennai
Contact No.: 044 4212 7775


2. GATE zone
Coaching fees are also reasonable.
Address: 11-A, old rosery school campus , no. 1st St. addambakkam, shanti nagar, Chennai
Contact no.: 0980 40 587794

3. GATEFORUM
Address: No.1/1 first floor, emelem complex, mahalingapuram,main road near aiyppa temple, chennai
Contact no.: 044-42144432

4. T.I.M.E
Address: Durgabai ,deshmukh road, Chennai, Tamil Nadu

5. ACE engineering academy
Address: 1st floor, durai swami road near T. nagar, Chennai
Contact no.: 09343799966

6. Vani Institute
Address: Office no. 19/9 , near canara bank, burkit road, T. nagar, Chennai
Contact No.: 044-24345294

7. IES GATE Academy
Address: New no. 50, , mahalakshmi street, beside City Union Bank ,opposite T nagar bus stand, Chennai
Contact no.: 91- 9445037000


9. G.B Education
Address: no. 336/6, 2nd floor , adjacent to santosh super market. 11 main road , anna nagar, Chennai
Contact no.: 9841114641

10. Visu Academy
Address: No. 6, old no. 30, prasanth apartments, 1st main road , gandhi road , ,Chennai
Contact no.: +91-044-4208 2207


01. Solved Question Paper: GATE-CSE-2016-SET2 - by ACE
02. Solved Question Paper: GATE-CSE-2016-SET1 - by ACE
03. Solved Question Paper: GATE-CSE-2014-SET1 - by GATE-ONLINE
04. Solved Question Paper: GATE-CSE-2013-SET1 - by GATE-ONLINE
05. Solved Question Paper: GATE-CSE-2012-SET1 - by GATE-ONLINE
06. Solved Question Paper: GATE-CSE-2015-SET1 - by Arihant


HAL Recruits through GATE Last Date : 7 Feb 2017
Hindustan Aeronautics Limited invites applications for 125 posts of Management Trainee and Design Trainee. Apply online before 07 February 2017.
Job Details : 
Post Name : Management Trainee
No of Vacancy : 75 Posts
Pay Scale : Rs. 16400-40500/-
=====================
Post Name : Design Trainee
No of Vacancy : 50 Posts
Pay Scale : Rs.16400-40500/-
=================
Eligibility Criteria for HAL Recruitment : 
Educational Qualification : BE/B.Tech in a relevant engineering discipline.

Nationality : Indian
Age Limit : 28 years (as on 07.02.2016)
Job Location : Bangalore (Karnataka)
Selection Process : Selection will be made through GATE-2017 scores and a personal interview.
How to Apply HAL Vacancy : Interested candidates may apply online through the website http://www.hal-india.com from 06.01.2017 to 07.02.2017.
Important Dates to Remember : 
Starting Date for Submission of Online Application : 06.01.2017
Last Date for Submission of Online Application : 07.02.2017
Important Links : 
Detailed  Original  Advt of HAL is here: Link : 




   GATE 2017: Important Dates 

GATE 2017 Online Examination Dates: February 4-5 and February 11-12, 2017 (Saturdays and Sundays only)
GATE Online Application Processing System (GOAPS) website opens for enrollment, application filling, and application submission: September 1, 2016 (Thursday)
Last date for submission of the online application through the website: October 4, 2016 (Tuesday)
Last date to request a change in the choice of examination city via GOAPS login: November 16, 2016 (Wednesday)
Admit card available for printing on the online application interface: January 5, 2017 (Thursday)
Announcement of results on the online application website: March 27, 2017 (Monday)




GATE is conducted in the following disciplines.
You have to write the exam in any one of the papers.
(But please note that, for example, if you apply for the BSNL JTO post, you are expected to have a score in the CS, EE, EC, or IN papers only.)

AE: Aerospace Engineering
AG: Agricultural Engineering
AR: Architecture and Planning
BT: Biotechnology
CE: Civil Engineering (CE01, CE02)
CH: Chemical Engineering
CS: Computer Science and Information Technology (CS01, CS02)
CY: Chemistry
EC: Electronics and Communication Engg. (EC01, EC02, EC03)
EE: Electrical Engineering (EE01, EE02)
EY: Ecology and Evolution
GG: Geology and Geophysics
IN: Instrumentation Engineering
MA: Mathematics
ME: Mechanical Engineering (ME01, ME02, ME03)
MN: Mining Engineering
MT: Metallurgical Engineering
PE: Petroleum Engineering
PH: Physics
PI: Production and Industrial Engineering
TF: Textile Engineering and Fibre Science
XE: Engineering Sciences (XE-A, XE-B, XE-C, XE-D, XE-E, XE-F, XE-G)
XL: Life Sciences (XL-H, XL-I, XL-J, XL-K, XL-L, XL-M)

List of Public Sector Undertakings who select through GATE:

1. IRCON: Civil/ Electrical
2. DMRC: Electrical/ Electronics
3. BSPCL: Electrical/ Electrical & Electronics/ Mechanical/ Production/ Industrial Engineering/ Production & Industrial Engineering/ Thermal/ Mechanical & Automation/ Power Engineering/ Civil Engineering/ Electronics & Instrumentation/ Instrumentation & Control Electronics/ Electronics & Tele-communication/ Electronics & Power/ Power Electronics/ Electronics & Communication/ CSE/ IT
4. DRDO: Electronics & Communication Engg/ Electronics Engg/ Electronics & Computer Engg/ Electronics & Control Engg/ Electronics & Communication/ System Engg/ Electronics & Instrumentation Engg/ Electronics & Tele-Communication Engg/ Electronics & Telematics Engg/ Industrial Electronics Engg/ Tele Communication Engg/ Telecommunication & Information Tech/ Applied Electronics & Instrumentation Engg/ Electronics & Electrical Communication Engg/ Mechanical Engg/ Mechatronics Engg/ Mechanical & Automation Engg/ Mechanical & Production Engg/ Computer Science & Engineering/ Computer Science App Engg. Tech/ Computer Science/ Engg. & InfoTech/ Computer Science & System Engg/ Computer Software Engg/ Computer Science & Automation Engg/ Computer Technology/ Computer Technology & Informatics Engg/ Computer Technology & Information Engg/ Information Science/ Technology Engg/ Information Technology Engg/ Mathematics/ Applied Mathematics/ Electrical Engg/ Electrical Power System Engg/ Electrical & Electronics Engg/ Electrical & Renewable Energy Engg/ Power Engg/ Power Electronics Engg/ Electronics & Electrical Communication Engg/ Electrical with Communication Engg/ Physics/ Applied Physics/ Physics (Electronics)/ Solid State Physics/ Aeronautical Engg/ Aerospace Engg/ Avionics Engg/ Chemical Engg/ Chemical Tech/ Chemical Plant Engg/ App. Chemical and Polymer Tech/ Polymer Science & Chemical Technology/ Chemistry/ Organic Chemistry/ Inorganic Chemistry/ Analytical Chemistry/ Physical Chemistry/ Civil Engg/ Civil & Structural Engg/ Metallurgy & Material Tech/ Metallurgy/ Metallurgical Engg/ Materials Engg
5. RITES: Civil Engineering and Mechanical Engineering
6. ONGC Ltd.: Mechanical/ Petroleum/ Civil/ Electrical/ Electronics/ Telecom/ E&T/ PG in Physics with Electronics/ Instrumentation/ Chemical/ Applied Petroleum/ PG in Geo-Physics/ PG in Mathematics/ PG in Petroleum Technology/ PG in Chemistry/ PG in Geology/ PG in Petroleum Geo-Science/ PG in Geological Technology/ Auto Engineering/ Computer Engineering/ Information Technology/ MCA/ "B" Level Diploma as per Dept of Electronics, GOI
7. NFL: Chemical/ Mechanical/ Electrical/ Instrumentation
8. Cabinet Secretariat, Govt. of India: Computer Science/ Electronics and Communications or Electronics/ Physics or Chemistry/ Telecommunication or Electronics
9. IPR: Physics/ Applied Physics/ Engineering Physics/ Mechanical Engineering/ Electrical Engineering/ Electrical & Electronics Engineering
10. PSPCL: Electrical Engineering, Electrical & Electronics Engineering
11. PSTCL: Electrical/ Electrical & Electronics/ Mechanical/ Electronics & Communication/ Instrumentation & Control/ Civil/ Computer Science/ IT
12. MDL: Mechanical/ Mechanical & Industrial Engineering/ Mechanical & Production Engineering/ Production Engineering/ Production Engineering & Management/ Production & Industrial Engineering/ Electrical/ Electrical & Electronics/ Electrical & Instrumentation
13. BPCL Limited: Mechanical Engineering/ Chemical Engineering/ Chemical Technology/ Instrumentation Engineering/ Instrumentation and Control Engineering/ Instrumentation and Electronics Engineering/ Electronics and Instrumentation Engineering
14. GAIL: Chemical/ Petrochemical/ Chemical Technology/ Petrochemical Technology/ Mechanical/ Production/ Production & Industrial/ Manufacturing/ Mechanical & Automobile/ Electrical/ Electrical & Electronics/ Instrumentation/ Instrumentation & Control/ Electronics & Instrumentation/ Electrical & Instrumentation/ Electronics/ Computer Science/ Information Technology/ Electronics & Communication/ Electronics & Telecommunication/ Telecommunication/ Electrical & Telecommunication
15. NLC Ltd.: Mechanical/ Electrical & Electronics Engineering/ Electrical/ Electronics & Communication Engineering/ Civil/ Instrumentation/ Electronics & Instrumentation/ Instrumentation & Control/ Computer Science Engineering/ Computer Engineering/ Information Technology/ Mining
16. CEL: Electronics & Communication Engineering/ Mechanical Engineering/ Electrical Engineering/ Material Science/ Ceramics
17. Indian Oil: Chemical, Civil, Computer Science & IT, Electrical, Electronics & Communications, Instrumentation, Mechanical, Metallurgy, Petroleum, Polymer
18. HPCL: Mechanical/ Mechanical & Production/ Civil/ Electrical/ Electrical & Electronics/ Electronics/ Electronics & Communication/ Electronics & Telecommunication/ Applied Electronics/ Instrumentation/ Instrumentation & Control/ Electronics & Instrumentation/ Instrumentation & Electronics/ Instrumentation & Process Control/ Chemical/ Petrochemical/ Petroleum Refining & Petrochemical/ Petroleum Refining
19. NBCC Ltd.: Civil Engineering
20. NHPC: Electrical/ Electrical & Electronics/ Power Systems & High Voltage/ Power Engineering/ Civil Engineering/ Mechanical/ Production/ Thermal/ Mechanical & Automation Engineering/ Geology/ Applied Geology
21. BHEL: Industrial and Production Engineering/ Industrial Engineering/ Mechanical Production and Tool Engineering/ Production Technology Manufacturing Engineering (NIFFT Ranchi)/ Mechatronics/ Manufacturing Process and Automation/ Power Plant Engineering/ Production Engineering/ Production and Industrial Engineering/ Thermal Engineering/ Manufacturing Technology/ Power Engineering/ Electrical & Electronics/ Electrical, Instrumentation & Control/ High Voltage Engg./ Power Systems & High Voltage Engg/ Electrical Machine/ Electronics & Power/ Power Electronics/ Energy Engineering/ Electronics & Telecommunication/ Instrumentation/ Electronics & Instrumentation/ Electronics Design & Technology/ Electronics & Biomedical/ Applied Electronics/ Electronics & Communication/ Industrial Electronics/ Electronics & Control/ Control & Instrumentation/ Metallurgical Engineering/ Metallurgical & Materials Engineering/ Materials Engineering/ Extractive Metallurgy/ Foundry Technology/ Process Metallurgy
(*Special recruitment for PwD candidates (ME & EE) through GATE 2016: applications from October 26 to November 20, 2015)
22. NTPC Limited: Electrical/ Electrical & Electronics/ Electrical, Instrumentation & Control/ Power Systems & High Voltage/ Power Electronics/ Power Engineering/ Mechanical/ Production/ Industrial Engg./ Production & Industrial Engg./ Thermal/ Mechanical & Automation/ Civil/ Construction/ Electronics & Instrumentation/ Instrumentation & Control/ Electronics/ Electronics & Telecommunication/ Electronics & Power/ Electronics & Communication/ Computer Science Engg./ Information Technology
23. WBSEDCL: Electrical/ Power Engineering/ Mechanical
24. Oil India: Mechanical Engineering/ PG in Geo Physics/ Geology
25. Power Grid: Electrical/ Electrical (Power)/ Electrical and Electronics/ Power Systems Engineering/ Power Engineering (Electrical)/ Electronics/ Electronics and Communication/ Electronics and Telecommunication/ Civil Engineering/ Computer Science/ Computer Engg./ Information Technology
26. MECL: Mechanical Engineering/ Petroleum Engineering/ Geophysics/ Geology
27. BARC (OCES/DGFS): Mechanical Engineering/ Civil Engineering/ Electrical Engineering/ Electronics & Communication Engineering/ Computer Science/ Metallurgical Engineering/ Chemical Engineering/ Instrumentation Engineering
28. THDC India Ltd: Mechanical, Electrical and Civil Engineering
29. OPGC Ltd: Mechanical, Electrical, Civil, C & I
30. BBNL: Electronics & Communication/ Computer Science/ Information Technology/ Electrical/ Electrical & Electronics
31. IRCON International Ltd: Civil/ Mechanical/ Electrical/ Electronics/ Electronics & Communication Engineering/ Electrical & Electronics Engineering/ Electronics & Instrumentation Engineering
32. GSECL: Electrical, Mechanical, C & I, Metallurgy and Environment Engineering
33. NHAI: Civil Engineering
34. KRIBHCO: Chemical, Mechanical, Electrical, Civil, Computer, Electronics & Communication and Instrumentation Engineering
35. Mumbai Railway Vikas Corporation Ltd (MRVC Ltd): Civil Engineering
36. National Textile Corporation
Philippe just let me know of the following fascinating opportunity (the deadline is August 8th, but it can be extended; in that case you need to get in touch with him directly).

Hi: If you are working on Artificial Intelligence/Machine Learning Applications to Environmental Sciences, we have a terrific conference coming up in Austin Texas, January 7-11, 2018. 
We are organizing the 17th Conference on Artificial and Computational Intelligence and its Applications to the Environmental Sciences as part of the 2018 annual meeting of the American Meteorological Society. 

We have sessions in areas such as weather predictions, extreme weather, energy, climate studies, the coastal environment, health warnings, high performance computing and general artificial intelligence application sessions. 
Two of our sessions, Machine Learning and Statistics in Data Science and Climate Studies, will be headlined by invited talks. 
Several of the sessions are co-organized with other conferences providing opportunities to network with researchers and professionals in other fields. 
We also have a few firsts, including sessions focused on Machine Learning and Climate Studies, AI Applications to the Environment in Private Companies and Public-private Partnerships, and early health warnings. 
To submit your abstract: AI Abstracts Submission 
For more information on our sessions: AI Sessions
More information on the AMS Annual Meeting: Overall AMS Annual Meeting Website
See you in Austin. 
The AMS AI Committee
More information on the AMS AI committee: AMS AI Committee Web Page



Here are the  AI Sessions

  • AI Applications to the Environment in Private Companies and Public-private Partnerships. Topic Description: With the rapid development of AI techniques in meteorological and environmental disciplines, a significant amount of research is occurring in the private sector and in collaborations between companies and academia. This session will focus on AI applications in private companies and public-private partnerships, showcasing new approaches and implementations that leverage AI to help solve complex problems.
  • AI Techniques Applied to Environmental Science
  • AI Techniques for Decision Support
  • AI Techniques for Extreme Weather and Risk Assessment
  • AI Techniques for Numerical Weather Predictions
  • AI and Climate Informatics
  • Joint Session: Applications of Artificial Intelligence in the Coastal Environment (Joint between the 17th Conf on Artificial and Computational Intelligence and its Applications to the Environmental Sciences and the 16th Symposium on the Coastal Environment). Topic Description: Contributions to this session are sought in the application of AI techniques to study coastal problems including coastal hydrodynamics, beach and marsh morphology, applications of remote sensing observations and other large data sets.
  • Joint Session: Artificial Intelligence and High Performance Computing (Joint between the 17th Conf on Artificial and Computational Intelligence and its Applications to the Environmental Sciences and the Fourth Symposium on High Performance Computing for Weather, Water, and Climate)
  • Joint Session: Machine Learning and Climate Studies (Joint between the 17th Conf on Artificial and Computational Intelligence and its Applications to the Environmental Sciences and the 31st Conference on Climate Variability and Change)
  • Joint Session: Machine Learning and Statistics in Data Science (Joint between the 17th Conf on Artificial and Computational Intelligence and its Applications to the Environmental Sciences and the 25th Conference on Probability and Statistics)
  • Statistical Learning in the Environmental Sciences











          Gases        
Material gathered on the web on the topic of gases (Wednesday's and Saturday's classes were postponed because the instructor was unable to travel to the university). Sorry, folks; my mobility problem should be resolved by next week.
Here is the information: https://dshop.diino.net/getafile/26DELZ700BGRD6JR31DE5372H26MCI7/Gases.doc

You can also review the page http://www.juntadeandalucia.es/averroes/recursos_informaticos/andared02/leyes_gases/guia.html
          RFMP and VTRPE        
Going back to the very first time in my life I heard about chemtrails, I came across a collection of short videos produced by Tankerenemy. In one of these,
the narrator explained that there were military computer programs that made it possible to monitor the battlefield in a three-dimensional view. Since the radar that is supposed to build those images allegedly works only over water and not over land, barium would be sprayed into the atmosphere to create a "duct" for the radar signal, so that the ground could be monitored.
Still on the RFMP VTRPE project, the same author published yet another video
in which he more or less repeats the ideas above, but attributes to satellites the ability to capture images of the area, merging them with footage taken by aircraft.
And recently he published an article on Edward Teller in which he returns (very briefly) to the question.
1 – The scoop is not his
For a change, the whole affair is not the product of real independent research (that is, digging through archives and running verification experiments), but a copy from Carnicom's site (the source of quite a few items from the "early days" of the chemtrail hoax).
2 – They are mathematical models
R.F.M.P., Radio Frequency Mission Planner, is a software tool that helps plan how to manage radio frequencies for a mission.
It is so secret that there is a tutorial on the web explaining how to use it.
V.T.R.P.E. (when they don't write it with the letters scrambled...) is instead the mathematical model underlying the RFMP tool. It describes the propagation of radio waves. It stands for Variable Terrain Radio Parabolic Equation, and it is indeed a very accurate mathematical model of radio propagation that accounts for a great many factors.
Equally secret: the reliability study is at http://www.dtic.mil/dtic/tr/fulltext/u2/a323189.pdf.
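For readers curious what a "parabolic equation" propagation model actually computes: solvers of this family march a field envelope down-range one small step at a time, applying a diffraction operator in the vertical-wavenumber domain (the split-step Fourier method). The sketch below is a toy free-space version under simplifying assumptions I am making for illustration (flat terrain, no refractivity profile, a Gaussian source); the real VTRPE adds variable terrain and atmospheric refraction on top of this numerical skeleton.

```python
# Toy split-step Fourier solver for the free-space narrow-angle parabolic
# equation. Illustrative only: no terrain, no refractivity, Gaussian source.
import numpy as np

def split_step_pe(freq_hz=3e9, n_steps=20, dx=50.0):
    c = 3e8
    k0 = 2 * np.pi * freq_hz / c                    # free-space wavenumber
    z = np.linspace(0.0, 200.0, 512)                # height grid (m)
    dz = z[1] - z[0]
    kz = 2 * np.pi * np.fft.fftfreq(z.size, d=dz)   # vertical wavenumbers
    # Gaussian antenna aperture centred at 50 m height
    u = np.exp(-((z - 50.0) ** 2) / (2 * 5.0 ** 2)).astype(complex)
    # One-step free-space propagator in the wavenumber domain
    propagator = np.exp(-1j * kz ** 2 * dx / (2 * k0))
    fields = [u.copy()]
    for _ in range(n_steps):                        # march down-range
        u = np.fft.ifft(propagator * np.fft.fft(u))
        fields.append(u.copy())
    return z, np.abs(np.array(fields))              # |u| at each step/height

z, field = split_step_pe()
print(field.shape)   # (21, 512): field magnitude per range step and height
```

Colouring `field` by magnitude over range and height gives exactly the kind of coverage picture the RFMP tool displays, which is why the tool's 3D views are model output, not spy imagery.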
3 – The terrain information
The terrain information is not read from satellites during the battle, nor supplied by images from support helicopters; it is already part of the software, as confirmed in "MODERN HF MISSION PLANNING COMBINING PROPAGATION MODELING AND REAL-TIME ENVIRONMENT MONITORING" by D. Brant, G.K. Lott, S.E. Paluszek, B.E. Skimmons, which says on p. 333: "The user opens a map window of the operating area. Selections include Digital Chart of the World, World Database III, and other chart products."
In the end, the software tool does indeed show a 3D view, coloured according to signal propagation. It serves precisely to assess how radio communications will perform (as conveyed by the colour computed from the mathematical model), and consequently how the radar will work, for a given terrain and under particular weather conditions.
4 – The "duct"
The notion that a duct must be created in the atmosphere is pure misunderstanding. The duct effect is an anomalous propagation of the radio signal, caused by refraction where the layer of atmosphere being traversed is denser than normal. The two images that are always shown (one without the duct effect, one with it) actually demonstrate that the software can simulate that particular effect as well, which is essential for predicting whether communications will succeed.
Here, at p. 27 of http://www.dtic.mil/dtic/tr/fulltext/u2/a248112.pdf, is a VTRPE example with the duct effect computed.
Here, on the third page of http://www.crh.noaa.gov/Image/ind/APDopplerRadarCuriosities.pdf, is an explanation of the duct effect, with examples.
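The duct effect can in fact be read off an ordinary refractivity profile: a common criterion is that a trapping layer exists wherever the modified refractivity M(h) = N(h) + 0.157·h (h in metres) decreases with height. The sketch below applies that criterion to a made-up N-profile (an illustrative assumption, not measured data):

```python
# Duct detection from a refractivity profile: a layer where the modified
# refractivity M decreases with height traps radio energy (a "duct").
def modified_refractivity(n_units: float, height_m: float) -> float:
    """M = N + 0.157*h, the standard Earth-curvature correction."""
    return n_units + 0.157 * height_m

def duct_layers(heights_m, n_profile):
    """Return (h_low, h_high) layers where M decreases with height."""
    m = [modified_refractivity(n, h) for n, h in zip(n_profile, heights_m)]
    layers = []
    for i in range(1, len(m)):
        if m[i] < m[i - 1]:
            layers.append((heights_m[i - 1], heights_m[i]))
    return layers

heights = [0, 50, 100, 150, 200]
# a sharp N drop between 100 m and 150 m (e.g. a humidity inversion)
n_vals = [330, 325, 320, 300, 296]
print(duct_layers(heights, n_vals))  # -> [(100, 150)]
```

Note that the duct here is produced entirely by an ordinary humidity/temperature structure in the profile; nothing needs to be sprayed into the atmosphere for the model to simulate it.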
5 – Barium?
There is no indication, either in the papers on RFMP or in those on VTRPE, that barium is used to operate them.
I can only imagine that, since a storm or clouds can produce the duct effect, and hence a radar anomaly, someone connected one of the many touted fake (and wrong) patents according to which barium would dry out the atmosphere, so the radar would see better.
Moreover, Tankerenemy's error differs from Carnicom's. According to the former, barium particulate, being electrically conductive (??), would create the channel (a mistaken notion of "duct") for radio transmission. According to Carnicom, barium would stop clouds from raining; staying aloft, they would bend the radio waves, creating a duct effect that would let the signal reach beyond the horizon.
6 – Piling on even more confusion
In the article on Edward Teller, at one point the author states:
"The Air Force project V.R.T.P. and the Navy project R.F.M.P. (V.T.R.P.E.) include the use of metallic particulate composed of aluminium fibres (chaff) ([2025] Weather as a Force Multiplier: Owning the Weather in 2025)."
Except that Owning the Weather in 2025 NEVER mentions VTRPE or RFMP. And the only time it discusses using chaff to modify the weather is to mitigate lightning, which has nothing to do with the two projects in question.
(What is more, Owning the Weather does not even make DIRECT reference to the use of chaff: on p. 19, when it discusses "triggering lightning", it cites the paper Heinz W. Kasemir, "Lightning Suppression by Chaff Seeding and Triggered Lightning," in Wilmot N. Hess, ed., Weather and Climate Modification (New York: John Wiley & Sons, 1974), 623–628.)
7 – The footage taken from Red Flag
To conclude, it is amusing to note that the second video, presented at the beginning of the article, mixes footage of a military exercise that has nothing to do with the RFMP and VTRPE projects with narration and captions added by Tankerenemy.
Moreover, if you listen to the English audio in the background, you clearly hear that this is a large-scale war simulation; that the AWACS (the aircraft with the radar dish on top) provide tactical support but also control the mission (later an instructor criticizes a pilot for a near collision); and at 00:41 you hear the sentence "Each aircraft broadcasts real-time telemetry that appears as three-dimensional imagery onboard the AWACS."
In other words, the position of each aircraft is transmitted by the aircraft themselves to the AWACS support system. Even in the video, neither RFMP nor VTRPE is ever mentioned in connection with battlefield monitoring.
And if you listen to the original English audio to the end (trying to ignore the Italian voice-over added later), you notice that barium, aluminium, chaff, and hygroscopic compounds are never mentioned.
I am still amazed at how much hot air lies at the origin of this whole affair.

          Sign Up for Eclipse Training        

The Eclipse Foundation and Eclipse member companies are pleased to announce the fall 2009 training class series. The training is an excellent opportunity for software developers and architects to learn more about Eclipse Rich Client Platform (RCP), Equinox & OSGi and Modeling technologies. Eclipse experts will lead the sessions, providing practical experience through classroom instruction and hands-on labs. Classes have been scheduled in 23 different cities around the world from October 19 to December 4, 2009.

Students who register at least 3 weeks ahead will receive a 10% discount on the registration price. See the schedule for a complete list of courses and course descriptions.

Eclipse members participating in the training series are AvantSoft (Eclipse University), EclipseSource, Gerhardt Informatics, Industrial TSI, itemis, Obeo, The RCP Company, Soyatec and Weigle Wilczek.

Take advantage of this excellent learning opportunity and register for an Eclipse training class today!


          Spring 2009 Eclipse Training Series Starts April 6        
The Eclipse spring 2009 training class series is starting soon on April 6, 2009. The training is an excellent opportunity for software developers and architects to learn more about Eclipse Rich Client Platform (RCP), Equinox & OSGi and Modeling technologies. Eclipse experts will lead the sessions, providing practical experience through classroom instruction and hands-on labs. Classes have been scheduled in over 30 cities around the world during April and May. See the schedule for a complete list of courses and course descriptions.

Eclipse members participating in the training series are Anyware Technologies, AvantSoft (Eclipse University), EclipseSource, eteration, Gerhardt Informatics, Industrial TSI, itemis, Obeo, The RCP Company, Soyatec, Third Millennium and Weigle Wilczek.

Take advantage of this excellent learning opportunity and register for an Eclipse training class today!
          Register for the Spring 2009 Eclipse Training Series        
The Eclipse Foundation and Eclipse member companies are pleased to announce the spring 2009 training class series. The training is an excellent opportunity for software developers and architects to learn more about Eclipse Rich Client Platform (RCP), Equinox & OSGi and Modeling technologies. Eclipse experts will lead the sessions, providing practical experience through classroom instruction and hands-on labs. Classes have been scheduled in 24 different cities around the world from April 6 to May 29, 2009.

Students who register before March 20, 2009 will receive a 5% discount on the registration price. See the schedule for a complete list of courses and course descriptions.

Eclipse members participating in the training series are Anyware Technologies, AvantSoft (Eclipse University), EclipseSource, eteration, Gerhardt Informatics, Industrial TSI, itemis, Obeo, The RCP Company, Soyatec, Third Millennium and Weigle Wilczek.

Take advantage of this excellent learning opportunity and register for an Eclipse training class today!
          Introduction to Biodiversity Informatics         

Delivered Apr 2, 2013 at the Redpath Museum auditorium, McGill University, for the course Science and Museums (REDM400)
          Need a Computer Technician?        
Is your computer infected with a virus, slow, bogged down, or frozen? The solution: Windows reinstallation, programs, games, virus removal, backup, formatting, installations in general, and repair and maintenance of PCs and notebooks. In São Paulo, with home service available; get in touch at (11) 9 8186 2811 and ask for computer technician Gilberto.


          Uniginga Project: Training Courses        
THE UNIGINGA PROJECT OFFERS A VARIETY OF COURSES

The Instituto Educacional Ginga (IEG), in partnership with Universidade Cândido Mendes / Instituto Prominas, is offering a variety of courses, including continuing-education, professional-training, extension, and graduate courses. This is Uniginga, the university project of the Instituto Educacional Ginga (IEG) of Limeira-SP.

For now, the courses offered through this partnership are in distance-learning (EAD) mode.

IEG also offers some in-person courses and seminars on subjects such as English, writing, Portuguese, mathematics, and small-business administration. We will soon be expanding our in-person educational activities.

Student services, for enrollment and management of the teaching and learning process, will be at the address below.

Rua Octávio Lopes, 579, Centro, Limeira-SP
CEP 13480-021 – Phone: (019) 3011-2120 / 98839-4458
Email: ieglimeira.edu@gmail.com

Here are some of the courses we are offering in distance-learning (EAD) mode, as listed by Ucamprominas (www.ucamprominas.com.br)

I – Training courses
          End of Time The Moment by Dr. Shahid Masood All Episode 2017        

End of Time is a popular educational Islamic TV show by Dr. Shahid Masood, backed by his extensive research. In End of Time: The Moment, Dr. Shahid Masood compares current events with the authentic (sahih) Islamic ahadith. It is a very informative program with lots […]

The post End of Time The Moment by Dr. Shahid Masood All Episode 2017 appeared first on Jhang Tv.


          technology that keeps on killing        

Just as downloads kill musicians and WordPress kills web-page creators, now Twitter is killing journalists (or at least that's what the Olympic committee thinks)

To protect the news business, the Olympic committee wants to restrict what athletes can and cannot tweet or post on their blogs or Facebook pages.

I wonder: have the musicians, the journalists, etc.,
realized that the internet exists?
Have they grasped its reach?

Or will we have to limit the use of technology until some people wake up?

rr.-

          PEOPLE WITH SPECIAL NEEDS        
05/10/2010 17h41

City government to train 994 people with special needs

http://www.prefeitura.sp.gov.br/cidade/secretarias/pessoa_com_deficiencia/noticias/?p=22133

Go to the link above and fill out the registration form.

These are free 80-hour courses in 7 tracks. Registration is open.

By the end of this year, the Municipal Secretariat for Persons with Disabilities and Reduced Mobility and the Secretariat for Economic Development and Labor will train 994 people with disabilities and INSS rehabilitation clients.

80-hour modules (4 hours per day) will be offered, in tracks such as:

• Warehouse Assistant
• Hairdresser's Assistant
• Credit/Collections Assistant
• Personnel Department Assistant
• Recruitment and Selection Assistant
• Computing for the Job Market
• Cashier

Candidates aged 16 and over who live in the capital may take part. Selected students will receive a transportation allowance, course materials, and a snack.

Those interested should fill out the Registration Form and send it to sembarreiras@prefeitura.sp.gov.br.

Classes for the first group run from 25/10 to 24/11; the second group's classes run from 3/11 to 1º/12.

SMPED will invest more than R$ 430,000, supplementing a contract of the Programa Jovens Paulistanos.

Details

Professional Training and Qualification Course

Computing and Recruitment & Selection Assistant
Centro Universitário Sant’Anna - Norte
Rua Voluntários da Pátria, nº. 257
CEP: 02011-000
Santana – São Paulo

LIBRAS (Brazilian Sign Language) Course
Associação dos Funcionários do Banco do Brasil – AFUBESP
Rua Direita, 32
Centro – São Paulo

Personnel Dept. Assistant, Cashier, and LIBRAS
Paróquia Santos Apóstolos
Avenida Itaberaba, n° 3907
Jardim Maracanã – São Paulo/SP
CEP: 02067-060

Personnel Dept. Assistant and Warehouse Assistant
Cursinho da Poli
Avenida Ermano Marchetti, n°. 576
Lapa

Municipal Secretariat for Persons with Disabilities and Reduced Mobility (SMPED)
Press and Communications Office
Tel.: (011) 3113-8741 // 8778 // 8767 // 8793 // 8794 // 8741
Cell: 9951-4983 // 8875-9732
lclopes@prefeitura.sp.gov.br
lincolnsilva@prefeitura.sp.gov.br
          REGISTER YOUR RÉSUMÉ!        
Forwarded by email by my Master Mind friend Alba Franco:

TO SUBMIT YOUR RÉSUMÉ, go to:

http://www8.vagas.com.br/home.asp?t=2816

It's free, well respected in the market, and they do a terrific evaluation of your profile.


JOB OPENINGS - OPPORTUNITY / FORWARDING / PASS IT ON TOO!!!

Maybe we can help someone?


For those seeking a new position or an internship, here is a series of
addresses, emails, and sites that may help. They are organized by
sector; for each company, send your CV to the address listed.


BANKS:
Banco Alfa: currículo@bancoalfa.com.br
Banco Axial: bancoaxial@bancoaxial.com.br
Banco BBM: rh@bbmbank.com.br
Banco BNL: fun.cart@bnl.com.br
Banco Fiat: fiatrh@fiat.com.br
Banco Hexxa: rh@hexxa.com.br
Banco HSBC: rh-recrutamento@hsbc.com.br
Banco Indusval: banco@indusval.com.br
Banco Real: recrutamento@real.com.br
Banco Santander: curriculum@santander.com.br
Banco Sogeral: sgbrasc@uol.com.br
Bectondickinson: recrutamento@bd.com.br
Citi corp: rh.selecao@citicorp.com


EDUCATION:
CNA: curriculo@cna.com.br


HOTELS:
Accor: recrutamento@accor.com.br
Hotel Blue Tree: rh@bluetree.com.br
Hotel Cabreuva: diretoria@hotelcabreuva.com.br
Hotel Transamerica: candidato@transamerica.com.br


INDUSTRY:
7Comm: rh@7comm.com.br
AABB (social club): rh@aabb.esp.br
Algar: talentoshumanos@algar.com.br
Apolo: rh@tubosapolo.com.br
Arteb: selecao@arteb.com.br
Artha: rh@arthabr.com
Azaléia: rh@azaleia.com.br
Basf: recursos.humanos@basf-sa.com.br
Bom Bril: selecao@bombril.com.br
Bosch: recruta.bosch.rbbr@br.bosch.com
Boucinhas: rhboucin@boucinhas.com.br
Brahma: gente@brahma.com.br
Brasilata: brasilata@brasilata.com.br
Caramuru Alimentos: rh@caramuru.com
Cargill: recrutamento_cargill@cargill.com
CCE: rh@cce.com.br
Cimento Itaú: talentos@cimentoitau.com.br
Cerâmica Santana: rh@ceramicasantana.com.br
Dell: Brasil_HR@Dell.com
DOW: recrutamento@dow.com
Embraco: rhembraco@embraco.com.br
Estrela: dpessoal@estrela.ind.br
Ford: selecao@ford.com
Gemini: rh@gemini.com.br
Gerdau: rh-sp@gerdau.com.br
Goodyear: recrutamento.amplant@goodyear.com
Gradiente: rh@gradiente.com.br
Grupo Áurea: cv@grupoaurea.com.br
Intelbras: rh@intelbras.com.br
Itambé: rh@itambe.com.br
Klabin: recrutamento@klabin.com.br
Kolumbus: rh-kb@kolumbus.com.br
Lupo: rh@lupo.com.br
Manah: mercado@manah.com.br
Marcopolo: inovarh@inovarh.com.br
Mococa: rh@mococasa.com.br
Monsanto: talentos.novos@monsanto.com
Moore: selecao@moore.com.br
Mosane: rh@mosane.com.br
Otis: selecao@otis.com
Panamco: bancodecurriculos@panamco.com.br
Panco: selfab@panco.com.br
Perdigão: rhvda@perdigao.com.br
Probel: drh@probel.com.br
Sabóia: selecao@saboia.com.br
Santista Têxtil: selecao@santistatextil.com.br
Scania: curriculo.br@scania.com
Schincariol: rh@schincariol.com.br
Skol: gente@skol.com.br
Sony: sonyrh@ssp.br.sony.com
Sony Music: talentos@sonymusic.com.br
Springer Carrier: rh.springer@carrier.utc.com
Tecidos Elizabeth: selecao@elizabeth.com.br
Tetrapak: recrutamento@tetrapak.com
Vicunha: selecao@elizabeth.com.br
Wickbold: selecao@wickbold.com.br


IT:
Activetch: rh@activetech.com.br
Alcabyt: recrutamento@alcabyt.com.br
Asmi Informática: asmi-rh@uol.com.br
ATPS: rh@atps.com.br
BF: rh@bf.com.br
BHTEC: rh@bhtec.com.br
BMS: rhsp@bms.com.br
Britos: rh@britos.com.br
BRQ: rhsp@brq.com
Buildup: curriculo@buildup.com.br
Choose: rh@choose.com.br
Blakinfo: rhsp@blakinfo.com
CCSNET: rh@ccsnet.com.br
Cebinet: depselecao@cebinet.com.br
CETDR: recrutamento@cetrd.com.br
Chadel: curriculo@chadel.com
Chiptek: selecao_sp@chiptek.com.br
Cidicom: rh@cidicom.com.br
Ciser: recruta@ciser.com.br
Cisco Systems: job-brasil@cisco.com
Compaq: cvbrasil@compaq.com
Compuland: rhsalut@compuland.com.br
Consoft: curriculum@consoft.com.br
Copel: rh@copel.com.br
Copesul: drh@copesul.com.br
Dglnet: curriculos@dglnet.com.br
DGM: cv@dgm.com.br
Dialdata: rh@dialdata.com.br
Dinheiro.net.com.br: rh@dinheironet.com.br
Discover: rh@discover.com.br
DLM Info: rhumanos@dlminfo.com.br
DPR Sist: dprrh@dprsist.com.br
Drive: curriculo@drive.com.br
EA: recrutamento@ea.com
Eclipse: eclipserh@eclipseinformatica.com.br
Elefante.com.br: rhumanos@elefante.com.br
Esys: rh@esys.com.br
Eyi: recursos.humanos@br.eyi.com
Face Virtual: selecao@facevirtual.com.br
Fórum Access: cvitae@forumaccess.com
Gempi: rh@gempi.com.br
GP: rh@gpnet.com.br
GPI: vagas@gpi.com.br
Ibpinet: rh@ibpinet.com.br
Idealyze.com.br: curriculum@idealyze.com.br
IFSBR: rh@ifsbr.com.br
Impsat: impsatrh@impsat.com.br
Indebras: selecao@indebras.com.br
Info Sistemas: recrutamento@infosistemas.com.br
Infoside: selecao@infoside.com.br
Inter Commerce: rh@intercommerce.com.br
Inter File: rh@interfile.com.br
Interamericana: cv@interamericana.com.br
Interplus: rh@intraplus.com.br
Intertech: curriculo@intertech.com.br
ISS: curriculo.rh@iss.com.br
Itautec-philco: rhumanos@itautec-philco.com.br
Itech: rh@itech.inf.br
Linx Brasil: rh@linxbrasil.com.br
Logicworld: rh@logicworld.com.br
Lowe: jobslowe@bol.com.br
Lumina: rh@luminacorp.com
Master: rhmaster@masterental.com.br
Meta inf: rhsp@metainf.com.br
Microsiga: curriculum@microsiga.com.br
Microsoft: rhbrasil@microsoft.com
Microwan: rh@microwan.com.br
MMCafe: rh@mmcafe.com.br
Multisis: rh@multisis.net
NBS: nbsrh@nbs.com.br
Netds: rh@netds.com.br
Netpav: curriculum@netpav.com.br
New Trend: curriculum@newtrend.com.br
Nova América: gdrh@novaamerica.com.br
Novacell: rh@novacell.com.br
Ntsgsa: cv@ntsgsa.com.br
NV: rh@nv.com.br
Ogeda: rh@ogeda.com.br
Olinux: rh@olinux.com.br
Oracle: rhonline@br.oracle.com
Origin: recursoshumanos.rj@br.origin-it.com
Paramount Lansul: paramount@ibm-net.com
PCDI: rh@pcdi.com.br
Par Perfeito: curriculo@parperfeito.com.br
Persoft: rh@persoft.com.br
Plastamp: rh@plastamp.com.br
Pluguse: rh@pluguse.com.br
Pólen: selecao@polen.com.br
Power Plast: cv@powerplast.com.br
PP Ware: rh-rj@ppware.com.br
Prime Way: rh@primeway.com.br
Pró Soluções: selecao@professionalrh.com.br
Prodacon: rh@prodacon.com.br
Programmers: curriculo@programmers.com.br
QA Systems: rh@qasystems.com.br
QG: cv@qg.com.br
Quadrata: selecao@quadrata.com.br
RM Sistemas: rmrh@rm.com.br
RR Etiquetas: drh@rretiquetas.com.br
Rsinet: rh@rsinet.com.br
Scisoft: sci.rh@scisoft.com.br
Seal: talentos@seal.com.br
Setempro: rh@setempro.com.br
Siscorp: rh@siscorp.com.br
SQA: rhsqa@sqa.com.br
Sun: rh@brazil.sun.com
Superbid.com.br: rh@superbid.com.br
Sygnus: curriculo.sygnus@uol.com.br
Synercomm: rh@synercomm.com.br
Terphane: recursos_humanos@terphane.com.br
Topway: rh@topway-software.com.br
Tratem: tratemrh@tratem.com.br
Tribal.com.br: curriculum@tribal.com.br
Trw: recrutamento.limeira@trw.com
Ucar: desenvolve@ucar.com
Unisys: entorj@br.unisys.com
Uol.com.br: rhuol@uol.com.br
Uploadnet: rh@uploadnet.com.br
W21: w21.rh@w21.com.br
Wa: rh@wa.com.br
Walar: rh@walar.com.br
Wcorp: rh@wcorp.com.br
WEG: recrutamento@weg.com.br
Westbr: rh@westbr.com.br
Witcom: cv@witcom.com.br
WL: recrutamento@wl.com
Wyma: rh@wyma.com
Yahoo: br-empregos@yahoo-inc.com


LABORATORIES:
Ache: ache@osite.com.br
Ativus: dp@ativus.com.br
Bayer: bayer-rh.recursoshumanos.br@bayer.com.br
Lilly: recrutamento_estrategico@lilly.com
Organon: recrutar@organon.com.br
Pfizer: talento.recrutamento@pfizer.com
Rhodia: curriculum@rhodia.com.br
Roche: brasil.rh_curriculo@roche.com
Schering: recrutamento.sdb@schering.de
Schering-Plough: curriculum@schplo.com.br
York Brasil: rh@yorkbrasil.com.br


MEDIA:
DM9DDB: trampo@dm9ddb.com.br
Editora Águia: cv@editoraguia.com.br
Folha de São Paulo: fspselecao@uol.com.br
Folha Metro: rh@folhametro.com.br
MTV: rh.mtv@mtvbrasil.com.br
SBT: culo@sbt.com.br


HEALTH:
Amil: rhamil@ifxbrasil.com.br
Cema Hospital: recursoshumanos@cemahospital.com.br
Fleury: selecao@fleury.com.br
Hospital Santa Cruz: selecao@hospitalsantacruz.com.br


HOSPITALS:
Sírio Libanes: selecao@hsl.org.br
Rimed: rimedrh@rimed.com.br
          TRACK I, PERIODS I, II, III        
  • COMPUTER ARCHITECTURE
  • MATHEMATICS
  • INTRODUCTION TO PROGRAMMING
  • INTRODUCTION TO COMPUTING
  • SOCIO-TECHNOLOGICAL PROJECT
  • SOCIOPOLITICS
  • ELECTIVE

          INITIAL TRACK        
INTRODUCTION TO COMPUTING
MATHEMATICS
SOCIO-TECHNOLOGICAL PROJECT
NATIONAL PROJECT AND NEW CITIZENSHIP
LANGUAGE AND COMMUNICATION
DISASTER MANAGEMENT AND PREVENTION
          Sales Agent, Ho.Re.Ca. Sector - Digitmode Srl - Milano, Lombardia        
*DIGIT* *MODE*, a fast-growing company in IT consulting and in the delivery of major high-tech projects
From Indeed - Fri, 16 Jun 2017 13:03:45 GMT - View all job listings in Milano, Lombardia
          Experts are TERRIFIED: Nothing like this has EVER happened before!        
The cyber attack that targeted nearly 100 countries around the world is of "an unprecedented level," acknowledged Europol, the European police agency. "It was unprecedented, and identifying those responsible will require a complex international investigation," the agency stressed; its European Cybercrime Centre (EC3) "is working with the cybercrime units of the affected countries and with major industry partners to mitigate the threat and assist the victims."
          SEEKING AGENTS - Al.Ma.com S.r.l. - Roma, Lazio        
Almacom, an Italian IT company based in the province of Brescia, producer and distributor of Almabox, an interactive game for the play areas of
From Indeed - Wed, 03 May 2017 11:52:50 GMT - View all job listings in Roma, Lazio
          Dell expands its convertible computer lineup with three new Latitude-series machines        

Dell has announced three new 2-in-1 computers in its Latitude series. With this triple launch, the company aims to carve out a share of the growing convertible-laptop market, which has grown 46% year over year worldwide among professional users, according to IDC's 'Worldwide 2017 Q1 Personal Computing Device Tracker' study.


          Lenovo announces the largest data center product portfolio launch in its history        

On Wednesday, Lenovo presented a comprehensive portfolio of data center products that lets customers harness the power of the 'intelligence revolution' and build a solid technological foundation to support their transformation capabilities.


          Donation of computer equipment under the "Microproyectos UGR Solidaria 2016-2017" program        

On behalf of the Federación Andaluza de Asociaciones de ayuda al TDAH (FAHYDA), we want to thank the organizers of the "Microproyectos UGR Solidaria 2016-17" program for recognizing the work we do at the regional level by choosing our organization to receive a donation of computer equipment as part of the first equipment donation campaign […]

The post "Donación material informático programa Microproyectos UGR Solidaria 2016-2017" appeared first on Fahyda.


          Driving Change with mHealth        

This slide deck comprises lectures delivered at Nova Southeastern University Colleges of Medicine (MI) and Pharmacy (PHA) in the following courses: MI 6410 Consumer Health Informatics and Web 2.0 in Healthcare, and PHA 5203 Consumer Health Informatics and Web 2.0 in Healthcare.
          How Informatics Will Change the Future of Pharmacy        

This presentation is part of the Nova Southeastern University 21st Annual Contemporary Pharmacy Issues program.
          KnowItAll U spectra database        

Offers access to the KnowItAll U reference collection of over 1.3 million spectra including structures and chemical property information; includes IR, UV/Vis, NMR, Raman and mass spectra.  ATTENTION KnowItALL Desktop Software Users:  Please update your license at https://kiarasql.informatics.bio-rad.com/KIARA/Forms/KnowItAllU/extend.jsp?code=HM6faMJ0OgJf6Xaf6qcq00aMf60cHOXPHcHfcq6XQJqOHqJXc6OOQcMPOMJEOHqaQEJJqac9cE9EacX0OaJ9aqf6PaOaX6QPJE6Jgf0qEQHJfEcHJP60E0PafMfO9fEaaHJPOMPgqaPHac6MM9qJEMJEaQPqgHgMEHXEMXJXOPJHcaX9M9PMfMOf9f996EH9JEc99OJa9McfXffEXHO9OQMq6EPM0aPXHJaH9fPcfPQQqEEgP9J9qHEgc6P6aQfq


          PiPhone: a smartphone based on the Raspberry Pi        
Among the many tinkerers building projects with the Raspberry Pi, one has put together a sort of rudimentary smartphone he calls the PiPhone. The PiPhone was built from a Raspberry Pi, a 2.8-inch Adafruit PiTFT touchscreen display, and a SIM900 module for connecting to GPRS/GSM networks. This "smartphone" performs the two most basic functions: making voice calls and sending text messages. Its builder says the components cost him about 158 dollars, and of course he recommends just buying a smartphone at
          Open Melting Points on iPhone via MMDS        
As Alex Clark explained on his blog Cheminformatics 2.0, both predicted and experimental melting points from our Open Data collection are now available on iPhones via his MMDS web services protocol.


Although the app is not free, the web service (#7 from our collection) that Andrew Lang and Alex created for this purpose is Open and available for anyone to use. It reads an XML formatted molfile and returns the average measured melting point, predicted melting point, SMILES, CSID and a link to the ChemSpider entry.
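As a rough illustration of the kind of record such a service returns: the post lists the fields (average measured melting point, predicted melting point, SMILES, CSID, link) but not the actual response schema, so every tag name and value below is an assumption, not the real MMDS format.

```python
import xml.etree.ElementTree as ET

# Hypothetical response for ethanol -- the tag names and the CSID/link
# values are invented for illustration; only the field list comes from
# the post above.
sample_response = """
<record>
  <smiles>CCO</smiles>
  <csid>0000</csid>
  <mp_measured_avg>-114.1</mp_measured_avg>
  <mp_predicted>-112.0</mp_predicted>
  <link>http://example.org/chemspider-entry</link>
</record>
"""

def parse_mp_record(xml_text):
    """Extract the melting-point fields from one (assumed) service record."""
    root = ET.fromstring(xml_text)
    return {
        "smiles": root.findtext("smiles"),
        "csid": root.findtext("csid"),
        "mp_measured_avg": float(root.findtext("mp_measured_avg")),
        "mp_predicted": float(root.findtext("mp_predicted")),
        "link": root.findtext("link"),
    }

record = parse_mp_record(sample_response)
print(record["smiles"], record["mp_measured_avg"])  # CCO -114.1
```

A real client would fetch the record over HTTP from the service endpoint before parsing; the endpoint URL is not given in the post, so it is omitted here.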


          More Open Melting Points from EPI and other sources: on the path to ultimate curation        
As recently as 2008, Hughes et al. published a paper asking: Why Are Some Properties More Difficult To Predict than Others? A Study of QSPR of Solubility, Melting Point, and Log P
The question then is: why do QSPR models consistently perform significantly worse with regard to melting point? In the Introduction, we proposed three reasons for the failure of QSPR models: problems with the data, the descriptors, or the modeling methods. We find issues with the data unlikely to be the only source of error in Log S, Tm, and Log P predictions. Although the accuracy of the data provides a fundamental limit on the quality of a QSPR model, we attempted to minimize its influence by selecting consistent, high quality data... With regards to the accuracy of Tm and Log P data, both properties are associated with smaller errors than Log S measurement. Moreover, the melting point model performed the worst, yet it is by far the most straightforward property to measure...We suggest that the failure of existing chemoinformatics descriptors adequately to describe interactions in the crystalline solid phase may be a significant cause of error in melting point prediction.
Indeed, I have often heard that melting point prediction is notoriously difficult. This paper attempted to discover why and suggested that it is more likely that the problem is related to a deficiency in available descriptors rather than data quality. The authors seem to argue that taking a melting point is so straightforward that the resulting dataset is almost self-evidently high quality.

I might have thought the same before we started collecting melting point datasets.

It turns out that validating melting points can be very challenging and we have found enormous errors - even cases where the same compound in the same dataset is assigned very different melting points. Under such conditions it is mathematically impossible to obtain high correlations between predicted and "measured" values.

Since we have no additional information to go on (no spectral proof of purity, reports of heating rate, observations of melting behavior, etc.) the only way we can validate data points is to look for strong convergence from multiple sources. For example, consider the -130 C value for the melting point of ethanol (as discussed previously in detail). It is clearly an outlier from the very closely clustered values near -114 C.


This outlier value is now highlighted in red to indicate that it was explicitly identified to not be used in calculating the average. Andrew Lang has now updated the melting point explorer to allow a convenient way to select or deselect outliers and indicate a reason (service #3). For large separate datasets - such as the Alfa Aesar collection - this can be done right on the melting point explorer interface with a click. For values recorded in the Chemical Information Validation sheet, one has to update the spreadsheet directly.

This is the same strategy that we used for our solubility data - in that case by marking outliers with "DONOTUSE". This way, we never delete data so that anyone can question our decision to exclude data points. Also by not deleting data, meaningful statistical analyses of the quality of currently available chemical information can be performed for a variety of applications.
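The "mark, don't delete" policy described above can be sketched in a few lines. This is only an illustration of the bookkeeping, with made-up numbers loosely echoing the ethanol case (clustered values near -114 C plus a flagged -130 C outlier); it is not the actual melting point explorer code.

```python
# Each measurement keeps its flag; flagged points are skipped when
# averaging but never removed, so the exclusion stays auditable.
ethanol_mp = [
    {"value_c": -114.1, "flag": None},
    {"value_c": -114.5, "flag": None},
    {"value_c": -113.9, "flag": None},
    {"value_c": -130.0, "flag": "DONOTUSE"},  # outlier stays on record
]

def average_unflagged(points):
    """Mean of the measurements not explicitly excluded."""
    kept = [p["value_c"] for p in points if p["flag"] is None]
    return sum(kept) / len(kept)

print(round(average_unflagged(ethanol_mp), 2))  # -114.17
```

Because the outlier is flagged rather than deleted, anyone can later question the exclusion or re-run statistics over the full record.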

The donation of the Alfa Aesar dataset to the public domain was instrumental in allowing us to start systematically validating or excluding data points for practical or modeling applications. We have also just received confirmation that the entire EPI (PhysProp) melting point dataset can be used as Open Data. Many thanks to Antony Williams for coordinating this agreement and for approval and advice from Bob Boethling at the EPA and Bill Meylan at SRC.

In the best case scenario, most of the melting point values will quickly converge as in the ethanol case above. However, we have also observed cases where convergence simply doesn't happen.

Consider the collection of reported melting points for benzylamine.


One has to be careful when determining how many "different" values are in this collection. Identical values are suspicious since they may very well originate from the same ultimate source. Convergence for the ethanol value above is credible because most of the values are very close but not completely identical, suggesting truly independent measurements.

In this case the values actually diverge into clusters at roughly +10 C, -10 C, -30 C, and -45 C. If you want to play the "trusted source" game, which do you trust more: the Sigma-Aldrich value at +10 C or the Alfa Aesar value at -43 C?

Let's try looking at the peer-reviewed literature. A search on SciFinder gives the following ranges:


The lowest melting point listed there is the +10 C value we already have in our collection, but those references point to other databases. The lowest value from a peer-reviewed paper is 37-38 C.

This is strange, because I have a bottle of benzylamine in my lab and it is definitely a liquid. Investigating the individual references reveals a variety of errors. In one, benzylamine is listed as a product, but from the context of the reaction it should be phenylbenzylamine:


(In a strange coincidence, the actual intermediate, benzalaniline, is the imine that Evan Curtain recently synthesized in order to measure its solubility.)

In another example, the melting point of a product is incorrectly associated with the reactant benzylamine:

The erroneous melting points range all the way up to 280 C and I suspect that many of these are for salts of benzylamine, as I reported previously for the strychnine melting point results from SciFinder.

With no other obvious recourse to the literature to resolve this issue, Evan attempted to freeze a sample of benzylamine from our lab (UC-EXP265).


Unfortunately, the benzylamine sample proved to be too impure (<85% by NMR) and didn't solidify even down to -78 C. We'll have to try again with a much purer sample. It would be useful to get reports from a few labs that happen to have benzylamine handy, with proof of purity by NMR and a picture to demonstrate solidification.

As most organic chemists will attest, amines are notorious for appearing as oils below their melting points in the presence of small amounts of impurities. I wonder if the divergence of melting points in this case is due to this effect. By providing NMR data from various samples subjected to freezing, it might be possible to quantify the effect of purity on the apparent freezing point. I think images of the solidification are also important, because some may mistake very high viscosity for actual formation of a solid. At -78 C we observed the sample to exhibit a viscosity similar to that of syrup.

Our model predicts a melting point of about -38 C for benzylamine, so I suspect that the values of -43 C and -46 C are most likely close to the correct range. Let's find out.
          Collaboration using Open Notebook Science in Academia book chapter        
I am very pleased to report that the book chapter that I co-wrote with Andrew Lang, Steve Koch and Cameron Neylon is now available online: Collaboration using Open Notebook Science in Academia. This is the 25th chapter of Collaborative Computational Technologies for Biomedical Research, edited by Sean Ekins, Maggie Hupcey, Antony Williams and Alpheus Bingham.

Our chapter provides some fairly detailed examples of how Open Notebook Science can be used to enhance collaboration between researchers from both similar and distant fields. It also suggests certain paths towards machine/human collaboration in science. Hopefully it will encourage researchers who have an interest in Open Science to experiment with some of the tools and strategies mentioned.

I am also grateful to Wiley for choosing our chapter as the free online sample for the book!
This book discusses the state-of-the-art collaborative and computing techniques for the pharmaceutical industry, the present and future implications, and opportunities to advance healthcare research. The book tackles problems thoroughly, from both the human collaborative and the data and informatics side, and is very relevant to the day-to-day activities of running a laboratory or a collaborative R&D project. It can be applied to help organizations make critical decisions about managing drug discovery and development partnerships. The book follows a “man-methods-machine” format, with sections on how to get people to collaborate, collaborative methods, and computational tools for collaboration. This book offers the reader a “getting started guide” or instruction on “how to collaborate” for new laboratories, new companies, and new partnerships, as well as a user manual for how to troubleshoot existing collaborations.



          Towards the automated discovery of useful solubility applications        
Last week, I came across (via David Bradley) a paper by an MIT group regarding the desalination of water using a very clever application of solubility behavior:
Anurag Bajpayee, Tengfei Luo, Andrew Muto and Gang Chen, "Very low temperature membrane-free desalination by directional solvent extraction", Energy Environ. Sci., 2011 (article, summary)
The technique simply involves heating saltwater with molten decanoic acid to 40-80 C. Some water dissolves into the decanoic acid, leaving the salt behind. The layers are then separated and, upon cooling to 34 C, sufficiently pure water separates out. Any traces of decanoic acid are inconsequential since this compound is already present in many foods at higher levels.

From a technological standpoint, I can't think of a reason why this solution could not have been discovered and implemented 100 years ago. It makes you wonder how many other elegant solutions to real problems could be uncovered by connecting the right pieces together.

To me, this is where the efforts of Open Science and the automation of the scientific process will pay off first. For this to happen on a global level, two key requirements must be met:
1) Information must be freely available, optimally as a web service (measurements if possible - otherwise a predicted value, preferably from an Open Model)
2) There has to be a significantly automated way of identifying what is important enough to be solved.
Since we have been working on fulfilling the first requirement for solubility data, I first looked at our available services to see if there was anything there that could have pointed towards this solution.

Although we have a measured (0.0004 M) and predicted (0.001 M) room temperature solubility of decanoic acid in water, our best prediction service can't predict the reverse: the solubility of water in decanoic acid. For that we would need the Abraham descriptors for decanoic acid as a solvent, and those are not yet available as far as I'm aware.

Also, we use a model to predict solubility at different temperatures - but it assumes that the solute is miscible with the solvent at its melting point. This is probably a reasonable assumption for the most part, but it fails when the solute and the solvent are radically dissimilar (e.g. water/hydrophobic organic compounds). In this particular application, decanoic acid melts at 31 C and the process occurs in the 34-80 C range.
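To make the assumption concrete, here is a minimal sketch of the ideal-solubility relation such a model is built on; the decanoic acid numbers in the example are rough illustrative values, not entries from our dataset:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_mole_fraction_solubility(delta_h_fus, t_melt_k, t_k):
    """Ideal solubility: ln x = (dHfus/R) * (1/Tm - 1/T).
    Built on the assumption discussed above: the solute is taken to be
    miscible with the solvent at its melting point (x = 1 at T = Tm),
    which fails for radically dissimilar pairs like water/hydrophobics."""
    return math.exp((delta_h_fus / R) * (1.0 / t_melt_k - 1.0 / t_k))

# Illustrative numbers only: decanoic acid melts near 31 C (about 304 K);
# the fusion enthalpy (27.5 kJ/mol) is a rough stand-in value.
x = ideal_mole_fraction_solubility(27500.0, 304.2, 298.15)
print(x)  # predicted ideal mole-fraction solubility just below the melting point
```

At T = Tm the equation returns x = 1 by construction, which is exactly the miscibility assumption that breaks down for water in decanoic acid.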

But even if we had the necessary models (and corresponding web services) for the decanoic acid/water/NaCl system, could it have been flagged in an automated way as being potentially "useful" or even "interesting"?

For utility assessment, humans are still the best source. Luckily, they often record this information tagged with common phrases in the introductory paragraphs of scientific documents. (In fact, this is the origin of the UsefulChem project). For example, if we search for "there is a pressing need for" AND solubility in a Google search, most of the results provide reasonable answers to the question of what a useful application of solubility might be. I have summarized the initial results in this sheet.

The first result is:
"there is a pressing need for new materials for efficient CO2 separation" from a Macromolecules article in 2005. The general problem needing solving would correspond to "global warming/CO2 sequestration" and the modeling challenge would be "gas solubility".

Analyzing the first 9 results in this way gives us the following problem types:
  1. global warming/CO2 sequestration
  2. fire control
  3. global warming/refrigeration fluid
  4. AIDS prevention
  5. Iron absorption in developing countries
  6. agriculture/making phosphate from rock bioavailable
  7. water treatment/flocculation
  8. natural gas purification/environmental
  9. waste water treatment
and the following modeling challenges:
  1. gas solubility
  2. polymer solubility
  3. hydrofluoroether solubility
  4. solubility of drug in gels
  5. inorganics
  6. inorganics/pH dependence of solubility
  7. polymer solubility/flocculation/colloidal dispersions
  8. gas solubility
  9. inorganics
These preliminary results are instructive. The problem types are broad and varied - and I think they will be helpful to keep in mind as we continue to work on solubility. The modeling challenges can be compared directly with our existing services - and none of them overlap at this time! All of these involve gases, polymers, gels, salts, inorganics or colloids, while our services are strictly for small, non-ionic organic compounds in liquid solvents.

Part of the reason for our focus on these types of compounds relates to our underlying objective of assessing and synthesizing drug-like compounds. But a more important consideration is what type of information is available and what can be processed with cheminformatics tools. Currently most cheminformatics tools deal only with organic chemicals, with essential sources such as ChemSpider and the CDK providing measurements, models, descriptors, etc.

Even though some inorganic compounds are on ChemSpider, most of the properties are unavailable. Consider the example of sodium chloride:


This doesn't mean that the situation is hopeless but it does make the challenge much more difficult. Solubility measurements and models for inorganic salts do exist (for example see Abdel-Halim et al.) but they are much more fragmented.

With the feedback we obtain from this search phrase approach - and hopefully help from experts in the chemistry community - we can piece together a federated service to provide reasonable estimates for most types of solubility behavior.

I think that this desalination solution will prove to be a good test for automated (or at least semi-automated) scientific discovery in the realm of open solubility information. In order to pass the test, the phrase searching algorithm should eventually identify desalination as a "useful problem to solve" and should connect with the predicted behavior of water/NaCl/decanoic acid (or other similar compound).

Luckily we have Don Pellegrino on board. His expertise on automated scientific discovery should prove quite valuable for this approach.
          Validating Melting Point Data from Alfa Aesar, EPI and MDPI        
I recently reported that Alfa Aesar publicly released their melting point dataset for us to use in taking temperature into account in solubility modeling. Since then, Andrew Lang, Antony Williams and I have had the opportunity to look into the details of this and other open melting point datasets. (See here for links and definitions of each dataset.)

An initial evaluation by Andy found that the Alfa Aesar collection yielded better correlations with selected molecular descriptors compared to the Karthikeyan dataset (originally from MDPI), an open collection of melting points used by several researchers to provide predictive melting point models. This suggested that the quality of the Alfa Aesar dataset might be higher.

Inspection of the Karthikeyan dataset did reveal some anomalies that may account for the poor correlations. First, there were several duplicates - identical compounds with different melting points, sometimes radically different (by up to 176 C). A total of 33 duplicates (66 measurements) were found with a difference in melting points greater than 10 C (see the ONSMP008 dataset). Here are some examples.


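The duplicate screen itself is straightforward to automate. A minimal sketch, using made-up records rather than actual rows from the Karthikeyan collection:

```python
from collections import defaultdict

# Hypothetical (smiles, melting point in C) records - not rows from ONSMP008.
records = [
    ("CCO", -114.0), ("CCO", -130.0),
    ("c1ccccc1", 5.5),
    ("CC(=O)O", 16.6), ("CC(=O)O", 16.2),
]

by_smiles = defaultdict(list)
for smiles, mp in records:
    by_smiles[smiles].append(mp)

# Flag compounds whose duplicate melting points disagree by more than 10 C.
suspect = sorted(s for s, mps in by_smiles.items()
                 if len(mps) > 1 and max(mps) - min(mps) > 10)
print(suspect)  # ['CCO']
```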
A second problem we ran into involved difficulty processing the SMILES in the Karthikeyan collection. Most of these involved SO2 groups. An attempt to view the following SMILES string in ChemSketch ends up with two extra hydrogens on the sulfur:
[S+2]([O-])([O-])(OCC#N)c1ccc(C)cc1
Other SMILES strings render with 5 bonds on a carbon and ChemSketch draws these with a red X on the problematic atom. See for example this SMILES string:
O=C(OC=1=C2C=CC=CC2=NC=1c1ccccc1)C


Note that the sulfur compounds appear to render correctly on Daylight's Depict site:

In total 311 problematic SMILES from the Karthikeyan collection were removed (see ONSMP009).
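Note that strings like the two shown above are syntactically well-formed - the errors are chemical (valence), so only a valence-aware toolkit such as RDKit, the CDK or ChemSketch will flag them. A purely textual pre-filter, sketched below, catches only gross syntax problems and passes both:

```python
def smiles_balance_ok(smiles):
    """Crude syntactic pre-filter: check that () and [] are balanced.
    This deliberately cannot detect valence errors such as a pentavalent
    carbon or a sulfur drawn with extra hydrogens - those need a
    cheminformatics toolkit that actually parses the chemistry."""
    depth_round = depth_square = 0
    for ch in smiles:
        if ch == "(":
            depth_round += 1
        elif ch == ")":
            depth_round -= 1
            if depth_round < 0:
                return False
        elif ch == "[":
            depth_square += 1
        elif ch == "]":
            depth_square -= 1
            if depth_square < 0:
                return False
    return depth_round == 0 and depth_square == 0

# Both problematic strings from the Karthikeyan collection pass this check:
print(smiles_balance_ok("[S+2]([O-])([O-])(OCC#N)c1ccc(C)cc1"))  # True
print(smiles_balance_ok("O=C(OC=1=C2C=CC=CC2=NC=1c1ccccc1)C"))   # True
```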

With the accumulation of melting point sources, overlapping coverage is revealing likely incorrect values. For example, 5 measurements are reported for phenylacetic acid.

Four of the values cluster very close to 77 C and the other - from the Karthikeyan dataset - is clearly an outlier at 150 C.
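Once several independent values per compound are available, a simple distance-from-median rule is enough to flag this kind of outlier. A sketch with illustrative values (the post only states that four of the five measurements lie close to 77 C):

```python
import statistics

def flag_outliers(values, tol=5.0):
    """Flag measurements further than `tol` C from the median.
    The median is robust to a single bad value, unlike the mean."""
    med = statistics.median(values)
    return [v for v in values if abs(v - med) > tol]

# Illustrative melting points (C) for phenylacetic acid: four values
# near 77 C plus the Karthikeyan outlier at 150 C.
mps = [76.5, 77.0, 77.0, 77.5, 150.0]
print(flag_outliers(mps))  # [150.0]
```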

In order to predict the temperature dependence for the solutes in our database, Andy collected the EPI experimental melting points, which are listed under the predicted properties tab in ChemSpider (ultimately from the EPA). (There are predicted EPI values there but we only used the ones marked exp).

This collection of 150 compounds was then listed in a spreadsheet (ONSMP010) and each entry was marked as having only an EPI value (44 compounds) or having at least one other measurement from another source (106 compounds). Of those having at least one more value, 10 showed significant differences (> 5 C) between the measurements. Upon investigation, many of these point strongly to the error lying with the EPI dataset. For example, the EPI melting point for phenyl salicylate is over 85 C higher than that reported by both Sigma-Aldrich and Alfa Aesar.


These preliminary results suggest that as much as 10% of the EPI experimental melting point dataset is significantly in error. Only a systematic analysis over time will reveal the full extent of the deficiencies.

So far the Alfa Aesar dataset has not produced many outliers when other sources are available for comparison. However, even here there are some surprising results. One of the most well-studied organic compounds - ethanol - is listed with a melting point of -130 C by Alfa Aesar, clearly an outlier from the other values, which cluster around -114 C.

When downloading the Karthikeyan dataset from Cheminformatics.org, a Trust Level field indicates: "High - Original Author Data".

It would be nice if it were that simple. Unfortunately there are no shortcuts. There is no place for trust in science. The best we can do is to collect several measurements from truly independent sources and look for consensus over time. Where consensus is not obvious and information sources are exhausted, performing new measurements will be the only option left for making progress.

The idea that a dataset has been validated - and can be trusted completely - simply because it is attached to a peer-reviewed paper is a dangerous one. This is perhaps the rationale used by projects such as Dryad, where datasets are not accepted unless they are associated with a peer-reviewed paper. Peer review was not designed to validate datasets - even if we wanted it to, reviewers don't typically have access to enough information to do so.

The usefulness of a measurement depends much more on the details in the raw data, obtained by following the chain of provenance (when available), than on where it is published. To be fair, in the case of melting point measurements there really isn't much additional experimental information to provide, except perhaps an NMR of the sample to prove that it was completely dry. In such a case, we have no choice but to use redundancy until a consensus number is finally reached.
          ONS Solubility Challenge Book cited in a Langmuir nanotechnology paper        
An interesting application of the data from the Open Notebook Science Solubility Challenge has recently been reported in Langmuir: "Enhanced Ordering in Gold Nanoparticles Self-Assembly through Excess Free Ligands" by Cindy Y. Lau, Huigao Duan, Fuke Wang, Chao Bin He, Hong Yee Low and Joel K. W. Yang (Feb 24, 2011).

The context is as follows, and the reference is to Edition 3 of the ONS Solubility Challenge Book.
Although to our best knowledge there lacks literature value of OA solubility in the two solvents, the 10-fold better solubility of 1-otadecylamine (sic), the saturated version of oleylamine, in toluene than hexane is in line with our hypothesis.(33) This increased solubility caused the OA molecules that were originally attached to the AuNPs to gradually detach from the AuNPs, which is supported by our observations in poor AuNP stability and surface-pressure isotherms.
This is a nice application of solubility to understand and control the behavior of gold nanoparticles. It is in line with some of the applications I discussed at a recent Nanoinformatics conference, where I think there is a place for the interlinking of information between solubility and nanotechnology databases.

I have to admit that it is somewhat ironic to see this citation in Langmuir, given the controversy about a year ago (post and FF discussion) regarding the citation of non-traditional literature.
          Clinical EHR Applications Administrator / Adecco / New York, NY        
Adecco/New York, NY

Clinical Applications Administrator – New York, NY
Adecco Engineering and Technical, a division of the world leader in the recruitment of engineering and information technology professionals, has an immediate opening for an EHR/Clinical Applications Administrator on a full-time basis with a leading institution in New York, NY.

Job Description
This role will be responsible for the administration of the institution's EHR system and all related clinical applications. This role will serve as the principal subject matter expert for the EHR. Previous experience with dental applications or enterprise scale EHR applications is required.
Responsibilities will include application configuration, upgrades, troubleshooting (in conjunction with vendor) and report writing.
The position supports all EHR end-users, including clinical, administrative, and research staff, and will support complete patient care, revenue cycle, and clinical education environments.

Primary Responsibilities:

Application installation/upgrade/troubleshooting, configuration, end-user support for application problems and requests
Serve as subject matter expert for application (specific previous knowledge of application not required)
Report Writing:

Analyze information in EHR system to establish knowledge of data model for accurate retrieval and use in report writing
Utilize available tools such as TSQL, Crystal
Design and produce necessary reports in support of clinical, business, and academic operations
Present report data in meaningful and accessible way
Collaborate with end users to gather report requirements and ensure proper testing and validation


Qualifications

Bachelor's degree (required)
4 years' experience in IS, business administration, or systems analysis and system design specific to Health Information Technology
Experience with dental practice management software, e.g. axiom Enterprise, Dentrix Enterprise (not required)
Report writing and tool knowledge, e.g. Crystal Reports, Oracle PL/SQL
Knowledge of DB concepts

Data communication processes and network design
Knowledge of principles and trends in application software
Ability to interpret and implement data models in relational databases
Additional database experience preferred

Critical thinking and Decision making

Analyze stakeholder requirements and be able to translate them into rational and cohesive application configuration



Education

Bachelor's required (Medical Informatics or Computer Science preferred)


If you are interested in this opportunity or other opportunities available through Adecco Engineering and Technical, please apply online or email directly to james.stewart@adeccona.com.

Apply To Job
          Comment on CASHIER AND IT INTERNSHIP OPENINGS FOR TAUBATÉ by Anonymous        
I am looking for a cashier position; I have experience, I am 42 years old, I have taken an IT course, and I live in Sjcampos, south zone. My e-mail is: debyisa@hotmail.com. Thank you. Débora
          Plancal Nova, an MEP solution combining CAD and calculation        

Plancal Nova, an MEP solution combining CAD and calculation. François Metteil, director of Trimble MEP France, and Julien Brousse, product manager, present the Plancal Nova solution.



          Lascom unveils its white paper: "Managing document flows in engineering projects"        

Lascom unveils its white paper: Legislation, standards, industrial organization... The manufacturing world is in constant evolution and companies must adapt to remain competitive. (...)



          The digital mockup, BIM and IFC according to Bentley Systems        

The digital mockup, BIM and IFC according to Bentley Systems. For BIM'S Day, the btpinformatic.fr editorial team welcomed into its studio all the software vendors involved in the eXpert project. In this video, Guillaume Picinbono, Modeling and Enriched Virtual Environments Project Manager at the CSTB (...)



          A map to optimize graphics productivity        

A map to optimize graphics productivity. Having visibility into the scheduling of one's resources, both staff and equipment, seems more than essential for any construction company. This application surprises with its ease of use and clarity (...)



          The news for March 2010 - edition no. 1, Interclima+elec special        

The news for March 2010 - edition no. 1, Interclima+elec special. Your fortnightly news program covering the essentials of construction-industry IT and the latest reports from the btpinformatic.fr editorial team. Filmed at Interclima + elec 2010 with, as guest, Philippe Brocart, director of the trade show (...)



          Eiffage Construction's methods office trains on Revit with AriCad        

Eiffage Construction's methods office trains on Revit with AriCad. Since September 2012, Eiffage Construction has called on the IT services firm AriCad to assist and train the engineers and technicians of its methods office (...)



          BIM experience, a distinctive asset of Oger International        

BIM experience, a distinctive asset of Oger International. A construction engineering firm operating around the world, Oger International quickly made BIM innovation a factor of success and strategy; a true distinctive asset according to Jean-Charles Bangratz (...)



          The City of Paris optimizes its real-estate strategy with GIS        

The City of Paris optimizes its real-estate strategy with GIS. Planète SIG takes you into various departments of the City of Paris that increasingly rely on geographic information systems, not only for operational purposes but also for decision-making (...)



          Project management targets productivity and strategy        

Project management targets productivity and strategy. Vinci, Eiffage, Setec, Alstom... many engineering and construction companies manage their projects over full life cycles using the PLM approach. Jean-Louis Henriot, CEO of Lascom, explains why (...)



          RFID tags to watch over construction sites        

RFID tags to watch over construction sites. How and why use RFID in construction? While the government supports the use of connected objects through a national plan, notably for smart cities and their smart buildings and networks, the FFB promotes RFID to fight theft (...)



          The FZ-M1 ends fears about using a tablet on site        

The FZ-M1 ends fears about using a tablet on site. Panasonic further expands its device range with the Toughpad FZ-M1, a fanless, ultra-rugged tablet as powerful as a desktop computer (...)



          Lafarge makes its concrete smart        

Lafarge makes its concrete smart. The D2 tower construction site at La Défense served as a test for embedding RFID chips in concrete. An innovative process that opens up very interesting prospects for smart buildings (...)



          Bentley Systems commits to Level 3 BIM        

Bentley Systems commits to Level 3 BIM. At Bentley Systems' annual conference, Greg Bentley gave an exclusive interview to btpinformatic.fr. An opportunity to discuss the evolution of BIM, which will oscillate between the virtual and the real, and the partnership with Siemens in Industry 4.0 (...)



          ClimaWin AEC: BBS launches its first Trojan horse        

ClimaWin AEC: BBS launches its first Trojan horse. To put an end to data re-entry in Revit, BBS Slama is launching a module that installs directly in Autodesk's flagship modeler. An eagerly awaited first that should be imitated for other BIM-oriented modelers (...)



          BRZ wins the 2013 innovation gold medal with BIM4You        

BRZ wins the 2013 innovation gold medal with BIM4You. In step with one of the major themes of this edition of Batimat 2013, the BIM4You solution offered by BRZ France wins the 2013 innovation competition in the IT category (...)



          Engineering firms must speed up their adoption of the BIM approach        

Engineering firms must speed up their adoption of the BIM approach. Representatives of Chaix Morel et associés, Patriarche & co and the engineering firm Kleber Daudin explain in a round table the full value of the digital mockup for engineering firms (...)



          Is the BIM approach the same for new construction and renovation?        

Is the BIM approach the same for new construction and renovation? Through the Fondation Louis Vuitton and Google Paris headquarters projects in particular, this round table reveals the ultimately similar approaches deployed at Bouygues Construction, the engineering firm Bianchi and the Studios Architecture practice (...)



          The digital mockup moves into the construction phase        

The digital mockup moves into the construction phase. Faced with the growing use of BIM in construction, btpinformatic.fr organized several conferences at Batimat 2013. This one brings together representatives of Bouygues Bâtiment International, Oger International, Brunet Saunier and ACD Girardet & associés (...)



          AriCad develops its BIM expertise        

AriCad develops its BIM expertise. Once again present at the Batimat trade show alongside Autodesk, AriCad shows its determination to support construction-industry players as they discover BIM (...)



          A muddled assessment of Batimat 2013        

A muddled assessment of Batimat 2013. The 2013 edition of Batimat put BIM in the spotlight, to the great benefit of visitors hungry for information on the subject. But one still had to understand who does what with whom. Our review in pictures (...)



          Account Manager Health Informatics - Philips - Home Based        
Demonstrates superior industry knowledge on market trends, products and current Philips product portfolios and uses this knowledge to his/her advantage....
From Philips - Wed, 10 May 2017 21:04:49 GMT - View all Home Based jobs
          A Bright Future: Innovation Transforming Public Health in Chicago        
Big cities continue to be centers for innovative solutions and services. Governments are quickly identifying opportunities to take advantage of this energy and revolutionize the means by which they deliver services to the public. The governmental public health sector is rapidly evolving in this respect, and Chicago is an emerging example of some of the changes to come. Governments are gradually adopting innovative informatics and big data tools and strategies, led by pioneering jurisdictions that are piecing together the standards, policy frameworks, and leadership structures fundamental to effective analytics use. They give an enticing glimpse of the technology's potential and a sense of the challenges that stand in the way. This is a rapidly evolving environment, and cities can work with partners to capitalize on the innovative energies of civic tech communities, health care systems, and emerging markets to introduce new methods to solve old problems.
          What Is “Informatics”?        
No abstract available
          Urgent Challenges for Local Public Health Informatics        
No abstract available
          Health Informatics in the Public Health 3.0 Era: Intelligence for the Chief Health Strategists        
No abstract available
          Electronic Health Records and Meaningful Use in Local Health Departments: Updates From the 2015 NACCHO Informatics Assessment Survey        
Background: Electronic health records (EHRs) are evolving the scope of operations, practices, and outcomes of population health in the United States. Local health departments (LHDs) need adequate health informatics capacities to handle the quantity and quality of population health data.
Purpose: The purpose of this study was to gain an updated view using the most recent data to identify the primary storage of clinical data, status of data for meaningful use, and characteristics associated with the implementation of EHRs in LHDs.
Methods: Data were drawn from the 2015 Informatics Capacity and Needs Assessment Survey, which used a stratified random sampling design of LHD populations. Oversampling of larger LHDs was conducted and sampling weights were applied. Data were analyzed using descriptive statistics and logistic regression in SPSS.
Results: Forty-two percent of LHDs indicated the use of an EHR system compared with 58% that use a non-EHR system for the storage of primary health data. Seventy-one percent of LHDs had reviewed some or all of the current systems to determine whether they needed to be improved or replaced, whereas only 6% formally conducted a readiness assessment for health information exchange. Twenty-seven percent of the LHDs had conducted informatics training within the past 12 months. LHD characteristics statistically associated with having an EHR system were having state or centralized governance, not having created a strategic plan related to informatics within the past 2 years throughout LHDs, provided informatics training in the past 12 months, and various levels of control over decisions regarding hardware allocation or acquisition, software selection, software support, and information technology budget allocation.
Conclusion: A focus on EHR implementation in public health is pertinent to examining the impact of public health programming and interventions for the positive change in population health.
          New volume: Rivista di Studi Pompeiani 24        
The latest volume of Rivista di Studi Pompeiani is out, which reports on activities and research that took place in the Vesuvian sites in 2013. The contents are:


Ernesto De Carolis, Maria Rosaria Senatore, Commemorazione di Annamaria Ciarallo
Pietro Giovanni Guzzo, Editoriale: A volte ritornano
Ernesto De Carolis, Giovanni Patricelli, Rinvenimento di corpi umani nel suburbio pompeiano e nei siti di Ercolano e Stabia
Antonella Ciotola, Ancora sul rilievo neoattico di Ercolano: una diversa lettura
Gaetana Boemio, Luana Toniolo, Ceramica da mensa da contesti tardo antichi a Napoli e nel Vesuviano, un confronto tra costa ed entroterra
Marie Tuffreau Libre, Isabel Brunie, Sebastien Daré, Peinture et perspective à Pompei, un ensemble d'objets liés au travail pictural (I, 16, 2.3.4)
Maria Rosaria Vairo, Robyn Jennifer Veal, Girolamo Ferdinando De Simone, Lo sfruttamento delle risorse boschive in Campania nel tardo antico: l'evidenza antracologica dai siti vesuviani
Antonio Varone, Titulorum pictorum Pompeianorum imagines. Integrazioni

Attività della Soprintendenza Speciale per i Beni Archeologici di Pompei Ercolano Stabia
Ufficio Scavi di Pompei (G. Stefani)
Laboratorio Ricerche applicate (E. De Carolis)
Servizio Secondo tecnico Informatico: Introduzione (V. Papaccio); L'area archeologica di Villa dei Papiri di Ercolano. Un primo intervento di ingegneria naturalistica a protezione di persone e reperti (F. Chiatto). Lavori presso l'antiquarium di Boscoreale, Un esempio di manutenzione ad Ercolano, Messa in sicurezza di regiones in Pompei (I. Bergamasco)
Ufficio Editoria (M.P. Guidobaldi)
Ufficio Scavi di Oplontis (L. Fergola)
Ufficio Scavi di Boscoreale (A.M. Sodo)
Ufficio Scavi di Ercolano (M.P. Guidobaldi, A. Wallace Hadrill, J. Thompson, D. Camardo, A. Cinque, G. Irollo, M. Notomista, P.M. Pesaresi, A. Laino, A. D'Andrea, M. Giacobbe Borelli, Ch. Biggi, S. Court)
Vesuviana, Attività Alma Mater Studiorum a Pompei ed Ercolano 2006-13 (A. Coralini)
Ufficio Scavi di Stabia (G. Bonifacio)
Relazione preliminare sulla decorazione marmorea alle Ville Arianna e San Marco, Stabiae (S. Barker, J. Clayton Fant, C.A. Ward)
Stabiae, Villa Arianna, Relazione sulle campagne di scavo e restauro condotte dal Museo Hermitage e Fondazione RAS (P. Gardelli, A. Butyagin)
Preliminary Field Report on the 2012 Excavations at the Villa San Marco, Stabiae (T. Terpsta)
Ufficio Scavi Zone periferiche (C. Cicirelli)
 

Discussioni
Valorizzazione, fruizione sostenibile dei siti, accessibilità ecc. (P.G.Guzzo)
 

Recensioni
Ciceronis filius, testo latino di Ugo Enrico Paoli, edizione italiana e commento di E. Renna, Napoli 2013 (A. Casale)
Davvero. La Pompei di fine '800 nella pittura di Luigi Bazzani, Catalogo Mostra Bologna-Napoli 2013, a cura di D. Scagliarini, A. Coralini, R. Helg, Bologna 2013 (L. Jacobelli)
Giuseppe Maggi, Ercolano. Fine di una città, Gorgonzola (Mi) 2013 (V. Castiglione Morelli)
Kristina Milnor, Graffiti & the Literary Landscape in Roman Pompeii, Oxford University Press, Oxford-New York 2014 (A. Varone)
Vita dell'Associazione Amici di Pompei: Lapide in memoria di Amedeo Maiuri (RED.)

           IGS IT training         
If your ambition is to become an IT professional able to work in and adapt to the sector's constant evolution, and to remain operational for as long as possible while embracing the notion of responsibility, put your trust in IGS IT training in Paris, Toulouse or Lyon.
          OpenHelix        

OpenHelix Search Portal provides a mechanism to search for and evaluate online bioinformatics and genomics resources by providing contextual displays of search results. In addition, OpenHelix empowers researchers by distributing extensive and effective tutorials and training materials on the most powerful and popular genomics resources, and by contracting with resource providers to provide comprehensive, long-term training and outreach programs.

Brief Description: 
OpenHelix Search Portal provides a mechanism to search for and evaluate online bioscience resources.
Access: 
Subscription
Campus: 
Ann Arbor
Mobile Version: 
No mobile friendly interface available.
Icons: 
Authorized UM users (+ guests in UM Libraries)
New Resource Indicator: 
Not New
Mark as New Until: 
Wednesday, August 20, 2014
Type: 
Database
Vendor: 
OpenHelix
Internal Note: 
pricek 5/20/14
Creator: 
National Human Genome Research Institute (NHGRI) OpenHelix
High Level Browse: 
Keywords: 
genomics, bioinformatics
Database availability dates: 
Friday, April 3, 2015
Special Message Dates: 
Friday, April 3, 2015 to Saturday, October 3, 2015
Special Message: 

OpenHelix has been cancelled.   If you have questions or concerns, please email thlibrary@umich.edu.

 

Date to Unpublish Database: 
Saturday, October 3, 2015

          Hope always dies last...!        
I have just returned from Padua where, at the beginning of the week, I witnessed one of the most beautiful things I could ever have expected to experience. I am talking about the meeting with citizens held by the "Padova a 5 stelle" civic list, supported by Beppe Grillo's blog. Last week, browsing around the internet, I saw that this event, with Grillo present, would take place on June 1st, and I decided, together with some friends, to go. I am not from Padua, but I spend practically the whole week living in the streets of Padua as a university student. I must admit that mine was mostly curiosity to see how these citizens organized and managed the event, since for obvious reasons I have no interest in following a rally with a view to voting in that province. Never, and I swear never, would I have expected such a thing. Already at 7:30 in the evening, Piazza dei Signori was packed with people ready to offer some money for this civic list, to hear what the program was, to support the candidates, to sign up to join some of the Meet Up groups present in the city. All this without a scrap of publicity from local newspapers, without even half a video report from local television. Only the Net! Only the internet filled that square with ordinary people: many young people as well as pensioners, fathers with children on their shoulders as well as housewives leaning out of their windows. A fantastic thing, and I say so because, in that same square, I happened to come across one of the final rallies of the Partito Democratico for the recent general election. That time Veltroni was there, at the height of his popularity (such as it was), but the square was not as full as this time. And the contrast looks even sharper to me considering that back then the national media followed Veltroni's campaign day by day. The civic lists, on the other hand, get no attention from anyone.
Ho sentito le immancabili invettive di Beppe, ma ho sentito anche molte cose nuove che non scorgo in nessuna figura politica italiana. Ad esempio le idee che questi cittadini hanno dell'Acqua pubblica, le idee che hanno di Mobilità, non intesa come costruzione di nuove strade per muoversi ma concepita come miglioramento delle tecnologie informatiche per evitare di muoversi inutilmente, le idee sull'Ambiente, le idee sulla Connettività da dare liberamente e gratuitamente a tutti i cittadini, le idee sullo Sviluppo della città. E a fianco a Beppe non ho visto gente in giacca e cravatta che prendeva in mano il microfono e aizzava la folla parlando di giudici, di veline e di complotti ma ho visto gente tranquilla che esponeva idee. Ho visto quattro o cinque ingegneri, un paio di informatici, due architetti, un commercialista, un Ricercatore universitario ed una mitica imprenditrice agricola che si definiva contadina e che ha deciso di scendere in campo con un bagaglio di conoscenze sull'Ambiente che pochi altri Dottorati hanno. Le idee queste sconosciute mi vien da dire. Esse sono sopite in questo Paese perché vogliono tenerle narcotizzate i politici ed i media. Io pensavo che non ci fossero più idee in questa Italia ed invece Lunedì ho avuto la prova di quanto potente sia il Regime partitocratico e mediatico perché non mi fa mai sentire le persone che espongono idee, bensì quelli che sparano cazzate. Ho deciso di inserirvi alcuni video perché guardiate cosa può fare la Rete. C'è pure il video di questa settimana della rubrica "Grillo 168" in cui potete vedere Grillo in cima al palco allestito a Padova. Io sfortunatamente non ho la possibilità di votare queste persone nel mio Comune ma so che a girare per questo blog siete in tanti e da tutta Italia. Se avete la fortuna di avere cittadini così nel vostro Comune votateli perché sono fantastici e sono e saranno il cambiamento solo se noi saremo il cambiamento. 
Lo si può fare da domani pomeriggio mettendo una croce nel simbolo delle "Liste Civiche a 5 Stelle". Ognuno sia il leader di se stesso e scelga se veramente vuol cambiare la propria città con un atto di vera democrazia dal basso. Evviva la Rete, evviva i cittadini con l'elmetto!








          Training Tools: Game-Based Assessment/Quiz Template        
Dev, Parvati; Youngblood, Patricia. This interactive template was created for HIBBs module developers, or users of HIBBs in training activities, as a tool for creating a simple game around any content. Game adaptors can identify the content to be covered, create questions and answers for each gameboard block, and paste them into the game template. The game can be used in a classroom setting with teams of players competing against each other, or it can be modified for use by an independent learner as an aid in reviewing material. Instructions for adapting the game: 1) Select the content to be learned from a health informatics textbook, class lecture, or other learning resource; 2) Create questions and answers for each block on the gameboard; 3) Have the questions and answers reviewed by a content specialist; 4) Replace the existing questions and answers by pasting your content into the game template.
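The adaptation steps above come down to pairing each gameboard block with one question and one answer. As a minimal sketch (a hypothetical in-memory structure, not the actual HIBBs template format, which is not shown here), such a question bank might look like this:

```python
# Hypothetical question bank keyed by gameboard block number
# (illustration only; the real HIBBs template format may differ).
gameboard = {
    1: {"question": "What does EHR stand for?",
        "answer": "electronic health record"},
    2: {"question": "What does HL7 define?",
        "answer": "healthcare data exchange standards"},
}

def check_answer(block: int, given: str) -> bool:
    """Return True if the given answer matches the block's answer (case-insensitive)."""
    return gameboard[block]["answer"].lower() == given.strip().lower()

print(check_answer(1, "Electronic Health Record"))  # → True
```

A classroom adaptor would replace the two sample entries with reviewed questions and answers for every block on the board.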
          Informatica unveils a new release of its Intelligent Data Lake system for data monetization        
Informatica IDL provides data discovery and fast access to data, along with the ability to build a prototype and test a hypothesis
          Deputy Prime Minister Pellegrini: we must prepare Slovakia for the automation of jobs        

Experts' forecasts tell us that in about 20 years roughly 50% of jobs will be automated: this means that job seekers will be expected to have a certain level of IT skills, he said on Tuesday . . . → Read more

           Usage of artificial intelligent technology in healthy food planning for people with disabilities         
Sicurello, Francesco, Salem, Abdel-Badeeh M., De Luca, Anna Rita, Stenta, Urbano, Amon, Tomaz and Revett, Kenneth (2009) Usage of artificial intelligent technology in healthy food planning for people with disabilities. In: EMMIT 2009: Euro-Mediterranean Medical Informatics and Telemedicine, 5th International Conference, 16 - 18 Oct 2009, Beirut, Lebanon. (Unpublished)
          Journal of Proteomics & Bioinformatics        
Volume 10, Issue 7
          Immunoinformatics Prediction of Peptide-Based Vaccine Against African Horse Sickness Virus        
Malaz Abdelbagi, Tarteel Hassan, Mohammed Shihabeldin, Sanaa Bashir, Elkhaleel Ahmed, Elmoez Mohamed, Shawgi Hafiz, Abdah Abdelmonim, Tassneem Hamid, Shimaa Awad, Ahmed Hamdi, Khoubieb Ali and Mohammed A Hassan
          Prediction and Conservancy Analysis of Multi-epitope Based Peptide Vaccine Against Merkel Cell Polyomavirus: An Immunoinformatics Approach        
Mawadda AbdElraheem AwadElkareem, Soada Ahmed Osman, Hanaa Abdalla Mohamed, Hadeel AbdElrahman Hassan, Ahmed Hamdi Abuharaz, Khoubieb Ali Abdelrahman and Mohamed Ahmed Salih
          Software for Building Mind Maps or Concept Maps: FreeMind, CmapTools and Bubbl.us        
Breaking Paradigms in Education 
(a model mind map using keywords and drawings)

A video developed entirely in mind-map format: "This illustrated video prompts us to reflect on the breaking of educational paradigms." Source: http://www.youtube.com/watch?v=mAoTdnL9Ifw&feature=related


The difference between a mind map and a concept map


Mind maps are undeniably useful tools for organizing brainstorms (a "toró de palpites", as Brazilian country slang has it) and also for outlining processes and projects. Concept maps, as proposed by Joseph Novak, seek instead, broadly speaking, to detail the connections between the terms that enter the diagram, by means of "linking" phrases. (Source: http://entrezeroeum.blogspot.com.br/2011/09/volto-hoje-ao-tema-dos-mapas.html)
a) Mind map: http://educacaodialogica.blogspot.com.br/2009/02/qual-diferenca-entre-mapa-mental-e-mapa.html
b) Concept map: http://pt.wikipedia.org/wiki/Mapa_conceitual
b.1) What are the most common mistakes people make when building concept maps?
b.2) What are propositions?


Source: http://www.youtube.com/watch?v=5ZV8SUu1vHI&feature=player_embedded

A model lesson plan for working on concept maps with students:
http://www.educared.org/educa/index.cfm?pg=internet_e_cia.informatica_principal&id_inf_escola=642

Software: 
FreeMind builds maps out from a central theme, with hierarchical structures. I use it to present a line of work and, from there, use links that call up other documents and spreadsheets. I use it for more structured presentations of concept maps that have already been laid out.
Tutorial: How to use FreeMind: http://antoniopassos.com/blog/?p=26
http://freemind.sourceforge.net/wiki/index.php/Main_Page
http://www.baixaki.com.br/download/freemind.htm
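FreeMind stores its maps as a simple XML file (.mm) built around exactly the hierarchy described above: a central node with nested branches. As a minimal sketch (assuming only the commonly seen `<map>`/`<node TEXT="...">` layout; attribute details vary between FreeMind versions), such a file can even be generated programmatically:

```python
import xml.etree.ElementTree as ET

# Build a tiny hierarchical mind map in FreeMind's .mm layout:
# one central node with three child branches.
root = ET.Element("map", version="1.0.1")
center = ET.SubElement(root, "node", TEXT="Line of work")
for branch in ("Documents", "Spreadsheets", "Links"):
    ET.SubElement(center, "node", TEXT=branch)

# Write the map; FreeMind can open files with this nested structure.
ET.ElementTree(root).write("work.mm", encoding="utf-8", xml_declaration=True)
```

The node names here ("Line of work", "Documents", and so on) are illustrative placeholders, not part of any FreeMind template.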

CmapTools, by contrast, works more from the perspective of creating maps that do not necessarily contain a hierarchical relationship. I can pull several arrows from the same object and keep the map without a defined center. I can also link other documents from it. I use it more for conceptual diagramming and brainstorms, for example. If installed on a server, CmapTools allows collective maps to be built from several computers.
http://cmap.ihmc.us/
(Source: http://entrezeroeum.blogspot.com.br/2011/09/volto-hoje-ao-tema-dos-mapas.html)
Installing and using CmapTools
a) Installing CmapTools on Linux Educacional 3.0:
Videos showing installation with and without the help of Wine
http://rafaelnink.com/blog/2011/04/20/instalando-o-cmap-tools-no-linux-educacional-3-0/
b) Using CmapTools
http://www.youtube.com/watch?v=uJaT9LlKvn4&feature=related
c) CmapTools tutorial at: http://penta2.ufrgs.br/edutools/tutcmaps/tutindicecmap.htm

Bubbl.us is an online tool that requires no installation: https://bubbl.us/

Tips
- If you are going to install CmapTools or FreeMind, remember that these programs need a working Java runtime. You can download or update it on your machine at: www.java.com
- An interesting mind map on "How to Stay Focused in an Age of Distractions":
http://desafiocriativo.blogspot.com.br/2012/05/como-manter-o-foco-na-era-das.html




          How people, companies and hackers capture your IP address        
If you believe that only your internet service provider (ISP) knows your IP address, think again. (Your IP address is your electronic ID number when you are online.) With the right technical know-how and a few computer tricks here and there, companies, government institutions, young [...]
           Beyond the Common-Sense of Practice : A Case for Organizational Informatics         
Henfridsson, Ola, Holmström, Jonny and Söderholm, Anders . (1997) Beyond the Common-Sense of Practice : A Case for Organizational Informatics. Scandinavian Journal of Information Systems, 9 (1). pp. 47-56. ISSN 0905-0167
          PAYROLL SOFTWARE TESTER - Pesaro e Urbino        
For a major client, a leader in the IT sector, Manpower is recruiting a software tester (specifically for the professional firms of payroll consultants). We are looking for a candidate with knowledge ...
          Experienced IT technician - Ascoli Piceno        
A leading telecommunications company is recruiting an IT technician/system administrator. The ideal candidate must be able to manage and analyze data through a CRM, and has excellent ...
          Healthcare Informatics: Who's Hiring?        
For the past several years I have been touting healthcare informatics technology (HIT) as an alternate career option for life scientists. For those of you who may not know, healthcare informatics is a fie...
          February 2017 Meeting        

Speaker: Nathan J. Edwards, Georgetown University

Topic: Beyond Peptide Identification Informatics: Multi-Search, Systems Proteomics, Proteogenomics, Phyloproteomics, and Glycoproteomics

Date: Monday, February 27, 2017

Time: 6:15 pm Dinner, 7:15 pm: Presentation

Location: Shimadzu Scientific Instrument, Inc. Training Center 7100 Riverwood Drive, Columbia, MD 21046 (Directions)

Dinner: Please RSVP to Katherine Fiedler (Katherine.L.Fiedler@fda.hhs.gov) before February 27 if you will be attending the dinner or are presenting as a vendor.

Abstract: Bottom-up proteomics by LC-MS/MS is one of the most widely used analytical workflows for characterizing the expressed proteins of the cell. With continued improvements in mass spectrometer speed, accuracy, and versatility, we can increase the depth of coverage and detail of protein characterization, but only if our data analysis capabilities keep pace with the size, and complexity, of the collected spectral data. Tens of spectral datafiles with tens of thousands of spectra are now passé. Protein sequence databases provide multiple proteoforms and amino-acid variants per gene, and the number of species and strains with genome sequences continues to grow. Connecting identified proteins with their pathways, to provide a functional context for differentially abundant proteins remains a challenge, and the exploration of non-template driven post-translational modifications, such as glycosylation, requires novel analytical and mass-spectrometry techniques that demand new data-analysis techniques. Crucially, each of these analytical contexts requires a careful consideration of the potential for false conclusions, and strategies for estimating statistical significance.…


          Live Kerala Panchayath Election Results 2015         
Kerala election results 2015


The National Informatics Centre (NIC) site is publishing live trends and results of the Kerala Local Body Election 2015.

Live Election Results


          Computer fraud: in Como, one report for every 572 inhabitants        
Scams, fraud and computer crime: a phenomenon in constant growth. As technology spreads, so does the number of people who use that same technology to commit crimes. The figures on computer crime in Lombardy were released by Das, a Generali Italia company, which calculated the growth of the phenomenon over the five years from […]
          Data management: the six key needs for CIOs        

Commvault has identified six distinct market needs that CIOs and technology leaders must address, pairing them with the fundamental principles of modern data management.

Commvault asserts that data management is changing and that customers today face four long-term trends: the move to the cloud, growing demand for data security and compliance, anywhere computing, and data growth.

These trends have changed the IT industry and data management, giving shape to six needs:

  • Open, standardized infrastructure: customers need to exploit the cost and flexibility advantages of commodity infrastructure
  • New mandates for data recovery: customers cannot afford data loss, and restore windows keep getting tighter
  • Extensible, integrated analysis tools: customers demand search, visualization, graphing and correlation tools built into their data management solutions or accessible from standard third-party query languages.
  • Access and collaboration: users should be able to access every copy of their data universally and seamlessly, regardless of when and where it was created. They must also be able to share their data securely with other users in order to extract the most business value from it.
  • End-to-end governance: companies must keep all their data under control, which demands visibility, security, access and compliance.
  • Backup is not keeping pace with data growth: data volumes are now outstripping the ability of traditional backup solutions to meet current restore requirements.

Commvault's view of these needs guided its engineering and product management teams in developing the eleventh release of the company's solution.

The seven principles of modern data management

According to Commvault, there are seven principles that should characterize any company seeking to improve its data management strategy:

  1. Standardized platform access: eliminates obsolescence and provider lock-in; protects customers' investments and future technology roadmaps.
  2. Integrated data security: data is secure in transit, at rest and during access; enables secure communication during movement, storage and activation, with encryption, key management and role-based controls for every user, plus auditing and reporting for compliance monitoring across all locations.
  3. Native direct access: data is available in native format; native or on-demand data delivery services offer near-instant interactive access (recovery points) in the format the application requires, reducing operational effort, time and risk.
  4. Extensible search & query: analyze, visualize and optimize data; activate and unlock current and historical data by providing, from a single location, powerful seamless queries across multiple applications and storage locations, including virtual repositories, SaaS offerings and cloud solutions.
  5. Universal access and collaboration: share and synchronize apps, files and information securely; improve productivity and collaboration by giving users seamless universal access to all copies of their data, regardless of where and when they were created.
  6. Governance from the start: data is managed from the moment it is generated, allowing companies to significantly reduce the risk of breaches, loss, theft and non-compliance.
  7. Incremental change capture: all data is immediately usable; volume block-level protection makes it possible to greatly reduce workloads during data protection operations, with downstream efficiency in network and storage use. Only the delta blocks are moved, and only the blocks that changed are stored. This reduces the bandwidth and storage needed for ongoing recovery operations and speeds up RPO and RTO.
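The block-level approach described in point 7 - move and store only the blocks that changed since the last snapshot - can be illustrated with a small sketch (a generic illustration of the technique, not Commvault's implementation):

```python
import hashlib

BLOCK_SIZE = 4  # tiny blocks for illustration; real systems use far larger ones

def block_hashes(data: bytes) -> list:
    """Split data into fixed-size blocks and hash each one."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks whose content differs from the old snapshot."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# Only the middle block changed, so only that block would be moved and stored.
print(changed_blocks(b"AAAABBBBCCCC", b"AAAAXXXXCCCC"))  # → [1]
```

Shipping only the changed blocks is what cuts the bandwidth and storage cost of each protection pass.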

The article "Data management: the six key needs for CIOs" is original content from 01net.


          Comment on ¡Ay! ¡ay! ¡ay!… my «Google 'SPY'» by conde de montecristo        
As you know, I am, let's say, an IT guy. Rule number one: never trust anything to the cloud; it became quite clear to me how they look through your files and can even wipe and disable your account. Rule number two: register an account as, say, "prof.cojonciano", address "calle la esquina number 3555555, bajo Albiona, Freedonia". Rule number three: VPN. Rule number four: Tor. Rule number five: well, that will do for now. Rule number six, and the most important, the Gollum rule: they look at my precious, they are always watching; remember, admin, when we first met.
          Healthy Brains For Healthy Lives Workshop: Neuroinformatics        

Dear Colleagues,
 

Category: 
Medicine Research, Faculty of Medicine, MNI
17 Aug 2017, 09:30 to 18:00

          Massachusetts ever closer to ditching Microsoft formats        

[ from a post on the forum "Certification culture, plurality of IT solutions" - USR Lombardia ]

According to ElectricNews.net, the state of Massachusetts has proposed requiring all its employees to use open formats for electronic documents from the beginning of 2007. This US state's initiative could set off a domino effect, prompting other states to abandon proprietary formats (typically Microsoft's).
open source vs Microsoft
Tim O'Reilly (open source) and Richard Stallman (free software)

The choice of open formats stems mainly from the state's concerns about the future accessibility of documents being written today. Put simply, if a government or public administration writes a document in a proprietary format today (for example a birth certificate or a police report in Word format), how can it be certain that the document will still be readable in twenty, fifty, a hundred years? Twenty years from now Word will use who knows what format, and the only company that knows precisely how today's Word format is built is Microsoft. In twenty, fifty or a hundred years, will Microsoft remember it? Will Microsoft even exist?


          Homesteading the Noosphere        

Homesteading the Noosphere
by Eric S. Raymond

April 1998. After examining the contradiction between the "official" ideology defined by open-source licenses and the actual behavior of hackers, the essay surveys the customs that in practice govern the ownership and control of open-source software. It finds that these imply an underlying theory of property rights practically homologous to Locke's theory of land tenure. It then relates this to an analysis of hacker culture as a "gift culture", in which participants compete for prestige by giving away time, energy and creativity. Finally, it considers the implications of this analysis for conflict resolution in the culture at large, and develops some useful general guidelines.


          FUSS: free software for the schools of Alto Adige        

(read at: www.sophia.it)

philosophical gnu at school in the province of Bolzano

Teachers' misgivings were overcome once they had tried out the main features of the free software.

In the province of Bolzano, schools are moving to open source. Between July and August of this year, a customized version of Debian GNU/Linux, FUSS-Soledad, will be installed on every PC used for teaching. The operation is part of FUSS (Free Upgrade Southtyrol's Schools), a system-wide project funded by the European Social Fund to upgrade the teaching IT systems of the Italian-language schools in the province of Bolzano. Some 2,500 PCs, counting servers and workstations, will say goodbye to proprietary software, notes project coordinator Paolo Zilotti: "With this in mind, we intend to replace the current proprietary programs with programs distributed under free licenses, known generically as free software. This choice will also allow us to give students the same software they use at school, creating an IT culture based on sharing and spreading knowledge, as well as on students' active participation in the software production process."


          Open letter to the Minister of Public Education Moratti        

Taken from: scuola.linux.it

Free software for a quality school - Open letter, Milano-Treviso, 24 February 2005

To the Minister of the MIUR (comunicazione.uff2@istruzione.it)
Viale Trastevere
00100 ROMA

and c.c. to the Minister for Innovation and Technologies (l.stanca@governo.it)
Via Isonzo, 21/b
00198 ROMA

Subject: Free software for a quality school

Dear Minister of the MIUR,
an initiative entitled "In classe, ora di diritto d'autore: esercizi antipirateria" (also referred to as "Copy or Love") is under way in Italian state schools, run by the BSA (Business Software Alliance) and other organizations in collaboration with the administration you lead.

According to its promoters, this initiative is meant to raise Italian students' awareness of copyright in the field of new technologies, against "phenomena of software piracy". This is to happen through a series of study meetings, for which educational materials are made available to schools at: http://www.controlapirateria.org


          Power Point        

          USA: fewer prescriptions, but higher drug spending        
374 billion dollars - that is how much Americans spent on Rx drugs in 2014, 13 percent more than in 2013. The record figures come from a report by the IMS Institute for Healthcare Informatics.
          Dental Informatics: A Practical Use Case        

Have you ever heard of Dental Informatics? Described as the intersection of patient data, computer science and dental providers, it’s an approach focused on providing higher quality patient care through better management and use of information...

The post Dental Informatics: A Practical Use Case appeared first on Dental Marketing Tips For Dentists and Dental Professionals.


          One in four Spaniards considers football decisive when choosing a TV package        

One in four Spaniards (24.9%) considers football content decisive when signing up for a television package, according to the study 'El fútbol por televisión en España' by Comparaiso.




          Italy still a victim of digital backwardness: second to last in Europe        

Italy is still second to last in Europe for internet use: 60 out of 100 Italians use the Web, 3% more than the previous year, but that is still too little. Let's look at the details. Italy remains Europe's tail light for internet use […]

The article "Italy still a victim of digital backwardness: second to last in Europe" appears to be the first on Bar Giomba - Thoughts, phrases and emotions.


          Never mind Trump and nukes: the real threat comes from the web, and we are what is at stake        
21 May 2017 - Il Fatto Quotidiano. Everyone talks about Trump being the Commander in Chief of the armed forces and holding the nuclear codes, but few talk about the cyber threat, and about how a large share of the attacks on our democracy now arrive from the web.
          à¸à¸²à¸£à¹€à¸žà¸´à¹ˆà¸¡ Drive ใช้งานมากกว่าหนึ่ง(วิธีการสร้าง Patition)        

How to 

Create a Separate Data Partition for Windows

(splitting your machine's storage to guard against data loss)

Where this article comes from: 
1. Many of us have bought a computer or notebook from a shop, brought it home, opened Explorer and found only drive C: (with the operating system and everything else lumped together) and no drive D: at all. Sometimes the shop has no idea whether you want the disk partitioned; a thoughtful shop will split the drives for you, putting the operating system on drive C: and giving you at least a drive D: as well. If the shop does not, you have to work out how to add a drive yourself. That is exactly the problem this post tackles. 
Adding a drive is really about hedging the risk that the single drive the machine shipped with fails later, taking every piece of data on it down in one go. If your machine has only one drive, C:, that is precisely what will happen. 
2. The other day a colleague at the desk next to mine was in tears because her external hard disk had died. (External disks rose to prominence for the enormous amounts of data they can hold, after floppy disks collapsed; CD/CD-R/DVD media are fading in turn, and even thumb drives will eventually give way to whatever comes next.) The problem is: what do you do when it dies? Whatever the cause, in the end it is simply dead, and what about the mountain of data inside? Repair and data recovery can never be trusted to be 100% complete. All I could tell her was: "Prepare yourself to lose half of it."
So the data you consider safest should be kept on drive D:, inside the computer itself. (Drive C: is no place for data: if the system breaks, the data on C: goes with it. Beware, those of you who like keeping files on the Desktop; C: should be reserved for the operating system.)
Question: "Which drive is the safest, then?"
Answer: "Drive D:" (everyone gives the same answer, because its data can usually be recovered).
Three years ago I cried over this myself, because I thought the same as everyone else. The machine died, and the data on drive D: was wiped out with it. I consoled myself that it could be recovered; in the end I really did cry, because the recovery came back as empty folders, and the files that did come back were corrupted and would not open. Ten years of work, all kept on a single drive, gone.
From that day to this... I have felt like someone starting from scratch: three years spent rebuilding data and projects from the scraps left on various machines and old CDs. As for the ten years of data before that, in my fury I shouted as loudly as I could: "I will keep my data in the sky!"
Oh, brothers and sisters, it was a painfully earned lesson....

Question: "Where can data be kept so that it endures, surviving even the death of the machine?"
Answer: "The cloud" (the new storage paradigm of the 2012-2022 decade: data really is kept in the sky).

I will cover the cloud in a later article. For today, let us solve the immediate problem by creating more than one drive on our computers, to reduce the risk of data being lost. Until now we have created and filed work in separate folders, but they all still live on one drive, don't they? Better to create, keep and separate work by drive, so that a single failure cannot wipe everything out at once.

 à¸à¸²à¸£à¹€à¸žà¸´à¹ˆà¸¡(แยก) Drive ในเครื่องคอมพิวเตอร์

                      การเพิ่ม Drive ให้เครื่องคอมพิวเตอร์  7 ขั้นตอน (จบ) จะพูดเฉพาะใน Windows 7 นะครับ ส่วน XP กำลังจะล่มสะลาย คงไม่ต้องพูดถึงส่วน Windows 8 ก็ใช้วิธีการเทียบเคียงกันได้ครับ

ขั้นตอนที่ 1 : คลิ๊กขวาที่ My Computer บน Desktop แล้วเลือกเมนู "Manage"ครับ 





ขั้นตอนที่ 2 : เลือกหัวข้อ Storage เมนู "Disk Management"



ขั้นตอนที่ 3  : เลือก Drive ที่ท่านต้องการจะแบ่ง Part ในที่นี่ผมจะแยกจาก Drive : D ซึ่งมีความจุทั้งหมดอยู่ 348.57 GB ให้คลิ๊กขวาที่ Drive : D ครับแล้วเลือกหัวข้อ "Shrink Volume"

หลังจากนั้น ระบบจะประมวลผล (รอแป๊ป..หรือจะเอาฆ้อนดี..อิอิอิ)



ขั้นตอนที่ 4 : ขั้นสำคัญ...เพราะให้ใส่ขนาดความจุของ Drive ใหม่ที่เราต้องการในที่นี้ระบบคอมพิวเตอร์จะนับความจุของพื้นที่เป็น MB นะครับไม่นับเป็น GB ดังนั้น 1000 MB = 1 GB  à¸™à¸°à¸„รับจำไว้(อย่าลืมเชียวนา) สำหรับตัวอย่างที่ผมแยกมาผมจะเอาพื้นที่ Drive ใหม่ 50 GB เพราะจะเก็บงานเยอะเหมือนกัน ก็เท่ากับ 50000 MB  à¹ƒà¸™à¸šà¸£à¸£à¸—ัดที่ 3 นะครับ


ความหมายในแต่ละบรรทัดดังนี้


  • Total size before shrink in MB: --> ความจุทั้งหมดของไดร์ฟ C: ที่เราเลือก
  • Size of available shrink space in MB: --> ขนาดที่เราสามารถแบ่งได้
  • Enter the amount of space to shrink in MB: --> ใส่ขนาดที่เราต้องการจะแบ่ง
  • Total size after shrink in MB: --> ขนาดไดร์ฟ C: ที่เหลือ หลังจากการแบ่งไดร์ฟ
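Since the Shrink dialog counts in MB while we think in GB (Windows reckons 1024 MB to the GB), a tiny conversion helper - plain arithmetic, shown here only for clarity - takes the guesswork out of the third field:

```python
def gb_to_shrink_mb(gb: float) -> int:
    """Convert a desired partition size in GB to the MB value for the Shrink dialog.

    Windows Disk Management counts 1 GB = 1024 MB.
    """
    return int(gb * 1024)

print(gb_to_shrink_mb(50))   # → 51200: enter this for a full 50 GB partition
print(gb_to_shrink_mb(100))  # → 102400
```

So entering 50000 MB, as in the example above, actually yields a partition of just under 49 GB; enter 51200 MB if you want the full 50 GB.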


Step 5: When processing finishes, you will notice one more drive, with the capacity you specified (a little less, because the system reserves some space for its own housekeeping). Give the new drive a name: right-click it and choose "New Simple Volume".

Click Next through the wizard; it will report the drive's capacity along the way.

Step 6: Specify the letter under which the drive should appear. Here I chose drive F: - and no, I did not pick "F" because it stands for "failure"; it comes from the word

Step 7: Name your drive whatever you like, for example "Informatics", my department's name, which I use because I keep a fair amount of data and will be adding several more projects. Then just click Next.

Result

When you open Explorer you will find the newly added drive, and from now on you can file your work there with complete peace of mind.

Finally: you now have one more drive, with the capacity and name you chose, to protect you from loss should the machine die for no known reason, and you can keep creating further drives. It is one more "risk" for which organizations ought to publicize ways of backing up d…
              The GOOG->MSFT Exodus: Working at Google vs. Working at Microsoft        

    Recently I’ve been bumping into more and more people who’ve either left Google to come to Microsoft or got offers from both companies and picked Microsoft over Google. I believe this is part of a larger trend, especially since I’ve seen lots of people who left the company for “greener pastures” return in the past year (at least 8 people I know personally have rejoined). However, in this blog post I’ll stick to talking about people who’ve chosen Microsoft over Google.

    First of all there’s the post by Sergey Solyanik entitled Back to Microsoft, where he primarily gripes about the culture and lack of career development at Google. Some key excerpts:

    Last week I left Google to go back to Microsoft, where I started this Monday (and so not surprisingly, I was too busy to blog about it)
    …
    So why did I leave?

    There are many things about Google that are not great, and merit improvement. There are plenty of silly politics, underperformance, inefficiencies and ineffectiveness, and things that are plain stupid. I will not write about these things here because they are immaterial. I did not leave because of them. No company has achieved the status of the perfect workplace, and no one ever will.

    I left because Microsoft turned out to be the right place for me.
    …
    Google software business is divided between producing the "eye candy" - web properties that are designed to amuse and attract people - and the infrastructure required to support them. Some of the web properties are useful (some extremely useful - search), but most of them primarily help people waste time online (blogger, youtube, orkut, etc)
    …
    This orientation towards cool, but not necessarily useful or essential, software really affects the way software engineering is done. Everything is pretty much run by engineering - PMs and testers are conspicuously absent from the process. While they do exist in theory, there are too few of them to matter.

    On one hand, there are beneficial effects - it is easy to ship software quickly…On the other hand, I was using Google software - a lot of it - in the last year, and slick as it is, there's just too much of it that is regularly broken. It seems like every week 10% of all the features are broken in one or the other browser. And it's a different 10% every week - the old bugs are getting fixed, the new ones introduced. This across Blogger, Gmail, Google Docs, Maps, and more
    …
    The culture part is very important here - you can spend more time fixing bugs, you can introduce processes to improve things, but it is very, very hard to change the culture. And the culture at Google values "coolness" tremendously, and the quality of service not as much. At least in the places where I worked.
    …
    The second reason I left Google was because I realized that I am not excited by the individual contributor role any more, and I don't want to become a manager at Google.

    The Google Manager is a very interesting phenomenon. On one hand, they usually have a LOT of people from different businesses reporting to them, and are perennially very busy.

    On the other hand, in my year at Google, I could not figure out what it was they were doing. The better manager that I had collected feedback from my peers and gave it to me. There was no other (observable by me) impact on Google. The worse manager that I had did not do even that, so for me as a manager he was a complete no-op. I asked quite a few other engineers, from senior to senior staff levels, who had spent far more time at Google than I, and they didn't know either. I am not making this up!

    Sergey isn’t the only senior engineer I know who has contributed significantly to Google projects and then decided Microsoft was a better fit for him. Danny Thorpe, who worked on Google Gears, is back at Microsoft for his second stint, working on developer technologies related to Windows Live. These aren’t the only folks I’ve seen make the switch from the big G to the b0rg; they’re just the ones with blogs I can point at.

    Unsurprisingly, the fact that Google isn’t a good place for senior developers is also becoming evident in their interview process. Take this post from Svetlin Nakov entitled Rejected a Program Manager Position at Microsoft Dublin - My Successful Interview at Microsoft, where he concludes:

    My Experience at Interviews with Microsoft and Google

    A few months ago I interviewed for a software engineer position at Google Zurich. If I had to compare Microsoft and Google, I would put it briefly: Google sux! Here are my reasons:

    1) The Google interview was not professional. It was like an Olympiad in Informatics. Google asked me only about algorithms and data structures, nothing about software technologies and software engineering. It was obvious that they did not care that I had 12 years of software engineering experience; they just ignored it. The only thing Google wants to know about its candidates is their algorithms and analytical thinking skills. Nothing about technology, nothing about engineering.

    2) Google employs everybody as a junior developer, ignoring their existing experience. It is nice to work at Google if it is your first job, really nice, but if you have 12 years of experience with lots of languages, technologies and platforms, at lots of senior positions, you should expect a higher position at Google, right?

    3) Microsoft has a really good interview process. People working at Microsoft are really very smart and skillful. Their process is far ahead of Google's. Their quality of development is far ahead of Google's. Their management is ahead of Google's, and their recruitment is ahead of Google's.

    Microsoft is Better Place to Work than Google

    At my interviews I asked my interviewers at both Microsoft and Google a lot about the development process, engineering and technologies. I also asked my colleagues working at these companies. I found for myself that Microsoft is better organized, managed and structured. Microsoft does software development in a more professional way than Google. Their engineers are better. Their development process is better. Their products are better. Their technologies are better. Their interviews are better. Google was like a kindergarten: young and not experienced enough people, an office full of fun and entertainment, interviews typical for junior people, and a lack of tradition in developing high-quality software products.

    Based on my observations, I have a theory that Google’s big problem is that the company hasn’t realized it isn’t a startup anymore. This disconnect between the company’s status and its perception of itself manifests in a number of ways:

    1. Startups don’t have a career path for their employees. Does anyone at Facebook know what they want to be in five years besides rich? However, once riches are no longer guaranteed and the stock isn’t firing on all cylinders (GOOG is underperforming both the NASDAQ and the Dow Jones Industrial Average this year), you need a better career plan for your employees, one that goes beyond “free lunches and all the foosball you can handle”.

    2. There is no legacy code at a startup. When your code base is young, it isn’t a big deal to have developers checking in new features after an overnight coding fit powered by caffeine and pizza. For the most part, the code base isn’t large enough or interdependent enough for one change to cause issues. However, it is practically a law of software development that the older your code gets, the more lines of code it accumulates and the more closely coupled your modules become. This means changing things in one part of the code can have adverse effects in another.

      As all organizations mature, they tend to add PROCESS. These processes exist to insulate companies from the mistakes that occur once a company reaches a certain size and can no longer trust its employees to always do the right thing. Requiring code reviews, design specifications, black-box, white-box, and unit testing, usability studies, threat models, and so on is exactly the kind of overhead that differentiates a mature software development shop from a “fly by the seat of your pants” startup. However, once you’ve been through enough fire drills, some of those processes don’t sound as bad as they once did. This is why senior developers value them while junior developers don’t; the latter haven’t been around the block enough.

    3. There is less politics at a startup. In any activity where humans have to come together collaboratively to achieve a goal, there will always be people with different agendas. The more people you add to the mix, the more agendas you have to contend with. Doing things by consensus is OK when you have to get consensus from two or three people who sit in the same hallway as you. It’s a totally different ball game when you need to gain it from lots of people across a diverse company, working on different projects in different regions of the world, who have different perspectives on how to solve your problems. At Google, even hiring an undergraduate candidate has to go through several layers of committees, which means hiring managers need to possess some political savvy if they want to get their candidates approved. The founders of Dodgeball quit Google after their startup was acquired, once they realized that they didn’t have the political savvy to get resources allocated to their project.

    The fact that Google is having problems retaining employees isn't news; Fortune wrote an article about it just a few months ago. The technology press makes it seem like people are ditching Google for hot startups like FriendFeed and Facebook. The truth, however, is more nuanced than that: now that Google is just another big software company, lots of people are comparing it to other big software companies like Microsoft and finding it lacking.

    Now Playing: Queen - Under Pressure (feat. David Bowie)



          Computer Crimes        
    COMPUTER CRIMES

    DEFINITION

    "Computer crimes" are all unlawful acts, punishable under criminal law, that make improper use of any computing medium.
    Computer crime involves criminal activity that countries at first tried to fit into traditional offense categories such as theft, fraud, forgery, damages, swindling, sabotage, etc.; it must be stressed, however, that it is the misuse of computers that has created the need for regulation by the law.

    DEFINITION UNDER THE COLOMBIAN PENAL CODE

    "Computer crime" may cover both conduct directed at computing tools themselves, whether programs, computers, etc., and conduct that uses those means to harm other legally protected interests such as privacy, economic assets, public trust, etc.

    CLASSIFICATION

    Computer crimes have been classified according to two criteria:

    • As an instrument or means.

    • As an end or target.


    AS AN INSTRUMENT OR MEANS

    This category comprises:

    • Criminal conduct that uses computers as the method, means, or symbol in committing the offense.
    • Criminal conduct in which individuals use electronic methods to reach an unlawful result.
    • Criminal conduct in which a computer is used as the means or symbol for committing a crime.

    AS AN END OR TARGET

    • This category covers criminal conduct directed against the computer, its accessories, or its programs as a physical entity.
    • It also covers criminal conduct directed against the physical machine or its components with the aim of damaging it.



    PARTIES INVOLVED

    THE PERPETRATOR (ACTIVE SUBJECT)
    Perpetrators are skilled at operating computer systems, and their jobs generally place them in strategic positions where sensitive information is handled; alternatively, they are simply adept with computerized systems, even when, in many cases, their work does not facilitate committing this type of crime.

    THE VICTIM (PASSIVE SUBJECT)
    First we must note that the passive subject, or victim, is the entity upon which the perpetrator's act or omission falls. In the case of "computer crimes", it is through the victim that we come to know the various offenses computer criminals commit, which are generally discovered case by case owing to ignorance of the criminals' modus operandi.

    TYPES OF CRIMES
    Computer sabotage.
    Conduct aimed at causing physical damage.
    Conduct aimed at causing logical damage.
    Logic bombs.
    Fraud by means of computers.
    Electronic fraud.
    Phishing or sniffing of passwords.
    Stratagems.
    Gambling.
    Fraud.
    Money laundering.
    Illegal copying of software and computer espionage.
    Copyright infringement of databases.
    Unauthorized use of third-party computer systems.
    Unauthorized access.

    COMPUTER CRIMES AGAINST PRIVACY
    Interception of e-mail.
    Child pornography.

    SERIOUS CRIMINAL ACTIVITIES

    Terrorism.
    Drug trafficking.
    Espionage.
    Industrial espionage.
    Unethical commercial uses.
    Parasitic acts.

    The computer crime auditor
    • The IT auditor's role is based solely on verifying controls, assessing the risk of fraud, and designing examinations that are appropriate and should reasonably detect:
    • Irregularities that may have an impact on the audited area or on the whole organization.
    • Weaknesses in internal controls that could result in failing to prevent or detect irregularities.
    • Detection of crimes.
    • Determining whether the situation really constitutes a crime; establishing clear and precise evidence.
    • Identifying the security gaps that allowed the crime; reporting to the appropriate unit within the organization.
    • Notifying regulatory authorities when legally required.


    Colombian Legislation

    Toward a concept of "computer crime":

    The Colombian Penal Code, enacted by Law 599 of 2000, makes no express reference to computer crimes as such; nevertheless, several of its provisions cover conduct that could be understood as falling within the concept that legal doctrine has developed on this subject.

    • In Colombia, with the enactment of Law 527 of 1999 and its implementing Decree 1747 of 2000, data messages were granted evidentiary force as documents.
    Article 10 of Law 527/99 provides:
    "Data messages shall be admissible as evidence, and their evidentiary force is that granted by the provisions of Chapter VIII of Title XIII, Section Three, Book Two of the Code of Civil Procedure."

    • The Constitutional Court, in ruling C-662 of June 8, 2000, with Justice Fabio Morón Díaz as rapporteur, made the following observations when ruling on the constitutionality of Law 527 of 1999:

    (...) "A data message as such must receive the same treatment as documents recorded on paper; that is, it must be given the same legal effect, since a data message meets the same criteria as a document." (...)

    International Legislation

    • Germany. To confront computer-related crime, the Second Law against Economic Crime was adopted on May 15, 1986, covering:
    data espionage, computer fraud, forgery of evidentiary data, and misuse of checks or credit cards.
    • Austria. The Penal Code reform law of December 22, 1987, covers the following offenses:
    destruction of data and computer fraud.
    • Chile. Law 19223 provides that destroying or disabling an information processing system can be punished with one and a half to five years in prison.
    • China. The Chinese Supreme Court will punish Internet espionage with the death penalty, as announced on January 23, 2001. The court identifies three areas subject to the strictest surveillance: high-security secrets, state secrets, and information that would seriously damage state security and interests.
    The following are considered illegal activities:
    infiltration of documents related to the State, defense, or cutting-edge technologies, and the spreading of computer viruses.
    • Spain. The Organic Law on Personal Data Protection (LOPDCP), approved on December 15, 1999, sanctions in detail the obtaining or violation of secrets, espionage, disclosure of private data, electronic fraud, malicious or military hacking, phreaking, the introduction of viruses, etc.
    • United States. The Federal Computer Abuse Act, adopted in 1994, regulates viruses: modifying, destroying, copying, or transmitting data, or altering the normal operation of computers, systems, or networks, is considered a crime. The FCIC (Federal Computer Investigation Committee) is the most important and influential organization with respect to computer crimes.
    Also relevant is the International Association of Computer Investigative Specialists (IACIS).
    • France. Law 88/19 of January 5, 1988, on computer fraud covers fraudulent access to a data processing system, computer sabotage, destruction of data, and forgery of computerized documents.
    • The Netherlands. Since March 1, 1993, when the Computer Crimes Act entered into force, computer offenses can be punished with prison terms of six months to fifteen years.
    • England. The Computer Misuse Act took effect in August 1990; illegal access to a computer carries up to six months in prison or a fine of up to two thousand pounds sterling.
              More people in EU countries are working in IT and telecoms. Eurostat data: numbers are growing in Italy too, though still below the EU average (INFOGRAPHICS)        
    In 2016, the number of specialists employed in the European Union in the field of information technology and [...] reached 8.2 million people.
              #207 "I don't want someone who arrives at 9:00 and leaves at 18:00"        

    #94 I worked at a startup like that, and I love my job and consider myself very good at what I do...

    But in the end, when you're slaving away like a champion and none of the rewards come your way...

    I don't know...

    Here's a piece of advice: if you start a tech startup and have to hire developers from day one, hire good ones and get them invested by making them partners in the company as soon as you see they're worth it, with both the profits and the responsibilities. Otherwise they'll end up burning out.

    If I ever start a company, I'm clear on that much at least... You can't expect someone to give 200%, which is what's needed at the beginning, for just a salary...

    #202 "If they give me shares in the company and make me a participant in the profits (as many companies in the USA do), I'll probably feel invested..." What nerve! If they give you shares, you are part of the company. If even then you have doubts about how invested you'd be, we're in trouble.

    I think you've misunderstood him: he's saying that if they give him a share of the profits, he'll decide whether or not to feel invested (he'll accept that job or not depending on where he is in life... some people prefer living with less commitment and lower earnings). What can't happen is being asked to commit like an owner when, if things go well, they'll only go well for the owner...

    » author: dreierfahrer


              A RIDE IN THE TICO by CORINA LUCIA COSTEA        
    Don't laugh. Today I had one of the most beautiful experiences of my life: a ride in a Tico (an automobile with pretensions of being a car, as a friend once put it). Not long ago, a bearded gentleman appeared at our high school: sober, bespectacled, thin and tall, always deep in thought... a reincarnation of Father Arsenie Boca (forgive me the comparison, but if you look at this man and at a photograph of the well-known confessor, you will be struck by the resemblance). I learned that he had come with a project... If at first the children regarded him skeptically, and he would come away exhausted from encounters with their worldly questions, I now see some of them accompanying him on the stairs and talking with him even during breaks. I don't know how long the project will last... To me this man was an enigma... beyond his ecumenical appearance, his eyes seemed familiar... from where... from where? He had been a student at our school, the computer science teacher told me. He studied at the Polytechnic first and then Theology... he also spent time on the Holy Mountain... not from there... not from there... I was troubled by not knowing whether I knew him from some other circumstance, some other stage of our lives, or whether my mind was simply inventing... stories. I ...
              Bioinformatician II - University of Massachusetts Medical School - Worcester, MA        
    We are looking for enthusiastic bioinformaticians and computational biologists to become part of the team and collaborate with the computational community both...
    From University of Massachusetts Medical School - Fri, 21 Jul 2017 18:27:29 GMT - View all Worcester, MA jobs
              Research Scientist, Bioinformatics - George Washington University - Foggy Bottom, MD        
    We are seeking a highly motivated, skilled, and collaborative computational biologist to contribute to multiple NIH -funded microbiome research projects....
    From George Washington University - Fri, 16 Jun 2017 17:12:57 GMT - View all Foggy Bottom, MD jobs
              Biology: Assistant Professor of Biology - University of Richmond - Richmond, VA        
    We seek a biologist who has expertise in analysis of big data, modeling, bioinformatics, genomics/transcriptomics, biostatistics, or other quantitative and/or...
    From University of Richmond - Thu, 06 Jul 2017 23:17:18 GMT - View all Richmond, VA jobs
              Data Scientist/Quantitative Analyst, Engineering - Google - Mountain View, CA        
    (e.g., as a statistician / data scientist / computational biologist / bioinformatician). 4 years of relevant work experience (e.g., as a statistician /...
    From Google - Sat, 05 Aug 2017 09:55:57 GMT - View all Mountain View, CA jobs
              Why Bioinformatics Pros Dig Mathematica        
    During discussions at the International Mathematica User Conference 2009 with bioinformaticians using Mathematica, I learned a lot of very important things—like why protein folding isn’t something you can order at the dry cleaner. I also learned that a lot of people seriously dig Mathematica‘s modeling and automatic interface construction capabilities, which make it easy for [...]
               Exploring blogs as a collaborative tool for advancing nursing informaticians: Professional development         
    Karl, O., Murray, P. and Ward, R. (2006) Exploring blogs as a collaborative tool for advancing nursing informaticians: Professional development. In: 16th Summer Institute in Nursing Informatics (Advancing Clinical Practice Through Nursing Informatics), University of Maryland, Baltimore, USA, 19th July, 2006. Available from: http://eprints.uwe.ac.uk/3801
              Eckerson Group Profiles Top Eight Innovations In Data Science        

    Alpine Data, DataRobot, Domino Data Lab, FICO, Informatica, Nutonian, RapidMiner, and SAS are selected for their innovations in data science

    (PRWeb August 08, 2017)

    Read the full story at http://www.prweb.com/releases/2017/08/prweb14560218.htm


              Software Test Lead Engineer - Cortex - Austin, TX        
    Use API Back-End white box, Agile Methodology, Java, Perl, Selenium Web driver, Jmeter, Load Runner, IBM Appscan, SOAPUI, Informatica, Oracle, TeraData, Unix...
    From Cortex - Sat, 05 Aug 2017 06:37:48 GMT - View all Austin, TX jobs
              [urls] Top 10 Urls: 25 July-1 August 2004        
    Monday, August 2, 2004
    Dateline: China
     
    The following is a sampling of my "urls" for the past eight days.  By signing up with Furl (it's free), anyone can subscribe to an e-mail feed of ALL my urls (about 150-350 per week) -- AND limit by subject (e.g., ITO) and/or rating (e.g., articles rated "Very Good" or "Excellent").  It's also possible to receive new urls as an RSS feed.  However, if you'd like to receive a daily feed of my urls but do NOT want to sign up with Furl, I can manually add your name to my daily Furl distribution list.  (And if you want off, I'll promptly remove your e-mail address.)
     
    Best new selections (in no particular order):
     
    * A Web Services Choreography Scenario for Interoperating Bioinformatics Applications (SUPERB, covering all the bases; might serve as the foundation for a blog posting)
    * ICC Report: Software Focus, June 2004 issue (if you're not familiar with this monthly newsletter from Red Herring, it's worth scanning; this particular issue is their "annual" on enterprise software)
    * Northeast Asia: Cultural Influences on the U.S. National Security Strategy (this might serve as the basis for a blog posting; EXCELLENT, broad-based review of cultural issues)
    * Economics of an Information Intermediary with Aggregation Benefits (think B2B and e-markets, although the implications are wide-ranging)
    * How to Increase Your Return on Your Innovation Investment (provides a link to an article published in the current issue of Harvard Business Review; good food for thought)
    * Why Mobile Services Fail (insights from Howard Rheingold)
    * Anything That Promotes ebXML Is Good (lots of good links; I'm an ebXML advocate, so the tone of this article is one which I fully support)
     
    Examples of urls that didn't make my "Top Ten List":
     
    > RightNow, Sierra Atlantic Announce Partnership to Deliver Enterprise CRM Integration (a trend in the making; I've talked about this quite a bit, i.e., systems integrators working with utility computing vendors)
     
    and many, many more ...
     
    Cheers,
     
    David Scott Lewis
    President & Principal Analyst
    IT E-Strategies, Inc.
    Menlo Park, CA & Qingdao, China
    e-mail: click on http://tinyurl.com/3mbzq (temporary, until Gmail resolves their problems; I haven't been able to access my Gmail messages for the past week)
     
    http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
    http://tinyurl.com/2r3pa (access to blog content archives in China)
    http://tinyurl.com/2azkh (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW)
    http://tinyurl.com/2hg2e (AvantGo channel)
     
     
    To automatically subscribe click on http://tinyurl.com/388yf .
     

              [news] Cognizant & the "Intelligent Internet" + a Peek at 2005 IT Budgets (Part 1 of 2)        
    Thursday, July 29, 2004
    Dateline: China
     
    In a rush to catch a flight, so I'll make this brief.
     
    The subject of this message sounds like there might be some sort of causal relationship between Cognizant and Internet futures, but I'm really referring to two separate issues (and two different articles).  Although the article which is the basis for my Cognizant spin made my secondary urls listing in my last posting, it's worth reconsidering, especially for their take on China.
     
    The article on Cognizant was a Q&A session with one of their senior execs, Francisco D'Souza (see http://tinyurl.com/3lpyu ).  Some of the more interesting points (in no particular order):  For one thing, Cognizant invests about 25-26% in SG&A, almost twice that of their competitors.  (As a marketing and bizdev guy, kudos to Cognizant for their foresight!)  "This has helped us (i.e., Cognizant) build a formidable sales and marketing infrastructure and invest in local practice leaders, client partners, relationship managers and so on."  They also took an early lead in verticalization, "which is another reason for richer customer experience."  A key issue, of course, is that Cognizant has already started its expansion outside India into other low-cost locations like China.  Kudos again!  (They have a foray into Europe, but it's minor:  They also view the "Golden Triangle" as the best strategy.)  They also believe that it will be "quite normal" for them to be delivering to customers from multiple locations in the world.
     
    Here's a great quote (I guess neoIT didn't see it):  "The initial feedback is that clients are interested in piloting work in China.  Our experience on the ground in China is that we've been able to find talented individuals with reasonable English-language capabilities."  Another comment (playing off my "Golden Triangle" theme):  "Currently, only India and China represent the potential to scale up volumes.  This may well continue for another decade." 
     
    On a related note, an exec (K.S. Suryaprakash) with Infosys was recently quoted as saying, "China is the only country which can offer cost scales comparable to India."  In their Shanghai operation, they have "seven or eight Indians" among their staff of 25 -- and they use Donald Duck posters to help teach English!  See http://tinyurl.com/5wyga .
     
    Sorry, but I really tried to include the "Intelligent Internet" and the 2005 IT budgets sneak preview in this posting; alas, I'll complete this in Dalian.  Got to run!!  (I already wrote what follows last night.)
     
    The International Software and Information Services Outsourcing Business Development Forum (in Dalian later TODAY)
     
    Off to Dalian to give a presentation on ITO market opportunities, primarily from an enterprise software perspective.  What's hot, what's emerging, IT e-strategies (hmmm ... sounds familiar), the usual stuff.  A bit on "why China," but this is being covered by several other speakers; none of the other speakers seem to be focused on market opportunities from an apps perspective.  In case anyone is wondering, I'm one of the invited speakers at this forum.
     
    Next: Collaboration Technologies: The Great Hope? (unless something at CISIS captures my attention).  (Oops ... after I finish this posting.)
     
    What I'm Reading:  Besides four articles and papers on collaboration technologies, I'm reading the special section on P2P-based data management in the July issue of IEEE Transactions on Knowledge and Data Engineering (a couple of papers from this issue have already been urled; see my urls blog), a paper on the evolution of IR to information interaction (I hope Google reads it), and a paper on web services for bioinformatics (which is a SUPERB paper, also already urled).  The latter paper may make it into a posting, so don't expect to see it on the "Top Ten" list for this week (but I might include it on the secondary list).
     
    Cheers,
     
    David Scott Lewis
    President & Principal Analyst
    IT E-Strategies, Inc.
    Menlo Park, CA & Qingdao, China
     
    http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
    http://tinyurl.com/2r3pa (access to blog content archives in China)
     

              Comment on The Implementation of Molecular Evolution for the Masses by Oyun        
    Thanks; a couple of years ago there was talk in the bioblogosphere about getting the general public interested in bioinformatics and molecular evolution.
              Orent gets it (mostly) right        
    I've been tough on journalist Wendy Orent here because I thought her widely read op-ed pieces on bird flu were wrong-headed, inaccurate and unhelpful in getting people ready for a possible pandemic. Yesterday she had another op-ed in the LA Times and I'm glad to say it's on the right track. Not that I agree with everything she says, but it's informative and helpful to readers who want to understand some of the controversies. Here's the lede:
    There's a lot of bird flu virus out there. Despite encouraging news from Vietnam and Thailand, neither of which has reported any bird or human cases of the lethal H5N1 strain this year, the situation in Indonesia continues to worsen. Eight members of a family contracted the disease, and seven of them died this month. The timing suggests person-to-person transmission. Although not the first instance of such transmission, it's the single largest cluster that has been seen, according to virologist Earl Brown of the University of Ottawa. Indonesia appears to lack the resources to combat the disease.

    The virus is also active in Egypt and has spread to Israel, Jordan and the territories where Palestinians live. Africa has a wide belt of infection. With the disease spread over so much of the world, more people in contact with sick birds means more opportunities for humans to catch the virus. This appears to be how human influenza pandemics have begun — through human contact with sick birds.

    But the factors that set off a pandemic remain unknown. No one has ever tracked the evolution of a new pandemic. All we have seen — in 1918, 1957 and 1968 — is the aftermath of that evolution. Still, we are told that all it would take for H5N1 to become a pandemic would be for the virus to mutate so it could spread in a sustained way from person to person. (LA Times)
    The rest is a discussion of what she and others think this talk of mutation means. I might disagree in details, but essentially I agree we don't know much about what it would take. Most scientists don't believe a chicken virus turns into an easily transmissible human virus in one step (although it's possible). But Orent goes further. Her view is that natural selection is the key to the virus's evolution and it can't happen suddenly, requiring instead a period of adaptation in mammalian and probably human hosts.

    This isn't an unreasonable point of view, and this adaptation might be occurring now in Indonesia and elsewhere. But I think it's wrong to believe it is the only point of view. Here are a couple of other possibilities.

    Whatever genetic changes are needed for transmissibility in humans may be traveling along with some that are useful for the virus in birds. Selection pressure doesn't explain everything, as we see in the sudden emergence of amantadine resistance in virtually all flu virus in the US. Amantadine is not used much in the US, so this isn't selection. Most likely it is a "hitchhiker" effect, with the amino acid change conferring amantadine resistance linked to another genetic feature that conveys some selective advantage (a point made by bioinformatician EC Holmes).

    Here's another possibility. It may take multiple genetic switches to be flipped to produce enhanced transmissibility in humans (let's say ten), but eight or nine are already flipped, leaving only one or two to go. Since we are largely ignorant of what it takes, we are also ignorant of how many switches are flipped already.

    Here's yet another point. Genetic changes that enhance transmissibility don't have to confer selective advantages or be linked to them. Without such an advantage the virus will eventually be replaced by another, more fit one, but the transient period could be very nasty.

    None of this is a repudiation of Darwinism. It is consistent with the current neo-Darwinian synthesis ushered in by Sewall Wright, Ernst Mayr and others in the 1930s and 1940s.

    So it's good to see Ms. Orent on board. A flu-denier has now become a useful source of information. I hope she's right about her viral evolution scenario. But the distinct possibility she isn't is good enough reason to prepare.

              Topic 6.5 - The concept of diagnostic coding        
    Coding is defined as the act of replacing a symbol with an ambiguous meaning by a more precise one; normally this is done by encoding a noun phrase as an alphanumeric string. Using alphanumeric strings resolves the problem of ambiguity and is better suited to computing.


    6.5.1 - Basic steps in coding

    The basic steps to follow are these:
    1. First, consult Volume 1, the alphabetic index of diseases. This index is ordered by processes, which may be expressed by their name, by adjectives or by eponyms.
    2. Second, locate the selected code in Volume 2 of the ICD-9-CM. Be guided by any exclusion notes, which govern the use of a code for a particular diagnosis, condition or disease.
    3. Read and follow the conventions used in the tabular list. Questions about the use and interpretation of the International Classification of Diseases can be referred to the coding units of the hospitals, of the regional health department (consellería de sanidade) or of the Ministry of Health and Consumer Affairs.
    Because of the many questions that arise, the Xunta de Galicia, like the other autonomous communities, produces a diagnostic coding bulletin every six months.
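The two-step lookup described above can be sketched in a few lines of Python. The index and tabular entries below are a tiny illustrative sample (401.9 and 250.00 are well-known ICD-9-CM codes, but the titles and exclusion notes shown are simplified for the example):

```python
# Step 1 uses the alphabetic index (term -> candidate code);
# step 2 verifies the code in the tabular list and surfaces its
# exclusion notes so the coder can check them against the case.
ALPHABETIC_INDEX = {
    "essential hypertension": "401.9",
    "diabetes mellitus": "250.00",
}

TABULAR_LIST = {
    "401.9": {"title": "Essential hypertension, unspecified",
              "excludes": ["hypertension involving vessels of brain"]},
    "250.00": {"title": "Diabetes mellitus without complication",
               "excludes": ["gestational diabetes"]},
}

def code_diagnosis(term):
    """Return (code, title, exclusion notes) for a diagnostic term,
    or None if the term is not in the alphabetic index."""
    code = ALPHABETIC_INDEX.get(term.lower())
    if code is None:
        return None
    entry = TABULAR_LIST[code]
    return code, entry["title"], entry["excludes"]

print(code_diagnosis("Essential hypertension"))
```

A real coding unit works the same way, only with the full printed or electronic volumes instead of two small dictionaries.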


    6.5.2 - Diagnostic coding bulletins

    Diagnostic coding bulletins are technical documents whose purpose is to support the professionals who work in coding units. The questions submitted to them have these characteristics:
    • They preserve the patient's confidentiality
    • They refer to concrete cases of diagnostic coding
    • They must be adequately documented
    These questions are analyzed and answered, and the answers make up the new bulletins.


    6.5.3 - Coding systems: nomenclatures, classifications and groupings

    Coding systems must represent concepts:
    • Concepts are the most elementary units of thought
    • Forming a concept means discerning a set of properties common to a series of objects, generating an abstract mental class that represents each of the particular instances
    Thus, for example, determining a series of attributes of people who suffer health problems allows us to build the concepts we know as specific diseases. These concepts are essentially non-transferable, since they are really mental phenomena; the only way to make them public is through the use of symbols.

    One reason coding matters is the arrival of computer systems applied to health care. Their natural language is numeric, so they handle numbers very efficiently, but they do not have the same capacity to perform operations on, or representations of, non-numeric concepts such as medical diagnoses. Relations between non-numeric concepts must be preloaded.
    Coding systems, because of their precision, are therefore natural candidates for this preloading, which the computer will then manipulate.

    There are two main types of coding system:
    • Nomenclatures (or terminologies), which aim at the precise identification of a concept
    • Classifications, which group similar concepts
    Nomenclature and classification are related: for each item of a classification it is possible to find one or more nomenclature items.
    The choice of a coding system suited to the needs of a specific user or organization depends on the medical specialty and the complexity of the care system, but factors such as the ultimate purpose of the coding and the means available to implement it are just as important, or more so.
    Modern medicine is criticized for the great fragmentation of its care process among multiple specialists and specialties.
    • Most coding systems belong to the category of classifications, which are aimed at populations
    • When analyzing individuals, nomenclatures are the more useful coding systems
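The one-to-many relation between a classification item and nomenclature items can be sketched as a small mapping. All identifiers below are made up for illustration; they only mimic the shape of ICD-style rubrics and SNOMED-style concepts:

```python
# One classification rubric (coarse, good for statistics) groups
# several precise nomenclature concepts (good for individual records).
classification_to_nomenclature = {
    "ICD:pneumonia": ["SNM:bacterial-pneumonia",
                      "SNM:viral-pneumonia",
                      "SNM:aspiration-pneumonia"],
    "ICD:essential-hypertension": ["SNM:essential-hypertension"],
}

for rubric, concepts in classification_to_nomenclature.items():
    # A classification loses detail on purpose: many precise concepts
    # collapse into a single statistical category.
    print(rubric, "groups", len(concepts), "nomenclature concept(s)")
```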
    6.5.3.1 - Nomenclatures

    • SNOMED: developed by the College of American Pathologists, it covers diseases, signs, symptoms, drugs, living organisms, etc. It codes both concepts and the terms that express them, linked to one another.
    • In the United Kingdom, the nomenclature used by the National Health Service is Clinical Terms, which, together with SNOMED, are historically the best-known and most complete nomenclatures.

    6.5.3.2 - Classifications
    1. International Classification of Diseases (ICD): the best-known and most widely used coding system in the world, since all WHO member countries are required to report their health statistics on the basis of it. It focuses on diseases and health-related problems, and its main function is to produce national health statistics. Its first edition dates from the end of the 19th century and it is currently in its 10th edition; there is also a clinical modification developed in the USA and adapted by other countries, whose Spanish version is maintained by the Ministry of Health and Consumer Affairs.
    2. International Classification of Primary Care: the most widespread among the primary care and family medicine communities, with an emphasis on diagnoses. This system is maintained and developed by WONCA (the World Organization of National Colleges, Academies and Academic Associations of General Practitioners) and has a Spanish version.
    3. Current Procedural Terminology: CPT codes cover diagnostic and therapeutic practices and procedures. The system is developed and maintained by the AMA (American Medical Association).

    6.5.3.3 - Groupings

    Notable among the groupings are the diagnosis-related groups (DRGs), which represent classes of practices and procedures grouped by type of treatment, age group and presumably similar cost. This coding system derives from work intended to test a statistical technique for studying medical costs in the USA; it is used to pay for health services and is updated periodically.


    6.5.3.4 - Selecting a coding system

    Any project for coding medical information must presuppose the possibility of mass data storage, with the possibility of retrieving, reorganizing, distributing and analyzing the data.
    As noted above, coding implies replacing natural language with a more compact, and necessarily less expressive, symbolic system. Nowadays mass storage, communication and data analysis all rely on computing, so the level of available computerization is a primary consideration.

    The selection of a coding system depends on weighing a series of questions:
    • The level of computerization available
    • The information to be coded
    • Who will use the information derived from the coding
    • The existence of optional or mandatory codes
    • The human resources available to carry out the coding task
    • The cost of the project

              Topic 5.7 - Word processors        
    A word processor is a computer application used to create and modify text documents on a computer. It normally forms part of a suite or package that includes other programs for office tasks, with the possibility of sharing data between the applications. Some word processors are:
    - Microsoft Word, part of the Microsoft Office suite
    - Writer, part of the OpenOffice package and a free alternative to Microsoft Word

    Nor should we forget the new generation of online applications.


    5.7.1 - What a word processor can do

    What can we do with a word processor?
    • Work with different typefaces and font sizes
    • Format paragraphs
    • Change line spacing
    • Create documents with one or several columns
    • Insert tables and images
    • Copy and paste text
    • Set indents and tab stops
    • Check spelling and also grammar
    • Suggest synonyms
    • Preview how the document will look
    • Make labels
    • Do mail merges


    Example of a word processor: Word

              Topic 5.6 - The Internet: uses, browsers, searching for information        
    5.6.1 - A brief history
    The Internet is a global system of interconnected networks that exchange data using the TCP/IP protocol. This network consists of millions of interconnected academic, business, government and military networks.

    Before the Internet existed, there were other precursor networks:
    • ARPANET, created in the late 1960s for military purposes
    • IPSS, the first international packet-switching network, in the mid-1970s
    • TCP/IP, created by Vinton Cerf and Robert Kahn in the early 1970s and improved until, in 1983, all of ARPANET migrated to it
    • NSFNET, in the mid-1980s, a network that interconnected the American universities on the basis of this system
    The Internet as we know it today began to operate and open up to the world in 1988, as companies such as Cisco and Juniper began developing the routers that made it possible to interconnect networks. The network became widely known in the early 1990s thanks to the "World Wide Web" developed at CERN in Switzerland; the web was invented there by the Englishman Tim Berners-Lee in 1989.

    One of the first browsers was Mosaic, in 1993, followed by others such as Netscape and Internet Explorer. Since then the web is estimated to have grown by 100% a year to become what it is today.

    More detailed information on the history and evolution of the Internet can be found at this link.

    5.6.2 - Uses of the network
    The most frequent uses of the network are:
    • E-mail: allows text messages, possibly with attached files, to be sent between users over the Internet. Examples are Gmail and Hotmail
    • WWW: based on documents that can contain text, images and links to other documents; these links are called hyperlinks, and the addresses they lead to, URLs. Web pages can include multimedia content of all kinds, including games and applications
    The web:
    • It is based on a protocol called HTTP, which allows requests to be made and responses received, and also supports parameters for forms
    • Web pages are written in the HTML language, which controls the graphical interface of the page itself
    • The programs used to access web pages are called browsers; examples are Internet Explorer and Firefox
    • Search engines: used to find web pages containing content that interests us; the best known are Google and Yahoo
    • Remote access: using programs such as LogMeIn or Remote Desktop it is possible to access our computer from any other location with Internet access
    • Webcams: with cameras connected to the computer it is possible to send images in real time and view them over the Internet
    • Telephony: hardware and software exist that allow voice communication with other users; all we need is a microphone and speakers, and if we also want to see the other person we will need a webcam, turning the call into a videoconference
    • Instant messaging: with programs such as Messenger or ICQ we can chat with users we have added who use the same program
    • Social networks: pages where people post information about themselves and can meet other users with similar profiles or keep in touch with people they already know; they can also be used professionally by companies, suppliers and customers. Examples are Tuenti and Facebook, and they are used more and more
    • Subscriptions: mechanisms such as RSS or Atom allow subscribing to a website so that whenever the page publishes new content the user receives a notification. This saves the user from having to check every so often whether a page has been updated
    • Podcasts: audio content about news, current affairs or anything imaginable, in a downloadable format so that users can download it and play it on their computers. More and more radio stations publish their programmes on the Internet as podcasts
    • Digital purchases: content such as books, records and films is increasingly available in digital format, doing away with the physical medium and letting us carry it and play it wherever we want on an external device
    • Cartography: programs such as Google Earth let us view any part of the planet thanks to satellites; the view is not real-time, but the images are updated every so often
    • P2P: these networks allow the exchange of files between two or more users and are often controversial, because they tend to be used to exchange illegal or copyrighted content
    • Web applications: computer programs that can be used inside web pages themselves, so we do not have to install the program on our hard disk
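The hyperlink/URL/HTTP mechanics described in the WWW bullets can be illustrated with Python's standard library. The URL below is a made-up example; the sketch only shows how a browser decomposes a URL and what the resulting HTTP request line looks like:

```python
from urllib.parse import urlparse

# A hyperlink's target is a URL; the browser splits it into parts.
url = "http://www.example.com/pages/index.html?lang=en"
parts = urlparse(url)

# Scheme, host and path identify the resource; the query carries
# form parameters, exactly as described above.
print(parts.scheme)   # http
print(parts.netloc)   # www.example.com
print(parts.path)     # /pages/index.html
print(parts.query)    # lang=en

# The request line a browser would send to the server for this URL:
request = f"GET {parts.path}?{parts.query} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"
```

The server's HTTP response would then carry back the HTML document that the browser renders.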
    Current trends
    1. Web 2.0: does not refer to any particular technology, but to websites in which the user takes a leading role. Examples are Wikipedia (an editable encyclopedia), eBay (auctions and trading), Flickr (photo sharing), del.icio.us (bookmarking pages of interest), YouTube (video uploads), Twitter (microblogging similar to SMS) and blogs
    2. The cloud: a set of initiatives by the main vendors in which software and content increasingly move to the network and cease to live on people's personal computers. This requires widespread Internet access and a significant cultural change, and brings great advantages in accessibility.
    3. Mobility: it is now possible to access the Internet from anywhere, at any time and with any kind of device. The number of users who access the Internet from their mobile phones keeps growing

              Topic 5.4 - Operating systems        
    An operating system is the set of computer programs that make effective use of the resources the machine has: the software manages the hardware and offers an interface to the user. The operating system is something like the intermediary between hardware and software, since without it the computer could not work properly.

    The main functions of an operating system are:
    1. User interface: makes it possible to load programs, access files and perform other tasks; interfaces may be graphical (with windows and images) or textual (terminal-based, showing only letters and characters)
    2. Resource management: administers the hardware resources, whether those of the machine itself or of connected devices. It must guarantee efficient use of resources and provide the mechanisms for several applications to run at once
    3. File management: the system has the means to control access to files and data; this apparently simple activity requires a series of internal mechanisms
    4. Task management: the system allows several processes to run at once, sharing out the processor time each one gets and allowing the active task to be replaced by one of higher priority
    5. Security
    6. Support
    A simple way to classify operating systems is as single-tasking or multitasking, according to the number of simultaneous processes they can run; another classification is single-user or multi-user, according to the number of people who can work at the same time.
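Task management as described in point 4 can be sketched with Python threads standing in for OS-scheduled tasks; the task names and workloads below are invented for illustration. The operating system (here, the interpreter's thread scheduler) decides when each task gets the processor:

```python
import threading

results = []
lock = threading.Lock()

def task(name, steps):
    # Each task does some work; the scheduler interleaves the tasks
    # and decides when each one runs on the processor.
    total = sum(range(steps))
    with lock:
        results.append((name, total))

t1 = threading.Thread(target=task, args=("report", 100))
t2 = threading.Thread(target=task, args=("backup", 50))
t1.start(); t2.start()
t1.join(); t2.join()

# Both tasks complete; the order they finish in is up to the scheduler.
print(dict(results))
```

On a single-tasking system, by contrast, the second job could not even start until the first had finished.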

    The most widely used operating systems are Windows Vista, Windows XP, Mac OS X and Linux, among others.

    Explanatory image about operating systems
              Topic 5.3 - The concepts of hardware and software        
    The elements we find in a computer fall into two large groups: hardware and software.

    Hardware is the whole physical, tangible part of a computer, including electrical, electronic and electromechanical elements. Software, on the other hand, is the intangible part, comprising the logical components that perform specific tasks.


    5.3.1 - Elements

    - CPU
    The fundamental component of the computer; it interprets and executes instructions and processes data. Some CPU manufacturers are Intel and AMD.

    - Memory
    Components that retain computer data for a period of time. Memory divides into:

    • Primary memory: this refers to RAM, which holds the running programs and the data they use; its contents disappear when the computer is switched off
    • Secondary memory: this refers to physical storage devices such as hard disks, CDs or flash memories, used to keep information permanently or until the user decides to delete it. Some of their characteristics are capacity, how the information is addressed, and the reading method
    - Keyboard
    A peripheral with a set of keys used to enter data into a computer.

    - Mouse
    An input peripheral controlled with one hand that detects movement on a flat surface and translates it to the operating system's interface. This peripheral made possible the evolution from terminal-based systems to operating systems with a graphical interface.

    - Monitor
    A device that shows what the computer is doing in the form of a graphical interface. In recent years monitors have gone from tube screens to LCD or flat screens, which have higher resolution, more colours and tire the eyes less.

    - Hard disk
    A non-volatile memory device, meaning the information remains stored even when the computer is switched off. It consists of a set of metal platters spinning at high speed with a series of heads above them that read or write electromagnetic pulses, much as recording tapes do.
    A type of hard disk with no moving parts is now coming onto the market, the so-called solid-state disk (SSD). There are also external hard disks, similar to a pendrive but with the capacity of an ordinary hard disk.

    - Optical disc
    Plastic discs that store information in the form of grooves on the disc's surface, read by the drive's laser beam according to the reflected light. Their spread began when Sony launched the first CD in 1983; later the DVD was created with 5 times the capacity, and nowadays Blu-ray has about 50 times the capacity of a CD, and it keeps growing.


    Evolution of the storage capacity of optical discs

    - Power supply
    The device that converts alternating current into direct current at a voltage suitable for the computer's components. The mains usually carries 220 V, and the power supply converts it to the voltages the computer uses, around 12 V or 5 V.

    - Motherboard
    The computer's printed circuit board, into which the basic components are inserted and which contains the buses, the paths that interconnect those elements. Today's boards include elements that in another era would have sat on an expansion board.

    - Expansion card
    Cards inserted into the motherboard: devices with integrated circuits used to extend things such as RAM, the graphics card, etc.
    Some elements that used to sit on cards are now included on the motherboard, thanks to the shrinking size of chips, which also cuts costs.

    For other elements, and images of them, this PDF contains more information.
              Topic 5 - Computer applications for emergency management and coordination        
    1. Introduction

    This chapter aims to present some fundamental concepts of end-user computing as it is understood today. We will begin with fundamental concepts such as software and hardware.
    We will also discuss operating systems, which bring computers to life.
    We will then cover some of the fundamental applications that make up an office suite. It is important to know and understand the concepts of Internet and Intranet, and to know what we can expect from them.
              BECAUSE THE NIGHT...        

    Anyone who thinks working night shifts is easier and more relaxing "because nobody calls anyway, so you can sleep" will receive a visit from me at home, during which I will set fire to every furnishing in sight. Then I will tie the incautious commentator up like a salami and leave him dangling head-down from the burning balcony as an everlasting warning to passers-by.

    Let's immediately debunk the myth that "nobody calls": we get calls from all over the world, so when it's midnight here it's 3.00 in the afternoon in Las Vegas and obviously people are working. It's true that in New York the offices close at 6.00 pm, but the shops can stay open until 10.00 pm.
    As our lovely planet turns (along with who_knows_what_else), the calls start coming in from Hawaii, Japan, Korea, China, Thailand, India, Dubai (not all in languages I understand... with the Koreans I put on comedy routines you wouldn't believe, given that the only thing they can say in English is "Korea Rock!"), and before dawn arrives you have built up enough bile to dissolve a whole warehouse of asbestos!
    Add to that the Italian customers who have paid for 24/7 support and who, I assure you, do call, if only so as not to feel alone in those immense factories of paper tissues, sanitary towels and nappies; the sex maniacs from Pakistan who somehow got hold of the toll-free number of one of our customers and whose ardour is not cooled even when a deep male voice answers; and the doctors and nurses on duty, for whom we provide IT support, and there we're all set.

    Speaking of doctors, just the other evening I called one around midnight because I had been told he had a problem with his e-mail and that he too was on the evening shift. Since he said he was busy taking blood samples at that moment, I offered to call him back later, given that I certainly wouldn't be sleeping anyway. And he, instead of replying with a simple "very kind of you", found nothing better to do than treat me to a LECTURE on the decline of the welfare state in Italy, on how in the 1970s people worked only 6 hours a day, how there were no shifts, how none of us will ever see a pension now, how mortgages are expensive, how there are no jobs, how it's all one big racket...
    I didn't reply, but I would SO have liked to tell him that he is, after all, a doctor, that his salary is certainly much higher than mine, and that each night shift earns me barely 6 euros extra. Too little to put up with this as well.


    SONG OF THE NIGHT: Patti Smith - Because the Night

    With love we sleep
    With doubt the vicious circle
    Turn and burns
    Without you I cannot live
    Forgive, the yearning burning
    I believe it's time, too real to feel
    So touch me now, touch me now, touch me now
    Because the night belongs to lovers

    Because tonight there are two lovers
    If we believe in the night we trust
    Because tonight there are two lovers


              Hackers: a 45-million-dollar ATM theft        


    A group of hackers, though we should more properly speak of computer fraudsters, managed to steal 45 million dollars in a few hours, according to a report in the New York Times. The criminals removed the withdrawal limits on a certain number of prepaid cards, created new access codes, and then loaded them onto magnetic cards of every kind.

    Large amounts of money taken from bank funds were loaded onto these fictitious prepaid cards. The final phase of the operation consisted simply of withdrawing cash from ATMs. It was precisely at the moment of withdrawal that the criminals failed to conceal the large quantity of banknotes being dispensed.

    Eight people were arrested by the New York authorities; in the meantime, however, 2,904 ATMs had been drained in about 10 hours, for 2.4 million dollars withdrawn in cash. The only real silver lining is that account holders' money was untouched.


              Anti-hacker smart cards: here is the Tookan software        


    Tookan is the name of the software developed by researchers at Ca' Foscari University of Venice to give smart cards greater protection. The program is designed to detect and prevent network attacks against USB devices, smart cards and the high-security cryptographic hardware used by large corporate groups.

    The project grew out of the efforts of a team led by Professor Riccardo Focardi, who teaches computer science in the Department of Environmental Sciences, Informatics and Statistics at Ca' Foscari. Tookan is reportedly able to detect and flag the traces of attacks on computing devices. Not only that: it determines the nature of the attack and the way to stop it. The strategy devised to prevent an attack is based on analysis models that synthesize all the research produced to date in this field.

    The first tests have already been run on real devices and have exposed concrete attacks, threats of which the manufacturers were warned and for which they have provided fixes. Academia, too, seems to appreciate Tookan's merits: Professor Focardi and his team placed fifth in the international computer security competition organized by UCSB (the University of California, Santa Barbara).

    For once it is an Italian university demonstrating the great potential of public research. Let us hope that the insights of the Ca' Foscari researchers will soon translate into concrete benefits for end users.


              Billboard hackers for Pirate Bay        

    The hack in Republic Square, Belgrade


    As in Italy, in Serbia too the problem of computer security is badly neglected. To raise public awareness, and to alert an advertising company to the flaw affecting it, Ivan Petrovic and Filip Stanisavljevic, two twenty-year-old Serbians with a passion for hacking, decided to turn a billboard in Republic Square, Belgrade, into their personal screen: they displayed video-game matches driven from their smartphones, and for over 20 minutes posted the Pirate Bay logo together with a quote from Gandhi:

    First they ignore you, then they laugh at you, then they fight you, then you win!

    The pair's stunt attracted considerable media attention, and DPC, the advertising company on the receiving end of the prank, decided to reward the two young hackers with an iPad mini 4G each. The prize was handed over at the company's headquarters, where they were invited to give a talk on computer security and on the flaw they had found.

    [youtube FO_o8casdhs]

    Slobodan Petrovic, a DPC manager, commented: "Nothing like this had ever happened before, but we appreciate that these young men pointed out this enormous problem to us in such splendid fashion. Now more than ever it is clear that we need better protection." According to the DPC director, the two students are lucky to be in Serbia, where the legislation does not provide severe penalties for this kind of offence. "In more developed countries, actions like these are unthinkable because of heavy sanctions," he added.

    And if it had happened in Italy, what would have become of Ivan and Filip? What do you think? Have your say in the comments.

    Source: Torrent Freak


              Apple attacked by Eastern European hackers        

    [caption id="attachment_7359" align="alignleft" width="300"]Apple attacked by Eastern European hackers[/caption]

    The Chinese trail goes cold. Unlike the attacks suffered by other IT giants such as Facebook and Twitter, the hackers who struck Apple do not operate out of China. Cupertino itself has announced that it traced the origin of the pirate operation to Eastern Europe.

    According to what Bloomberg has revealed, the hacker group is reportedly hunting for the corporate secrets of the world's most important companies, with Moscow as its chosen base.

    The attack was carried out through a fake iPhone developer portal that infected the Macs of anyone who visited it. Cupertino has already taken countermeasures, releasing a Java update that removes the risk.

    Further proof of the permeability of Apple's systems, and of the direct link between information and power: a tug-of-war that lines up the world's greatest powers, China first among them.


              Cyber Crime Conference: 27–28 March in Rome        

    [caption id="attachment_7298" align="alignleft" width="300"]Cyber Crime Conference: 27–28 March in Rome[/caption]

    The date is approaching, and computer-security enthusiasts are getting ready to converge on Rome. On 27 and 28 March the capital will host, at the Crowne Plaza Convention Centre, the third edition of the Cyber Crime Conference.

    The key themes of the event: infrastructure vulnerability assessment, cybercrime, cyber warfare and intelligence, geopolitical analysis and outlook, cyber defence and info-sharing, corporate data security, and training.

    Hackers will take part, offering the latest news on privacy, legality and the protection of critical infrastructure. "We will take a detailed look at the most significant cybercrime events and security incidents of recent years in Italy and worldwide, and at the trends for the future. We will also analyse the emerging phenomena that characterised 2012 and continue to do so, such as hacktivism, espionage and sabotage/cyber warfare."

    For more information, see the portal http://www.tecnaeditrice.com/forumcybercrime13_presentazione.php

     


              China vs USA: cyber-attacks usher in a new cold war        

    [caption id="attachment_7160" align="alignleft" width="300"]China vs USA: cyber-attacks usher in a new cold war[/caption]

    China vs USA. The new cold war runs along the wires of the Web. It is a frontier destined to grow ever more important on the international chessboard, and the world's superpowers are proving it. Investigations continue into the attacks that have hit American companies and institutions. The cyber-attack is said to originate from a building housing the Chinese army.

    The discovery is credited to the security firm Mandiant, but the finding is not conclusive. In other words, it cannot be stated with certainty that the Chinese government carried out the attacks. In principle, anyone could have mounted the offensive, perhaps acting on their own initiative. There is also the possibility of human error, i.e. that Mandiant's analysts got it wrong. Theoretical quibbles, one might say, but ones that, de facto, make any final verdict impossible.

    Reading the release of Mandiant's latest findings with a pinch of malice, indispensable in matters this subtle and delicate, we might see a clear warning from Washington to Beijing: "if the attacks continue, we are ready to respond". Indeed, just a few days ago a new digital-defence plan was approved, giving the President carte blanche.

    According to Mandiant's CEO, Kevin Mandia, there is no ambiguity about the nature of the attacks: "either the attacks come from Unit 61398 (the building in question, on the outskirts of Shanghai), or the people who run the most controlled and monitored Internet network in the world have no idea that thousands of people are launching attacks from their own suburbs".

    According to New York Times journalists, in almost seven years of activity the hacker group known as "Comment Crew" or "Shanghai Group" has stolen "terabytes of data from companies such as Coca-Cola, increasingly focusing on firms tied to the critical infrastructure of the USA – the power grid, gas lines and waterworks. [...] According to (Mandiant) the unit was among those that attacked the security firm RSA, whose computers hold confidential government and corporate data."

    Beijing rejects every accusation, maintaining that it engages in no hacking of computer systems whatsoever. If anything, China itself would be the victim of attacks mounted by various groups based in the United States. From alleged perpetrator to victim of Stars-and-Stripes bullying, according to the latest news. Could it be true?

    It is hard to imagine a one-way battle. If information is the power of the future, the true atomic bomb capable of steering stock indices and speculative bubbles and of toppling the planet's governments, then the USA and China are in the same boat, albeit rowing in opposite directions. The goal is one and the same: to hold the most relevant and precious information. A battle down to the last bit. Whoever wins will certainly not do so for the good of the peoples; at most, for the elite that rules them.


              Mobile hacking: easy with a Nexus 7 Pwn Pad        

    News from the world of mobile hacking. If hacking a network used to be hard, painstaking work that took a great deal of time, not any more!

    The news broke just over an hour ago: the developers at Pwnie Express have built a kit for the perfect Android-based hacker, a modded Google Nexus 7 tablet able to handle the huge volume of wifi packet traffic needed to test the robustness and security of a network.

    To work around the technical limits of the built-in chip, the tablet is fitted with an external wifi antenna and a suite of purpose-built programs that handle every operation with quick taps and few steps.

    The Pwn Pad (as the tablet is called) joins Pwnie Express's wide range of security products and is sold to the public for $795.00 USD. That price is not within everyone's reach, but given the feature set on offer, a security professional might well consider it. The list of apps is genuinely vast, and some of them cannot be used on other devices because of hardware limitations:

    Wireless Tools:
    • Aircrack-ng
    • Kismet
    • Wifite-2
    • Reaver
    • MDK3
    • EAPeak
    • Asleap-2.2
    • FreeRADIUS-WPE
    • Hostapd

    Bluetooth Tools:

    • bluez-utils
    • btscanner
    • bluelog
    • Ubertooth tools
    Web Tools:
    • Nikto
    • W3af

    Network Tools:

    • NET-SNMP
    • Nmap
    • Netcat
    • Cryptcat
    • Hping3
    • Macchanger
    • Tcpdump
    • Tshark
    • Ngrep
    • Dsniff
    • Ettercap-ng 7.5.3
    • SSLstrip v9
    • Hamster and Ferret
    • Metasploit 4
    • SET
    • Easy-Creds v3.7.3
    • John The Ripper(JTR)
    • Hydra
    • Medusa 2.1.1
    • Pyrit
    • Scapy

    If you are interested in the Pwn Pad's capabilities and in running security tests quickly, with all the convenience of a tablet, you can make your purchase by clicking here.
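    Many of the network tools listed above revolve around the same basic primitives. A TCP connect scan, the simplest technique a scanner like Nmap offers, can be sketched in a few lines of Python; this is a toy illustration of the idea, not how Nmap itself is implemented:

```python
# Minimal TCP connect scan: try a full TCP handshake on each port and
# report the ones that accept. Real scanners add raw packets, timing
# templates and service fingerprinting on top of this basic idea.
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Probe a few well-known ports on the local machine.
    print(scan("127.0.0.1", [22, 80, 443, 8080]))
```

    `connect_ex` returns an error code instead of raising, which keeps the loop simple; a closed port typically answers with an immediate connection refusal, while a filtered one just times out.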

    Here are a few screenshots:

    [screenshots: Pwn Pad interface; Metasploit]

    Those without 795 dollars to spend can always experiment with the excellent dSploit, a free Android app for hacking the networks it is connected to, which offers a good number of tools ranging from sniffing to MITM (Man In The Middle) attacks.

    Source: FabrizioPuce.it


              Computer security: the new US plan        

    [caption id="attachment_7085" align="alignleft" width="294"]Computer security: the new US plan[/caption]

    It is almost an obligatory step, given the paramount importance that digital protection has assumed in government planning. US President Barack Obama has signed the executive order on computer security.

    It is a hot front, so much so that Barack Obama underlined his intervention by arguing that attacks by computer pirates «represent a rapidly growing threat». In the President's view, the role taken on by hackers is no longer limited to actions against private individuals or companies. The new frontier of piracy reaches «our power grid, our financial institutions and our air traffic control systems».

    Hence the urgency of decisive government action: «we cannot look back years from now and wonder why we did nothing». The plan inaugurated by the White House contemplates the development and application of practices shared with industrial partners. The body in charge of drawing up the new standards will be the National Institute of Standards and Technology.

    The Enhanced Cybersecurity Services programme will also be expanded, supporting the companies that operate digital infrastructure, the first targets of hacker attacks.


              Twitter suffers an attack; 250,000 users at risk        

    [caption id="attachment_6975" align="alignleft" width="300"]Twitter suffers an attack; 250,000 users at risk[/caption]

    Only a few hours have passed since the moments of panic that gripped users of the microblogging platform par excellence. Twitter has suffered a serious attack that put thousands of accounts at risk: roughly 250,000 users were exposed to the offensive of unknown hackers. The figures are confirmed in the official statement on the company's blog:

    Our investigation has so far indicated that the attackers may have had access to a relatively limited set of user information (they may have obtained usernames, email addresses, etc.). As a precautionary measure, we have reset the passwords of the accounts involved. An email will arrive in your inbox asking you to set a new password. The old password will, of course, no longer work.
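    Twitter's advisory does not say how the stored passwords were protected. As a general illustration of why a leaked credential database is not immediately usable by attackers, here is a minimal salted-hash scheme built on Python's standard library; this sketches common industry practice, not Twitter's actual implementation:

```python
# Minimal salted password storage with PBKDF2 (Python stdlib).
# Illustrative sketch of standard practice, not Twitter's code.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # work factor: slows down offline brute force

def hash_password(password: str) -> tuple:
    """Return (salt, digest); store both, never the plaintext."""
    salt = os.urandom(16)  # unique per user: defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("123456", salt, digest)
```

    With a scheme like this, a stolen database yields only salts and digests, and each candidate password must be ground through the slow derivation function per user, which is precisely why a forced reset, as in Twitter's advisory, buys users time.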

    Are you a regular Twitter user? Check whether the staff's notice has arrived in your email inbox!


              New York Times under attack by Chinese hackers        

    [caption id="attachment_6973" align="alignleft" width="248"]New York Times under attack by Chinese hackers[/caption]

    The Eastern colossus throws its weight around in hacking too. Four months of attacks by Chinese cyber-pirates against the New York Times: so says the well-known American newspaper, which apparently drew the ire of the Chinese government after publishing, last October, an investigation into the enormous hidden fortune of prime minister Wen Jiabao.

    As soon as the story broke, the New York Times website was blocked in China. But that was only a prelude. From that moment on, day after day, the newspaper's computer systems were reportedly subjected to relentless attacks.

    The Chinese pirates' haul is said to amount to 53 journalists' passwords, including those of David Barboza, Shanghai correspondent and author of the article, and of Jim Yardley, Beijing correspondent. Not only that: files and emails concerning the Wen family were also stolen. The attacks reportedly originated from a Chinese university.


              #Pdftribute: the Web honours the memory of Aaron Swartz        

    [caption id="attachment_6871" align="alignleft" width="300"]#Pdftribute: the Web honours the memory of Aaron Swartz[/caption]

    The tragic, untimely death of the hacker and activist Aaron Swartz is being commemorated on the Web with #Pdftribute. An early contributor to the Creative Commons licences, Swartz, who died by suicide, has, let us recall, given rise to a protest movement: a hashtag used for the free publication of copyrighted scientific articles, ebooks and theses. Italian users have not been absent.

    Many researchers, programmers and academics have joined the #Pdftribute hashtag. Among the most common topics in the shared documents, predictably enough, is computer security. Luca Rossi, a researcher at the Department of Communication of the Carlo Bo University of Urbino, for example, has decided to offer numerous articles on user behaviour and interaction on virtual platforms.

    Giovanni Amoroso, meanwhile, is offering his thesis "Nuove forme di diffusione della musica: il free download". On the economics side we highlight Alberto Cottica, an expert in collaborative public policy working at the Council of Europe, who has published dozens of articles. And you, did you share any resources during this online commemoration?

     


              Operation Red October: the malware that struck international intelligence        

    [caption id="attachment_6858" align="alignleft" width="300"]Operation Red October: the malware that struck international intelligence[/caption]

    It is a discovery bound to attract the attention of international security services. Researchers at Kaspersky Lab have uncovered a large-scale espionage network that hit thousands of diplomatic, governmental and scientific organisations across 39 countries, including Russia, Iran and the United States. These are the findings on the espionage campaign "Operation Red October", active since 2007, which is believed to have already harvested thousands of terabytes of sensitive information.

    More than a thousand distinct, never-before-seen modules were used to tailor the attack to each victim's profile. The targets included individual computers, Cisco Systems equipment and smartphones (from Apple products to Microsoft and Nokia). The attackers' goal was to acquire sensitive documents held, for the most part, by geopolitical intelligence. Kaspersky Lab's experts opened the investigation that uncovered the worldwide spy network in October 2012; the network was probably active until the first days of January 2013.

    At the centre of the attacks is a piece of malware dubbed "Rocra", whose strength lies precisely in its unprecedented modular structure. A novelty even for Kaspersky's experts: "we have never seen modules distributed this way, reaching such a high level of customisation in mounting a cyber-attack that it amounts to something entirely new".

    The harvested data include files obtained from encrypted systems such as Acid Cryptofiler. Stolen credentials were reused to work out login details on other platforms. At the moment very little is known about the people and organisations behind the project. The contradictory nature of the available evidence makes it hard to pin down the hackers' nationality: although some of the malware's developers are of Russian origin, many of the exploits used had been developed, at least initially, by Chinese hackers.

    The hardest-hit country is Russia, followed by Kazakhstan, Azerbaijan, Belgium, India, Afghanistan, Iran and Turkmenistan: 39 countries in all, across several continents. More than 60 proxy-server names were used to obscure the final destination, an infrastructure designed to protect the hackers' identity and withstand counter-attacks: "It is a well-structured and well-managed infrastructure, supporting multiple layers of proxies to protect the mothership (the top level of the structure)". What is particularly surprising is the reliability such a sophisticated system maintained over a five-year period.

    One of the entry routes of the Operation Red October malware is the creation of an Adobe Reader and Microsoft Word extension on the machines to be compromised. And if Operation Red October could run for five years without being dismantled, one naturally wonders whether other, similar projects are still under way, and what the consequences are for international security.


              Kaspersky to launch an operating system for industrial computers        

    The first hints appeared on the personal blog of the founder of Russia's Kaspersky Lab, Eugene Kaspersky. The project is ambitious: to develop a special operating system able to withstand any computer attack, from viruses to malware to crackers. A genuine security-first revolution that would considerably ease the anxiety users, especially professionals, have shown in recent years.

    For reasons tied to industrial espionage by the competition, Mr Kaspersky limited himself to disclosing the project's name: 11.11. The numbers recall the project's start date; it was launched more than 10 years ago and is still far from completion. But what are its characteristics? The first leaks suggest that:

    • it will consist of a very small codebase

    • it will run only on industrial computers (ICS, Industrial Control Systems) and on SCADA (Supervisory Control And Data Acquisition) infrastructure controllers

    • it will not be distributed to ordinary consumers, but only to government-grade installations, where security is everything

    • it will execute tasks pre-programmed at release, so that the software cannot be manipulated by hackers.

    But why develop such an elaborate software platform? This is Kaspersky's analysis: the recent virus and malware attacks appear to be "the product of a new kind of worldwide conflict, in which small states and great political-economic powers do battle by recruiting teams of computer pirates tasked with building sets of cyber-weapons aimed at their rivals".


              Meeting opportunities. February 2009        
    Two more important occasions to meet: Wednesday 11 February 2009, 5–7 pm, I will be in Turin, at Corso Trento 21, for "I mercoledì di NEXA". The NEXA Center is a centre of the Dipartimento di Automatica e Informatica of the Politecnico di Torino. For information: http://nexa.polito.it/events; Friday 20 February 2009, 3 pm, I will be in Potenza, at the Aula […]
              Mei Liu, Expert in Healthcare Analytics, Appointed         

    Mei Liu, PhD, a computer scientist who uses advanced informatics approaches to improve health care, will join the NJIT College of Computing Sciences this fall as an assistant professor. 

    Tagged: college of computing sciences, ccs, healthcare, mei liu, emr, electronic medical records, healthcare analytics, bioinformatics, natural language processing, data-mining



              Review: "La confraternita delle ossa"        
    Here I am, finally publishing my review of "La confraternita delle ossa" by Paolo Roversi.
    As you may recall, I read it as a preview back in August, thanks to being part of a select group, the Confraternita dei lettori, fittingly enough. 
    It was my first encounter with a very young Enrico Radeschi, a likeable, bumbling rogue who nonetheless has a "nose" for things and tries to make his own way, helped along by a fair amount of cheek and a newly discovered, unusual talent for computing.
    It is 2002: Italy has just traded the lira for the euro, Papi's Sarabanda is on TV, and Altavista is the search engine par excellence, overshadowing a little-known Google. 
    Enrico Radeschi, a twenty-seven-year-old from the Po lowlands with a freshly minted degree in humanities, arrives in Milan: he lives in a cramped flat shared with a Calabrian roommate and scrapes by on a thousand odd jobs, more or less legal, to make ends meet and try to become a real journalist. 
    In the heart of this cold, elusive Milan, a well-known lawyer is murdered in broad daylight; before drawing his last breath, he manages to trace a strange sign in his own blood, a sign pointing to a brotherhood inspired by Saint Charles Borromeo. 

    Meanwhile, a mysterious woman lures young men into her web and kills them after seducing them. 
    They look like two perfect cases for MilanoNera,