# consuming and publishing data should be easy
# today: non-automated workflows
- ftp/bulk downloads
- REST APIs
- scraping
- writing data import scripts

# make a new dat store
dat init
# put a JSON object into dat
echo '{"hello": "world"}' | dat
# stream the most recent version of all rows
dat cat
# pipe dat into itself (increments revisions)
dat cat | dat
# start a dat server
dat serve
# delete the dat folder (removes all data + history)
rm -rf .dat
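
Since everything speaks stdin/stdout, other tools can consume a dat without language bindings. A minimal sketch in Python, assuming only that `dat cat` emits one JSON object per line (dat may attach its own bookkeeping fields to each row):

```python
import json
import subprocess

# read the rows streamed by `dat cat`, one JSON object per line
proc = subprocess.Popen(["dat", "cat"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    row = json.loads(line)
    print(row.get("hello"))  # field from the example row above
proc.wait()
```
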
# goals of dat
- de facto tabular data "format"
- automated data install/sync
- eliminate glue code
- building block for more complex logic
- be fast!
# transforms
- unix pipes, stdin + stdout
- simple line delimited data (json, csv)
- [example R transform](https://github.com/maxogden/dat/blob/master/examples/transform.r)
- no knowledge of JavaScript required (see the sketch below)
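
The linked example is in R; here is a comparable sketch in Python. It assumes only the line-delimited JSON contract named above: one row per line on stdin, one row per line on stdout. The field rename is invented for illustration:

```python
#!/usr/bin/env python
import json
import sys

# read one JSON row per line, transform it, write one JSON row per line
for line in sys.stdin:
    row = json.loads(line)
    row["greeting"] = row.pop("hello", None)  # hypothetical rename
    sys.stdout.write(json.dumps(row) + "\n")
```

Wired up the same way as `dat cat | dat` above: `dat cat | python transform.py | dat`.
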
# small pieces loosely joined


# outside the scope of dat
- diffing/merging
- complex queries
- end-user applications
- package management
- these can be built on top of dat
# use cases
- scientific/research data
- "big data" sets
- streaming/realtime data
# initial goals
- beta testers
- make it fast
- populate the ecosystem with modules
# background
- 6 months of grant funding for a prototype
- 100% open source
- github.com/maxogden/dat
# ways to contribute
- developer preview is available now
- #dat on freenode IRC
- browse GitHub Issues
- comment, leave a use case
- github.com/maxogden/dat/issues
# benchmarks
## adding a large file
- git add, commit: 13m40s
- dat: 11m10s
## adding one line
- git add, commit: 8m30s
- dat: 5ms
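- that is roughly a 100,000× difference (8m30s ≈ 510,000ms vs 5ms)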
# thanks! i'm @maxogden