Ultimately, it depends on the architecture that your machine has.
(Background: nodes can only store data in their properties, and those properties are kept as a key-value store.)
The value of each property is limited to Java primitives (int, float, etc.), strings, and arrays of primitives/strings.
Therefore, the maximum amount of data a single property can hold is bounded by the maximum size of a string or of an array of strings. On 32-bit machines that limit is 4 GB (in practice it may be closer to 2-3 GB).
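To make the type restriction concrete, here is a small sketch in Python; it is an illustration of the rule described above, not Neo4j's actual validation code:

```python
# Hypothetical sketch of the property-value rule described above:
# a value must be a primitive, a string, or a homogeneous array
# of primitives/strings. This mirrors the rule, not Neo4j's code.

PRIMITIVES = (bool, int, float)

def is_valid_property_value(value):
    """Return True if `value` matches the allowed property types."""
    if isinstance(value, PRIMITIVES) or isinstance(value, str):
        return True
    if isinstance(value, (list, tuple)):
        if not value:
            return True  # treat an empty array as trivially valid
        first_type = type(value[0])
        allowed = first_type in PRIMITIVES or first_type is str
        # arrays must be homogeneous: every element shares one type
        return allowed and all(type(v) is first_type for v in value)
    return False
```

Anything else (maps, nested structures, other objects) would have to be serialized to a string or byte array first.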
(Also, having said this, there was previously a bug that limited string size to 1 MB. I expect this has since been resolved.)
Of course, this raises the question of whether multiple properties could store more than 4 GB per node. Since the properties list is essentially a key-value store, I would expect the maximum size to be limited only by disk space and key selection. I can't find anything to confirm or refute this, however.
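If a single property really is capped, one common workaround is to split a large value across several numbered properties and reassemble it on read. A minimal sketch (this is an application-level pattern, not something Neo4j provides out of the box; the key names are made up):

```python
def chunk_value(data: bytes, chunk_size: int) -> dict:
    """Split a large blob into numbered pieces suitable for storing
    as separate properties on one node (chunk_0, chunk_1, ...)."""
    return {
        f"chunk_{i}": data[start:start + chunk_size]
        for i, start in enumerate(range(0, len(data), chunk_size))
    }

def reassemble(props: dict) -> bytes:
    """Rebuild the original blob from the numbered chunk properties."""
    return b"".join(props[f"chunk_{i}"] for i in range(len(props)))
```

The total is then limited by how many keyed chunks you are willing to manage rather than by the per-property cap.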
That doesn't definitively answer your question, but from what I understand, you should be able to store large amounts of data per node (up to disk capacity).
You restart Neo4j by calling /path/to/neo/bin/neo4j restart, or /etc/init.d/neo4j restart if you installed it as a service.
The neo4j startup script lives in the same directory as the neo4j-shell script.
Best Answer
It would appear Neo4j (well, the web site) was simply broken for a few days around Aug 7 2017.
As reported by wBob, it does seem to be working now!
It's worth noting that the basic problem reported in the question above seems to be resolved. However, after you "create a sandbox", the system appears to fail when you "go to the data browser" in that sandbox; it then simply starts working after a few minutes (there's no indication while it is still spinning up).
Once you get past all that, it does seem to work, e.g. the "movie" sandbox: