Python Virtual Environments in Windows, virtualenv, lsvirtualenv, venv

Useful commands from command line:
  • lsvirtualenv to view virtual environments
  • C:\Dev\venv\Scripts\activate to activate the virtual environment located in C:\Dev\venv\
Other notes on PyCharm:
  • PyCharm’s built-in Terminal does NOT behave like an external command line. I saw different error messages (formatted differently, less detailed) in the built-in Terminal. In one case, python setup.py did NOT work from the built-in Terminal, while the same command worked from a normal command line.
  • If a package you install requires some version of Visual Studio C++, it did NOT help me to add the Visual C++ directories to the PATH (namely, set PATH=C:\Users\%USERNAME%\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64;C:\Users\%USERNAME%\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\WinSDK\Bin\x64;C:\Users\%USERNAME%\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\WinSDK\Bin;%PATH%). Instead, press “Start”, search for “Visual C++ Command Prompt” or something like that, and use that command prompt; it already includes the relevant directories in its %PATH% variable.
Other notes on python:
  • https://www.lfd.uci.edu/~gohlke/pythonlibs/ — Unofficial Windows Binaries for Python Extension Packages
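
To double-check which interpreter / virtual environment is actually running (useful when a venv is supposed to be active, or when PyCharm’s Terminal behaves differently from a normal command line), a minimal sketch from within Python:
import sys
print(sys.executable)   # e.g. C:\Dev\venv\Scripts\python.exe if the venv above is active
print(sys.prefix)       # base directory of the active environment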

ZCash zcash4win problems starting up

The problem:
  • You are using Windows.
  • You are using http://zcash4win.com/ as your ZCash node / wallet.
  • For some reason, it doesn’t start up as it usually does, and instead shows a strange message: “daemon is taking longer than expected to start”
Solution:
  • Make sure you’re using the latest version from http://zcash4win.com/. This will usually solve it.
  • Otherwise, edit the file zcash.conf as explained in: https://forum.z.cash/t/required-config-change-for-zcash4win-1-0-12/26632

Ethereum Wallet and geth

Windows 10, 64-bit:

Geth:

  • Download Geth from https://geth.ethereum.org/downloads/ (Update, April 3rd: https://ethereum.github.io/go-ethereum/downloads/).
  • Note: There is an “Installer” version and an “Archive” version. You can go with the “Installer” (even though it was once flagged as suspicious by my antivirus software). It is probably also fine to download the 64-bit “Archive”, unzip it, and move the geth.exe into Program Files\Geth\ in place of the older geth.exe you had there. Less important: I believe the “Installer” does a bit more, namely trying to change the PATH variable (it didn’t succeed in my case) and updating the uninstall.exe file as well…
  • Run certutil -hashfile __path_to_file__ MD5 to compare with the MD5 on the website.

Ethereum Wallet:

  • Download Ethereum-Wallet-installer-*.exe from https://github.com/ethereum/mist/releases/.
  • Run certutil -hashfile __path_to_file__ SHA256 to compare with the SHA256 on the website.
  • Run the installer. IMPORTANT: It may suggest placing the blockchain data in C:\Users\___your_user_name___\AppData\Roaming\Ethereum. Here you may change it to D:\Ethereum instead, if you prefer an external drive.
  • Optional (recommended): Create a shortcut with --syncmode "light". In fact, the “Target” in the shortcut can be something like: "C:\Program Files\Ethereum-Wallet\Ethereum Wallet.exe" --node-syncmode "light" --node-datadir="D:\Ethereum"
  • Note: No need to run both geth and the wallet. Just run the wallet (using that shortcut you created!); it will invoke the proper geth itself.

apt-get GPG error mongodb-org

Problem when running apt-get update:
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 Release: The following signatures were invalid: KEYEXPIRED 1507497109
W: Failed to fetch http://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/3.2/Release.gpg  The following signatures were invalid: KEYEXPIRED 1507497109
W: Some index files failed to download. They have been ignored, or old ones used instead.
Solution (from https://askubuntu.com/questions/842592/apt-get-fails-on-16-04-installing-mongodb): essentially, fetch the correct key:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

Apache2: Multiple SSL Virtual Hosts (for multiple domains) on same machine

RewriteLog: Those familiar with earlier versions of mod_rewrite will no doubt be looking for the RewriteLog and RewriteLogLevel directives. This functionality has been completely replaced by the new per-module logging configuration via LogLevel.

Problem I had: Going to “www.a.com” was redirected to https://www.a.com, but “a.com” was NOT.

Some things to check before anything:

  • Does the problem occur in every browser? Clear the cache!
  • Useful command to see configuration of apache2: apache2ctl -S

TensorFlow Cheatsheet

    General cheatsheet

    Initialize a Variable using constants or random values (zeros, zeros_like, linspace, etc.).
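
    For example, a minimal sketch (TF 1.x API, as used elsewhere in these notes; all names here are arbitrary):
    import tensorflow as tf

    a = tf.Variable(tf.zeros([2, 3]))             # 2x3 matrix of zeros
    b = tf.Variable(tf.zeros_like(a))             # same shape/dtype as a, all zeros
    c = tf.Variable(tf.linspace(0.0, 1.0, 5))     # 5 evenly spaced values in [0, 1]
    d = tf.Variable(tf.random_normal(shape=[4]))  # 4 samples from a standard normal
    e = tf.Variable(tf.constant(7.0))             # from an explicit constant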

    Can save and restore models using tf.train.Saver.
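
    A minimal save/restore sketch (TF 1.x; the checkpoint path here is just an example):
    import tensorflow as tf

    v = tf.Variable(tf.random_normal(shape=[3]), name='v')
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()  # by default handles all variables in the graph

    with tf.Session() as sess:
        sess.run(init_op)
        saver.save(sess, './my_model.ckpt')  # writes the checkpoint files

    # Later (e.g. in a new process, after rebuilding the same graph), restore the values:
    with tf.Session() as sess:
        saver.restore(sess, './my_model.ckpt')  # no need to run init_op before this
        print(sess.run(v))  # the same values that were saved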

    Read on from here for more cool stuff: https://www.tensorflow.org/programmers_guide/threading_and_queues

    Graph vs. Session

    (following the great explanation by Danijar Hafner)
    graph = tf.Graph()
    with graph.as_default():
        v = tf.Variable(tf.random_normal(shape=[1]), name='foo')
        print v.shape, v.shape.ndims, v.shape.num_elements()  # (1,) 1 1
        if 1: # Don't do this! tf.global_variables_initializer() defines an op that initializes all variables in the graph (so far); so you should call it AFTER they have all been defined, otherwise you'll get something like "FailedPreconditionError: Attempting to use uninitialized value foo_scalar"
            init_op_notgood = tf.global_variables_initializer()
        v_scalar = tf.Variable(tf.random_normal(shape=[]), name='foo_scalar')  # shape=[] or shape=() defines a 0-dimensional tensor, i.e. a scalar
        print v_scalar.shape, v_scalar.shape.ndims, v_scalar.shape.num_elements()  # () 0 1
        init_op = tf.global_variables_initializer()  # Add an op to initialize all variables in the graph (actually probably best to define this even further down after the entire graph was constructed, but defining it here is already okay for our example)
        
        assign_v = v.assign([101])
        assign_v_scalar = v_scalar.assign(102)
    
    c = tf.constant(4.0)  # Will be defined as attached to the default graph (!) tf.get_default_graph()
    
    # Sanity check
    print c.graph == tf.get_default_graph(), v.graph == graph, graph == tf.get_default_graph()  # True True False
    

    Then instantiate a Session to run our graph:

    with tf.Session(graph=graph) as sess:
        sess.run(init_op)
        print sess.run(v)  # e.g. [-0.407900009]
        print sess.run(v_scalar)  # e.g. 1.30248
        if 1:  # Don't do this part
            sess.run(init_op_notgood)
            print sess.run(v)  # e.g. [0.33414543], a different value than above
            print sess.run(v_scalar)  # e.g. 1.30248, same as above
        print sess.run([assign_v, assign_v_scalar])  # [array([ 101.], dtype=float32), 102.0] -- return values probably not really interesting here
        print sess.run(v)  # [ 101.]
        print sess.run(v_scalar)  # 102.0
    
        if 0: # Error, as 'c' is not an element of the current graph...
            print sess.run(c)  # ValueError: Fetch argument <tf.Tensor 'Const_50:0' shape=() dtype=float32> cannot be interpreted as a Tensor. (Tensor Tensor("Const_50:0", shape=(), dtype=float32) is not an element of this graph.)
    

Docker in Windows, allocate more memory

    Some installation / cheatsheet

    Using Docker for Windows, Windows 10. 17.03.0-ce-win1 (10296). Channel: stable. 94675c5.

    (Tip: Also install Kitematic, then you can right-click the running Docker icon and from there launch Kitematic for a nice GUI to see your containers.)

    Basic commands:

    # Create a tensorflow container:
    docker run -it -p 8888:8888 -p 6006:6006 --name my_tensor_flow -v C:/Data/Docker:/data -v C:/Dev/python:/devpython tensorflow/tensorflow
    # Run it in the future:
    docker start -ai my_tensor_flow
    # To connect to it with bash: (note: /notebooks is where the notebooks are kept, and we can already access the host's C:/Data/Docker in /data)
    docker exec -it my_tensor_flow bash
    # Then from the bash can run: tensorboard --logdir=/data/tensorboard/5/
    

    Apart from the nice GUI, there are some useful commands from the command line:

    docker stats [--all]
    docker ps [--all]
    docker container list [--all]

    Allocating more memory

    For my running container, docker stats showed (in “MEM USAGE / LIMIT”) that it was bounded by 1.934 GiB. Indeed, “Hyper-V Manager” showed that the “MobyLinuxVM” machine had only 2 GB.

    To allocate more memory: open Docker’s Settings, go to Advanced, and set Docker’s memory to e.g. 8448 MB. Docker will restart. Verify that “Hyper-V Manager” now shows “MobyLinuxVM” with an assigned memory of 8448 MB. Run your previous container with docker start -ai container-name, and now docker stats will show a memory bound of 8.003 GiB. Success.

    Note: Running something like

    docker run -it -p ... --name ... --memory-swap -1 --memory 8g -v ...
    will not work if your “MobyLinuxVM” doesn’t have enough assigned memory.

Some notes about Entropy

    Kullback–Leibler divergence (Wikipedia): A non-symmetric difference between two distributions:

        \[D_{KL}(P||Q) = \sum_i{P(i)\log\frac{P(i)}{Q(i)}}\]

    Conditional Entropy:

        \[H(Y|X) = -\sum_{x}{p(x) \sum_{y}{p(y|x) \log p(y|x)} } = -\sum_{x,y}{p(x,y) \log \frac{p(x,y)}{p(x)}}\]

    Joint Entropy:

        \[H(X,Y) = -\sum_{x,y}{p(x,y) \log p(x,y) }\]

        \[H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)\]

    A non-symmetric measure of association (the “uncertainty coefficient”) between X and Y measures the fraction of the entropy of Y that is removed if X is given:

        \[U(Y|X) = \frac{H(Y) - H(Y|X)}{H(Y)} = \frac{I(X;Y)}{H(Y)}\]

    Or the fraction of the entropy of X that is removed if Y is given:

        \[U(X|Y) = \frac{H(X) - H(X|Y)}{H(X)} = \frac{I(X;Y)}{H(X)}\]

    Where I(X;Y) = H(X) + H(Y) - H(X,Y) = H(X,Y) - H(X|Y) - H(Y|X), the mutual information of X and Y, is non-negative and symmetric.

    Anyway, U(Y|X) equals 0 if no association, 1 if knowing X fully predicts Y (i.e. Y is a function of X).
    A symmetric measure can be obtained as a weighted average of U(Y|X) and U(X|Y):

        \[U(X,Y) = \frac{H(X)U(X|Y) + H(Y)U(Y|X)}{H(X)+H(Y)} = 2 \Big[ \frac{H(X)+H(Y)-H(X,Y)}{H(X)+H(Y)} \Big]\]

    Relation to \chi^2 or measures like Cramer’s V etc.:

        \[\chi^2 = N \cdot \sum_{x,y}{\frac{(p(x,y)-p(x)p(y))^2}{p(x)p(y)}}\]

    No obvious relation. Generated some 2×2 contingency tables and plotted their U(X,Y) vs Cramer’s V.
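
    For reference, a rough sketch (not the original plotting code; the counts below are made up) of computing U(X,Y) and Cramer’s V for a single 2×2 contingency table:
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    counts = np.array([[30.0, 10.0],
                       [ 5.0, 55.0]])   # example 2x2 contingency table of counts
    N = counts.sum()
    pxy = counts / N                    # joint distribution p(x,y)
    px = pxy.sum(axis=1)                # marginal p(x)
    py = pxy.sum(axis=0)                # marginal p(y)

    H_x, H_y, H_xy = entropy(px), entropy(py), entropy(pxy.flatten())
    I_xy = H_x + H_y - H_xy             # mutual information I(X;Y)
    U_xy = 2.0 * I_xy / (H_x + H_y)     # symmetric uncertainty coefficient U(X,Y)

    # chi-square and Cramer's V (standard definitions)
    chi2 = N * np.sum((pxy - np.outer(px, py)) ** 2 / np.outer(px, py))
    cramers_v = np.sqrt(chi2 / (N * min(counts.shape[0] - 1, counts.shape[1] - 1)))

    print(U_xy, cramers_v)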

Running Apache and Node on the same server

    Option 1. Apache in the front, /node/ handled by local Node.

    STEP 1: Add this to some .conf file:
    ProxyPass /node/ http://localhost:8000/
    (e.g. save it as some my_proxy_forward_node_to_8000.conf, then enable it with a2enconf.) More notes for this to work (from the ProxyPass documentation):
    • Ensure the mods proxy and proxy_http are enabled (a2enmod).
    • If the first argument ends with a trailing /, the second argument should also end with a trailing /, and vice versa. Otherwise, the resulting requests to the backend may miss some needed slashes and will not deliver the expected results.
    • The ProxyRequests directive should usually be set off when using ProxyPass.
    STEP 2: Then run Node locally on port 8000:
    var http = require('http');
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello Apache!\n');
    }).listen(8000, '127.0.0.1');
    

    Option 2. TrafficServer in the front, Apache and Node behind it.

    STEP 0: Install TrafficServer. (Note: As of 4/2017, this installed version 5.3.x from the repositories, though the real latest version is 7.2.x)
    apt-get install trafficserver
    IMPORTANT: Edit records.config and add this line (I added it at the beginning; I don’t understand why this isn’t the default configuration). If you don’t specify such a user_id, it will refuse to run at all, because it doesn’t want to run as root.
    CONFIG proxy.config.admin.user_id STRING trafficserver
    STEP 1: Enable Reverse Proxying (pass incoming traffic to e.g. another port on localhost, preserving “Host:” headers etc.). Edit records.config (as of 4/2017, only one of the following values actually needs changing from its default):
    CONFIG proxy.config.http.cache.http INT 1
    CONFIG proxy.config.reverse_proxy.enabled INT 1
    CONFIG proxy.config.url_remap.remap_required INT 1
    CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
    CONFIG proxy.config.http.server_ports STRING 8080
    Edit remap.config:
    regex_map http://(.*):8080/ http://localhost:80/
    Finally, reread the config files (traffic_line --reread_config or equivalently -x). IMPORTANT: The 8080 and 80 above (in both records.config and remap.config) should be switched if you really want TrafficServer in the front…
    STEP 2: Log files, monitoring, security:
    https://docs.trafficserver.apache.org/en/5.3.x/admin/working-log-files.en.html
    https://docs.trafficserver.apache.org/en/5.3.x/admin/monitoring-traffic.en.html
    https://docs.trafficserver.apache.org/en/5.3.x/admin/security-options.en.html
    STEP 3: If you want to disable the cache completely (didn’t check this), edit records.config:
    CONFIG proxy.config.http.cache.http INT 0

    (BONUS) STEP: Cache configuration: storage.config. E.g. its default contents were simply the one-liner /var/cache/trafficserver 256M. Changes require a restart of trafficserver. It is recommended to use raw devices; see the documentation. E.g. how to cache everything (e.g. if you’re serving only static files): by default, Traffic Server will cache an HTTP response only if it contains a Cache-Control or Expires header explicitly specifying how long the item should be stored in the cache. To drop that requirement:
    sudo traffic_line --set_var proxy.config.http.cache.required_headers --value 0
    sudo traffic_line --reread_config
    (BONUS) STEP: Use a tool called Cache Inspector:
    sudo traffic_line --set_var proxy.config.http_ui_enabled --value 1
    Edit remap.config, add this line at the top of the file:
    map http://your_server_ip:8080/inspect http://{cache}
    Then restart the trafficserver service.

WordPress Twenty Seventeen modifications

    The correct way is to work with Child Themes instead of the following hacks. To stop the WordPress editor from inserting annoying <p> tags for new lines, add this in functions.php:
    remove_filter( 'the_content', 'wpautop' );
    remove_filter( 'the_excerpt', 'wpautop' );
    To remove the “Proudly powered by WordPress”, comment out the line that fetches this template part in footer.php:
    //get_template_part( 'template-parts/footer/site', 'info' );
    To make the post’s primary content slightly wider at the expense of the sidebar (58%:36% => 65%:31%), simply add this CSS in Appearance => Customize (or alternatively modify these rules directly in style.css):
    @media screen and (min-width: 48em) {
    .has-sidebar:not(.error404) #primary { width: 65%; }
    }
    @media screen and (min-width: 48em) {
    .has-sidebar #secondary { width: 31%; }
    }