Self-ish

In 2016, I began drawing blind contour portraits as a way to strip society's influence from my art. In 2018, I shifted to drawing self-portraits and quickly realized something interesting:

Despite the fact that my subject was always the same, each portrait I drew was vastly different.

But why?

To investigate this, let's explore what it means to draw.

Drawing is fundamentally a feedback loop.

This means that as we draw, we compare our drawing to the subject and make adjustments so that the drawing better represents it.

From a process control perspective, the visual setpoint is the subject being drawn, the process controller is the artist doing the drawing, and the output is the drawing itself.
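
As a rough illustration, this loop can be written out in code. The following is a minimal numeric sketch, not a model of actual drawing: the handful of target values standing in for the subject, and the `gain` and `tolerance` parameters standing in for how aggressively the artist corrects and when they consider the drawing finished, are all assumptions for illustration.

```python
import numpy as np

def draw_with_feedback(setpoint, gain=0.3, tolerance=0.05, max_steps=100):
    """Drawing as a closed feedback loop (numeric analogy).

    `setpoint` plays the subject, the loop body plays the artist
    (the process controller), and `drawing` is the output that is
    repeatedly compared against the setpoint.
    """
    drawing = np.zeros_like(setpoint)        # blank page
    for step in range(max_steps):
        error = setpoint - drawing           # compare subject to page
        if np.abs(error).max() < tolerance:  # satisfied: stop correcting
            return drawing, step
        drawing = drawing + gain * error     # partial correction each pass
    return drawing, max_steps

# A "subject" reduced to a few target values:
target = np.array([1.0, 0.5, 0.8])
result, steps = draw_with_feedback(target)
print(f"converged after {steps} corrections: {result}")
```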

For example:

Say you wanted to draw a flowerpot.

You would first look at the flowerpot.

Then think about what the flowerpot looks like and determine the best way to represent it in 2D.

Without even thinking about it, your brain would then send a signal to the neurons in your arm to contract and relax the muscles needed to move your arm (and pen) across the page.

The movement of your arm and the pressure of the pen on the paper would release ink onto the page, generating a rudimentary drawing.

Once you've finished your mark, you'll look down at the page to see the fruits of your labor and consider what your drawing looks like.

You'll then shift your gaze back to the original object in order to compare the visual setpoint to your drawing and determine any needed corrections.

This process repeats over and over until you are satisfied that your drawing needs no further corrections to match your setpoint.

When performing blind contours, however, drawing is no longer a feedback loop but an open-loop, linear process: perceiving the output and comparing it to the visual setpoint are eliminated from our block flow diagram.
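
Continuing the numeric sketch from earlier, a blind contour is the same code with the read-back deleted: a motor plan is derived from the setpoint once and executed open loop. The `noise` term is an assumption standing in for ordinary motor variability; nothing here corrects it.

```python
import numpy as np

def draw_blind_contour(setpoint, noise=0.1, rng=None):
    """Open-loop variant: perception of the output and the setpoint
    comparison are removed, so deviations are never corrected."""
    if rng is None:
        rng = np.random.default_rng()
    plan = setpoint.copy()                       # interpretation of the setpoint
    wobble = rng.normal(0.0, noise, plan.shape)  # uncorrected motor variation
    return plan + wobble                         # whatever lands on the page stays

# Each run of the same setpoint yields a noticeably different "drawing":
target = np.array([1.0, 0.5, 0.8])
print(draw_blind_contour(target))
print(draw_blind_contour(target))
```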

This suggests that our perception of the drawing functions as a feedback controller that standardizes and sterilizes how we experience reality. But what causes the initial divergences from this standardized version of reality that appear in blind contour drawings and make them so different from one another?

We begin with the simplest node to investigate: the muscular response.

The neuromuscular response is initiated when a neuron in the primary motor cortex (M1) fires. The signal then travels through a chain of neurons and ends in the stimulation of muscle fibers, which contract (or relax) muscles to generate movement.

[Block flow diagram: visual setpoint → perception of setpoint → interpretation of setpoint → muscular response → drawing output → perception of output → setpoint comparison]

To determine whether the differences between portraits arose from a group of motor neurons behaving unexpectedly, portraits were drawn with both the right and the left hand.

In theory, if the neuronal group involved in drawing with the dominant (right) hand behaves erratically while the other does not, then the portraits drawn with the non-dominant (left) hand should display both greater accuracy (when compared to the visual setpoint) and greater similarity to one another.

As one will notice, however, despite similarities between the right-handed and left-handed portraits, the left-handed portraits vary just as much as the right-handed ones, (loosely) suggesting that additional factors are at play and prompting a further adjustment to our block diagram.

[Portrait panels: Right Handed · Left Handed]

[Updated block flow diagram: visual setpoint → perception of setpoint → interpretation of setpoint → muscular response → drawing output → perception of output → setpoint comparison]

After playing around with the images, it became apparent that the similarities between the pairs of portraits were highlighted when they were overlaid on top of the photo taken at the time of drawing.

[Image panels: Right Handed · Overlay · Left Handed]

Noting the likeness between each of the portraits and the photo, but the differences between the portraits when compared to each other, raises the question of what else could be causing such variation if not an abnormal neuronal group. To investigate this question, and specifically how perception is involved in blind contour portrait drawing, gaze tracking was explored as a means of identifying what was being seen.

Extracting a gaze location from an image requires several pieces of information (a geometric sketch follows the list below). These include:

  1. the x-y-z coordinates of the eyes' center of rotation
  2. the x-y-z coordinates of the pupils' center
  3. the position of the object of interest relative to the vertical plane formed by the subject's eyes
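
Under the flat-screen simplification used here, the gaze point reduces to extending the ray from the center of rotation through the pupil until it meets the screen plane. The following is a sketch of that geometry only; the eyeball radius is a hypothetical calibration constant, not something measured in the original setup.

```python
import math

def gaze_on_screen(eye_center, pupil, screen_dist_px, eyeball_radius_px=12.0):
    """Intersect a gaze ray with a screen plane at a fixed distance.

    eye_center, pupil : (x, y) image coordinates in pixels
    screen_dist_px    : assumed distance from the center of rotation to the screen
    eyeball_radius_px : hypothetical calibration constant
    """
    dx = pupil[0] - eye_center[0]
    dy = pupil[1] - eye_center[1]
    # The pupil sits on a sphere around the center of rotation, so its
    # depth offset follows from the radius and the x-y displacement.
    dz = math.sqrt(max(eyeball_radius_px**2 - dx**2 - dy**2, 1e-6))
    t = screen_dist_px / dz              # scale factor out to the screen plane
    return (eye_center[0] + t * dx, eye_center[1] + t * dy)
```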

Unfortunately, high-accuracy gaze tracking requires a fair amount of expensive equipment that was not available. As such, a more rudimentary gaze-tracking analysis was performed by assuming that there was no movement in the z direction throughout the drawing. Additionally, in lieu of highly specialized equipment and fancy algorithms, two simple Python libraries (dlib and GazeTracking) were used to perform (somewhat) accurate gaze tracking at no cost.

To begin, a video of a self-portrait session was recorded. This video was then divided into individual frames, which were exported as photos. Within each photo, the x and y coordinates of each eye's center of rotation were found by first locating several facial "landmarks" that ring the eye and function as a sort of anchor: the x and y coordinates of dlib landmark points 37-42 (right eye) and 43-48 (left eye) were averaged to give the geometric center of each eye (shown below).
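
A sketch of that landmark-averaging step is below, using dlib's standard 68-point predictor (the model file must be downloaded separately). Note that dlib's parts are 0-indexed, so the points numbered 37-42 and 43-48 above correspond to indices 36-41 and 42-47 in code.

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_centers(gray_frame):
    """Geometric center of each eye, averaged from the landmarks ringing it."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])

    def center(indices):
        xs = [shape.part(i).x for i in indices]
        ys = [shape.part(i).y for i in indices]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    # 0-indexed: 36-41 ring the right eye, 42-47 the left
    return center(range(36, 42)), center(range(42, 48))
```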

Once the center of rotation was determined, the center of each pupil was identified using the GazeTracking Python library. Together, the two points define the direction of the gaze: the ray from the center of rotation through the pupil center was extended to find its intersection with the screen, which was assumed to sit a constant 24 inches (roughly 2400 px) from the center of rotation. The x-y value of each of the 1534 gaze vectors at the screen plane was determined this way. A video with the pupil center highlighted by GazeTracking can be seen below. The intersection points were then plotted using matplotlib.
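
Pieced together, the per-frame pipeline might look like the sketch below, reusing the `eye_centers` and `gaze_on_screen` helpers from above. The video filename is hypothetical, the 2400 px screen distance is the assumption already stated, and which GazeTracking pupil pairs with which dlib eye center is a convention that would need verifying against both libraries.

```python
import cv2
import matplotlib.pyplot as plt
from gaze_tracking import GazeTracking

SCREEN_DIST_PX = 2400                    # assumed 24 inches to the screen

gaze = GazeTracking()
points = []

video = cv2.VideoCapture("self_portrait_session.mp4")  # hypothetical filename
while True:
    ok, frame = video.read()
    if not ok:
        break
    gaze.refresh(frame)                  # locate the pupils in this frame
    pupil = gaze.pupil_left_coords()     # (x, y), or None if not detected
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    centers = eye_centers(gray)          # dlib helper sketched earlier
    if pupil and centers:
        # centers[1] is assumed to match pupil_left_coords(); verify conventions
        points.append(gaze_on_screen(centers[1], pupil, SCREEN_DIST_PX))
video.release()

# Scatter the estimated on-screen intersection points.
xs, ys = zip(*points)
plt.scatter(xs, ys, s=4)
plt.gca().invert_yaxis()                 # image coordinates: y grows downward
plt.title("Estimated gaze intersections with the screen")
plt.show()
```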

When the plotted points are compared to the final drawing, however, it becomes clear that this method of rudimentary gaze tracking does not work.