The beginnings, how this project developed, what was my motivation
What do I think about what makes such generative art good, or what basic requirements I wanted to meet
About glasses as an object
The framework, system environment, software
How this project is structured in terms of layers
Cameras, and camera moving
Compositions and the animation
How it works and the controller usage
AR view
Random parts
Raffle
The beginnings, how this project developed, what was my motivation
In December 2021, I was in the City Park in Budapest with my friend; we make time every two weeks to discuss things with each other. It was then that I decided I wanted to develop my artistic side in addition to programming. I always admired artists who have "daily content", and I thought a lot about what I wanted to do. I knew it would be something that involved both coding and visual art. At that time, generative art meant fractals to me, and I admit I didn't really like that direction. So instead I programmed paintings and artworks - not generatively, but the opposite: each brush stroke separately. I recreated my old oil paintings in pure css or in individually coded 3D, but after these were completed, I didn't feel I had found the path I wanted to follow. There was nothing enjoyable about it; it was very didactic work. The other reason was that there was no social environment where it would be appreciated: most of my friends don't know how to code, so they saw a painting made in pure css as a slightly lame graphic rather than as a work that took a lot of effort.
That evening, we talked about how many artists who played a major role in the art of the 1920s receive too little attention in both education and the art narratives of our environment, and about how much we like Constructivism and the Bauhaus movement. Encouraged by this, I began to process the works of the great constructivist predecessors: I recreated works by Moholy, Van Doesburg, and Mondrian in 3D space. That system was the earlier version of my current one; it was able to generate unique compositions resembling the works of the great predecessors. I found this path on my own - I had never heard before that this line of generative art existed.
Previous works
After that, in the summer of 2022, on another such creative evening, we came up with the idea of creating an object out of my experience with constructivist works. We chose sunglasses because the item is known to everyone on Earth and genuinely popular. The other reason is that it is a wearable object, which opens up many advantages and possibilities. That's when I started making three unique pieces in my already existing system. This lasted until about November, when the three pairs of sunglasses were finished.
These were the first individually coded glasses
Around this time I met another friend, Cloudnoise. Not long before, he had had a successful project on fxhash. When I found out what this platform was, I was very happy, because I had finally found a medium where my work would be appreciated. I also realized that I had a project almost ready to publish on fxhash: my "constructivist artwork generating system" mentioned above. So I did not hesitate to release the "The Constructions" series. I was very happy with the many positive responses and with the fact that several artists I appreciate collected a piece. Even then I knew that the glasses would one day become a generative series, and in November 2022 I started programming Generative Glasses.
What do I think about what makes such generative art good, or what basic requirements I wanted to meet
I think the most important things that make such a series successful are the following:
The most important thing is that it has a part generated by an algorithm that the eye can easily recognize: the viewer can see it but cannot tell at first how it was made. More simply, it makes the viewer think, "How the hell did he do that?"
The output should be of such quality that I myself would display it on my wall
It's a requirement for myself that I don't use image layers, only code. I am a programmer, that is my strength, so the minimum for me is to solve everything with program code
In addition to the three important things, I also think that it should be somewhat artistic and abstract.
Enjoy the process while creating
Promotion
About glasses as an object
Adapting to the proportions and dimensions of real glasses
Adapting to the construction of the object so that it has all the elements that the glasses have
I searched the net for the parts and dimensions of glasses. Then I measured my own glasses. Based on these, I roughly worked out the rules and proportions:
Glasses have temples; the length of the temples is 130-140 mm
The temple tips are at the ends of the temples
The temples connect to the "end piece", which joins the lens and the temple
The width of the lenses is 45-55 mm
The "bridge" between the two lenses is 15-20 mm
Roughly, the width and depth of the glasses are 135 x 150 mm
Lenses can be framed or frameless
Lenses have different thicknesses and dimensions
I came up with three reference points. One is above the ear, where the temple of the glasses touches the head. Another is the end of the temple, where the end piece starts. The third is on the bridge of the nose, where the glasses rest. The program draws these points based on the given scales and then connects them with generative parts. (I will write more about the random parts later.)
Reference points
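To make the idea concrete, here is a minimal sketch of deriving the three reference points from the measured proportions. The function and field names, the coordinate conventions, and the exact offsets are my assumptions for illustration, not the project's actual code; units are millimeters.

```javascript
// Hypothetical helper: compute the three anchor points of one side of the
// glasses from the measured proportions (x: left-right, y: up, z: front-back).
function referencePoints({ templeLength = 135, lensWidth = 50, bridgeWidth = 18 } = {}) {
  const halfFront = lensWidth + bridgeWidth / 2; // center of the face to the end piece
  return {
    earPoint: { x: halfFront, y: 0, z: -templeLength }, // above the ear, end of the temple
    endPiecePoint: { x: halfFront, y: 0, z: 0 },        // where the end piece starts
    nosePoint: { x: 0, y: -5, z: 0 },                   // where the bridge rests on the nose
  };
}
```

The generative parts can then be drawn between these points, so every generated pair still lands within realistic proportions.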
The framework, system environment, software
I mentioned above that I am building a system abstract enough to serve several of my art projects. I have a website (carco.hu) that displays them. The website runs on NodeJs on the server side and renders with ReactJs on both the server and the client. I use GraphQl for the API and MongoDB for the database. Of course, most of the art app runs only on the client.
The art app builds and displays the artwork from a json object. This json object is either saved in the database or, if it is not saved, the program generates such a data set on the fly: from the hash if there is one, otherwise simply with Math.random. This is how I can list and present my artworks on my site, whether generative or individually coded.
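The hash-or-random idea can be sketched as follows. Note that fxhash injects its own deterministic PRNG; this standalone version (an sfc32 generator seeded from the hash characters) only illustrates the principle, and all names here are mine, not the project's code.

```javascript
// Return a deterministic PRNG for a given hash string, or Math.random
// as a fallback when no hash is available.
function makeRng(hash) {
  if (!hash) return Math.random;
  // Fold the hash characters into four 32-bit seeds
  const seeds = [0, 0, 0, 0];
  for (let i = 0; i < hash.length; i++) {
    seeds[i % 4] = (seeds[i % 4] * 31 + hash.charCodeAt(i)) >>> 0;
  }
  let [a, b, c, d] = seeds;
  return function () { // sfc32: returns a float in [0, 1)
    a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
    const t = (a + b | 0) + d | 0;
    d = d + 1 | 0;
    a = b ^ b >>> 9;
    b = c + (c << 3) | 0;
    c = (c << 21 | c >>> 11) + t | 0;
    return (t >>> 0) / 4294967296;
  };
}

// The same hash always yields the same parameter set
const rng = makeRng("oo3abc");
const params = { lensWidth: 45 + rng() * 10, bridgeWidth: 15 + rng() * 5 };
```

Because the whole artwork is rebuilt from the json object, a saved data set, a hash-derived one, and a purely random one all flow through the same rendering path.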
I use ThreeJs for 3D and the native Html5 Canvas for 2D. I used TensorFlow for the augmented reality view and TweenJs for the animation. The default font of the system is Roboto (a Google font). I use two other Google fonts for the callout notations of Generative Glasses: Permanent Marker and Kalam.
The framework saves the animation on the server side with Puppeteer. The images are also saved with Puppeteer, but the image data from the canvas is collected on the client side by the headless browser and then sent back to the Puppeteer api on the server, which saves the image. Better quality can be achieved this way. Unfortunately, there is still a 4K limit; that is what works reliably. A larger size could only be saved by cutting the canvas into pieces.
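The tiling that a larger-than-4K save would require comes down to simple rectangle math. This is purely an illustrative sketch of how such tiles could be computed, not code from the project:

```javascript
// Cut a width x height canvas into tiles no larger than maxSide on a side,
// so each piece stays within the safe rendering limit.
function tiles(width, height, maxSide = 4096) {
  const out = [];
  for (let y = 0; y < height; y += maxSide) {
    for (let x = 0; x < width; x += maxSide) {
      out.push({
        x, y,
        w: Math.min(maxSide, width - x),
        h: Math.min(maxSide, height - y),
      });
    }
  }
  return out;
}
```

Each tile would then be rendered and captured separately, and the pieces stitched back together on the server.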
How this project is structured in terms of layers
At the very bottom there is a canvas tag on which I draw in 2D. An array in the json object defines which drawing functions the program code should run and contains unique parameters for them. The program iterates through the array and, at every step, draws on the same canvas. In this way I draw:
the background color,
then the canvas texture,
the border,
the title box in the product design view with all its contents (signature, date),
and even the marker background.
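The steps above can be sketched as a small dispatch: each json entry names a drawing function and carries its parameters. The function names and json shape here are my assumptions for illustration, not the project's actual code.

```javascript
// Registry of 2D drawing steps; each receives the canvas context
// and its own parameters from the json entry.
const drawFns = {
  background(ctx, { color }) {
    ctx.fillStyle = color;
    ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  },
  border(ctx, { color, width }) {
    ctx.strokeStyle = color;
    ctx.lineWidth = width;
    ctx.strokeRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  },
};

// Run every step in the json array, in order, on the same canvas.
function draw2dLayer(ctx, steps) {
  for (const step of steps) {
    const fn = drawFns[step.fn];
    if (fn) fn(ctx, step.params); // unknown steps are simply skipped
  }
}
```

A json array like `[{ fn: "background", params: { color: "#f4f1ea" } }, { fn: "border", params: { color: "#222", width: 4 } }]` then fully describes the 2D layer, which is what makes it both savable and reproducible.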
2D layer
Above it is a 3D canvas, which contains all the 3D elements. The position, size, rotation angle, materials, and other properties (e.g. shadow) of the 3D elements are saved in the json object; the system reads them from here and builds the composition. This 3D space also contains editing lines and callout notations. These can be turned on and off separately for each camera, but the user can also disable them completely. (More about the controller below.)
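Restoring one element from its saved json entry could look like the sketch below. It only assumes the target behaves like a THREE.Object3D (position/rotation/scale vectors with a `set(x, y, z)` method); the function name and json field names are mine, not the project's.

```javascript
// Hypothetical helper: apply a saved json entry to a 3D object.
function applySavedTransform(object3d, spec) {
  const { position = [0, 0, 0], rotation = [0, 0, 0], scale = [1, 1, 1] } = spec;
  object3d.position.set(...position);
  object3d.rotation.set(...rotation);
  object3d.scale.set(...scale);
  object3d.castShadow = !!spec.shadow; // extra saved properties, e.g. shadow
  return object3d;
}
```

Looping this over every entry in the json object rebuilds the whole composition deterministically.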
When saving the image, all the layers had to be merged and the blending options had to be implemented by hand.
Cameras, and camera moving
There are a total of 4 camera views, which can be displayed side by side on a split screen. In the product design view of this project, I use 2 other camera views in addition to the main camera, with their blending set to grayscale. Each camera view has its own background layer and its own effect layer. The position and many other parameters of these camera views can be adjusted in the json object; for example, each camera's view and marker background can differ.
Split screen: 2 other camera views in addition to the main camera
When saving, these must be handled separately and placed on the canvas exactly as they appear in the html content.
The main camera can be moved with the mouse or by touch. It also moves during the animation, and the other cameras follow it according to their spherical parameters.
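Following via spherical parameters amounts to placing each secondary camera at a radius and pair of angles around the shared target. The sketch below mirrors the math behind THREE.Spherical; the parameter names are my assumptions.

```javascript
// Convert spherical coordinates (radius, polar angle phi, azimuthal angle
// theta) around a target point into a Cartesian camera position.
function sphericalToCartesian(radius, phi, theta, target = { x: 0, y: 0, z: 0 }) {
  return {
    x: target.x + radius * Math.sin(phi) * Math.sin(theta),
    y: target.y + radius * Math.cos(phi),
    z: target.z + radius * Math.sin(phi) * Math.cos(theta),
  };
}
```

When the main camera moves, each follower keeps its own fixed angular offset, so the side views stay consistent with the main view.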
Compositions and the animation
The json object can contain several compositions that can be completely different from each other. But if consecutive compositions share the same elements, the system can animate from one to the other. With this I achieved the effect that, during the animation, the image assembles from the "scattered" abstract view into a pair of glasses. When switching between non-consecutive compositions, elements go out of the view and new elements arrive.
So the system handles different compositions. First, I generate a basic composition, then I duplicate it several times and modify the duplicates. This is how the entire animation is created:
There is an abstract view where the parts of the glasses are scattered
There is a front-left view
Front-right view
Top view
Product design view that converts to split screen
Front view, where the object falls apart and starts from the abstract view again
Six compositions
Each composition has its own camera position, target, and other parameters. When switching compositions, the program animates from one to the other; this requires animating the camera position, the camera target, and the camera zoom.
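The per-frame work of that transition is plain interpolation between the two compositions' camera settings. The real project drives this with TweenJs; the following is only a minimal sketch with my own names.

```javascript
// Linear interpolation of a single value.
function lerp(a, b, t) { return a + (b - a) * t; }

// Blend the camera settings of two compositions at progress t in [0, 1].
function lerpCamera(from, to, t) {
  return {
    position: from.position.map((v, i) => lerp(v, to.position[i], t)),
    target: from.target.map((v, i) => lerp(v, to.target[i], t)),
    zoom: lerp(from.zoom, to.zoom, t),
  };
}
```

A tween library then only has to drive `t` from 0 to 1 with an easing curve and apply the result to the camera each frame.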
How it works and the controller usage
A controller ui has been created for the many user-adjustable settings. With it, the user can control by mouse or touch every feature that other generative software usually operates only from the keyboard. It bothered me that I couldn't use such software properly on mobile, and I thought it was important to innovate within generative art when I had the opportunity.
First, one of the 6 compositions appears and the animation starts. The system randomly determines which composition is first in line; this will also be the saved image that the fxhash system creates.
If you don't click or touch anything, the animation loops continuously. If you click anywhere in the area, the controller appears and the animation stops. Until the animation reaches the next composition and comes to a halt, the pause button flashes and all buttons are disabled, so nothing can be pressed yet. Once the animation has stopped, the buttons are enabled again, and you can move the camera in 3D space by dragging. The controller has left and right arrows for stepping between compositions, and the animation restarts with the play button.
There are also a few other buttons:
There is a save button (a floppy disk icon). A menu then appears, offering image saves in different proportions, with the default aspect ratio generated by the program code at the top.
There is a camera icon that restores the original camera view; it is worth pressing it before saving.
Some options are not available in every composition; they only appear if they are enabled in that composition:
Switch between split screen and the main camera view (four-squares icon)
Show and hide the editing lines (pair-of-compasses icon)
Show and hide the callout notations (pencil-and-ruler icon)
Start and stop the AR view (cube icon)
Controller
The keyboard controls also remained:
Press “5” to save an image in the default aspect ratio
Press “6” to save an image in a square aspect ratio
Press “7” to save an image in a portrait aspect ratio
Press “8” to save an image in a landscape aspect ratio
Press “ArrowDown” to restore the original camera view
Press “D” to switch to next composition
Press “A” to switch to previous composition
Press “S” to start animation
Press “Spacebar” to switch to split screen (if it is available)
Press “1” to hide/show Editing lines (if it is available)
Press “2” to hide/show Callout notations (if it is available)
Press “3” to hide/show AR view (if it is available)
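A dispatch for bindings like the ones above can be sketched as follows. The handler factory and the action names are placeholders of mine, not the project's real code; in the browser it would be wired to `keydown` events.

```javascript
// Build a key handler from a map of key names to actions.
// Single-character keys are matched case-insensitively; named keys
// (e.g. "ArrowDown") are matched as-is.
function makeKeyHandler(actions) {
  return (key) => {
    const name = key.length === 1 ? key.toUpperCase() : key;
    const action = actions[name];
    if (action) { action(); return true; }
    return false;
  };
}

// Browser wiring (sketch):
//   const handle = makeKeyHandler({ D: nextComposition, A: previousComposition /* ... */ });
//   window.addEventListener("keydown", (e) => handle(e.key));
```

Keeping the keyboard map and the controller buttons pointed at the same action functions is what lets both input methods stay in sync.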
AR view
If your device has a camera, you can try out how such glasses would suit you. This function is not available in every composition, so if you cannot find the cube icon, step to the next composition and click the cube icon there.
I created this function with the help of TensorFlow's face landmark detection. It's not perfect - there are delays - but it's good enough to try out the glasses.
I'm testing the AR view :)
Random parts
First, I summarize these in points (the programming details can go in another article, if there is a need for it):
Generated parameters of the glasses:
The reference points of the glasses and the main dimension based on them (1)
Item type (sunglasses or glasses with transparent lens) (2)
The thickness of the frame, and whether the frame is translucent or not (3)
The symmetry of the glasses (symmetrical or asymmetrical) (4)
The material thickness of the glasses (5)
Color scheme
Temple height
Geometry of the temple tips (6)
Temple geometry (6)
Geometry of the end pieces (6)
The shape, transparency, width, height, and thickness of the lenses (7)
Bridge width and geometry of the bridge (6)
The rest are generative parts:
The product no. based on the generated parameters of the glasses (8)
Product creation date: between November 2022 and March 2023 (9)
Default aspect ratio for image saving
Preview type - which composition should be first (Abstract view, Front-right, Front-left, Product design view)
The color and transparency of the gray pixels of the canvas texture (10)
The brush size of the marker background, the size of the spaces between strokes, the curvature of the brush strokes, the amount of lead-in and lead-out, and whether the marker background is horizontal or vertical (12)
Abstract composition (the position and angle of rotation of the pieces of the disintegrated pair of glasses) (13)
If you want to participate in the raffle, like and retweet my featured post on my twitter and leave a comment with your hash. If you take a screenshot or a video of yourself trying on a pair of glasses in the AR view, you'll have a better chance of winning! I look forward to the funny pictures to make my day! :)
The minting for Generative Glasses will open on April 4, 2023 at 4 p.m. CET
Thank you for your attention and for the support! Thanks to fxhash and the genart community for the opportunity! I hope I was able to contribute to generative art with my work!