I know many of you might have trouble understanding what UV/ST coordinates are. I did too when I was starting out with OpenGL texturing. Without wasting too much time, it's just:
(U, V) = (X, Y) of the texture.
U = X pixel position of the image / image width;
V = Y pixel position of the image / image height;
And we know how to fit something into the 0 to 1 range: just divide it by its maximum. That's why we divide the pixel position by the width and the height, purely to bring it into the 0 to 1 range.
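Here is a minimal C++ sketch of that division (the UV struct and pixelToUV function are just names made up for illustration, and the 512 x 256 image size is an assumption):

#include <cstdio>

// Texture coordinates in the 0..1 range.
struct UV { float u, v; };

// Divide the pixel position by the image size to normalize it into 0..1.
UV pixelToUV(float px, float py, float imageWidth, float imageHeight) {
    return { px / imageWidth, py / imageHeight };
}

int main() {
    // Example: pixel (204.8, 153.6) in a 512 x 256 image.
    UV uv = pixelToUV(204.8f, 153.6f, 512.0f, 256.0f);
    std::printf("U = %.2f, V = %.2f\n", uv.u, uv.v);  // U = 0.40, V = 0.60
    return 0;
}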
Now do some simple maths: multiply the U value by 100 and we get the pixel position as a percentage.
Suppose U = 0.4 and V = 0.6. That simply means
40% of the width, and 60% of the height.
U is pointing to position = 0 + 40% of the image width
V is pointing to position = 0 + 60% of the image height
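Going the other way is just the reverse scaling; a tiny sketch, again assuming a 512 x 256 image:

#include <cstdio>

int main() {
    const float imageWidth = 512.0f, imageHeight = 256.0f;  // assumed size
    const float u = 0.4f, v = 0.6f;

    // U and V are fractions of the image size, so scaling them back up
    // gives the pixel position (40% of the width, 60% of the height).
    std::printf("pixel = (%.1f, %.1f), i.e. %.0f%% / %.0f%%\n",
                u * imageWidth, v * imageHeight, u * 100.0f, v * 100.0f);
    return 0;
}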
That is why the texture coordinate system becomes:
left = 0 (minimum width), right = 1 (maximum width)
bottom = 0 (minimum height), top = 1 (maximum height)
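To make those corners concrete, here is a rough fixed-function OpenGL sketch (it assumes a valid OpenGL 1.x/2.x context and that textureId has already been created and filled with glTexImage2D) that maps the four corners of the texture onto a quad:

#include <GL/gl.h>

void drawTexturedQuad(GLuint textureId) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);  // bottom-left  of the texture
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);  // bottom-right of the texture
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);  // top-right    of the texture
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);  // top-left     of the texture
    glEnd();
}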
Image source: http://paulyg.f2s.com/graphics/uvexp.gif
EDIT:
Now, what is UV mapping?
I'll try to explain it in layman's terms.
Take a triangle as your polygon data and a rectangular, balloon-like, non-rigid sheet as the texture. We want to cover the triangle with the desired portion of that sheet. What would we do in the real world? Of course we would stretch it around the triangle in whatever way we like; we can start from wherever we want, but the limitation is simple: we only have 3 points at which to staple that sheet. This is exactly what OpenGL does. It asks which part of the texture you want mapped onto which vertex of the polygon. We can't tell OpenGL, "Hey, put this UV in the center of the polygon!"
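In code, that "stapling" is just one texture coordinate given right before each vertex. A rough fixed-function sketch (same assumptions as the quad example above: a valid context and a bound 2D texture):

#include <GL/gl.h>

// One UV pair is "stapled" to each of the triangle's three vertices;
// OpenGL stretches the texture between them by interpolating the UVs
// across the triangle's interior.
void drawTexturedTriangle() {
    glBegin(GL_TRIANGLES);
        glTexCoord2f(0.2f, 0.1f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(0.8f, 0.1f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(0.5f, 0.9f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

The UVs (0.2, 0.1), (0.8, 0.1) and (0.5, 0.9) are arbitrary example points on the sheet; every interior pixel gets a coordinate interpolated from those three, which is why we never specify a UV "in the center" ourselves.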
That's why the more vertices a polygon has, the finer the mapping we can do.
But in actual programming we can assign almost any UV to any vertex, which would give the triangle unusual patterns. I haven't tested this :-). Try it in a 3D modelling program like Blender if you want to save time and experiment quickly with UV mapping, then export your simple mesh to the OBJ format, which you can also read in a text editor.
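If you do export from Blender, the UVs appear in the OBJ file as plain-text "vt" lines, one U V pair per line. A small sketch for peeking at them ("mesh.obj" is just a placeholder name for whatever you exported):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Print every texture coordinate ("vt u v") found in an OBJ file.
int main() {
    std::ifstream file("mesh.obj");
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream iss(line);
        std::string tag;
        float u, v;
        if (iss >> tag >> u >> v && tag == "vt")
            std::cout << "UV: " << u << ", " << v << '\n';
    }
    return 0;
}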