Perhaps if you only consider the company's turnover, but I'm talking about the number of job openings specifically for a Lua scripter. While Lua may be a good tool in your toolbox, you're probably going to need to know more languages than just Lua to land a job.
Blizzard, for example, isn't currently hiring any Lua scripters, but they are hiring programmers for other languages.
That said, if you've got Lua experience and the company you're applying to uses Lua somewhere, you're more valuable as a programmer and more likely to land the job than someone who doesn't.
yeah, absolutely
To me, niche isn't about the money they bring in but the part of the industry they're in. I think the overwhelming majority of software jobs have nothing to do with videogames. Personally, I have no interest in working in the game industry, so Lua would probably be pretty low on my list of priorities. Even looking on job websites, there are countless C#, Java, or JavaScript jobs but only a handful of Lua jobs.
...but the argument isn't whether or not you should learn Lua, but rather whether it's relevant on a resume. Nor is Lua just used in game dev, so I'm not sure why you keep bringing up that angle.
Even though I was quite happy with how my clouds turned out, they looked awful when the camera moved right underneath or above them, since the particles would start to spin. So I implemented a solution:
https://www.youtube.com/watch?v=MDY_8ni-Bo8
Right, so I guess it depends on what you're going for. If you wanna be a game dev then sure, put it on your resume, but if you want to do regular software or web APIs or something, then you may as well put BASIC on there.
If you know a language, you should put it on a resume. As simple as that.
Wanna generate road networks. Decided to go with Béziers for calculating the geometry, even though everything will be automated.
Should be fun
https://files.facepunch.com/forum/upload/195338/f29b91e0-028a-44ef-9cb8-d22f6a0b5190/2018-05-20 17-17-41.mp4
You might want to consider using circular arcs: Curved Paths
Thanks, that's a pretty cool approach!
Currently I mainly intend to use this for long roads connecting cities, but first I'll also make the Béziers 'simplify' into straight and curved pieces, since a continuously curving road doesn't make much sense for easy driving.
If I want to make them more uniform and predictable for building placement I would probably go for a more controlled system like that indeed.
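Since a Bézier road has to become drawable geometry at some point anyway, here's a minimal sketch (JavaScript, purely illustrative; the function names are made up, not from the actual project) of evaluating a cubic Bézier and flattening it into straight segments:

```javascript
// Evaluate a cubic Bézier at parameter t (0..1) using the standard
// Bernstein polynomial form. Points are plain {x, y} objects.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  const b0 = u * u * u;
  const b1 = 3 * u * u * t;
  const b2 = 3 * u * t * t;
  const b3 = t * t * t;
  return {
    x: b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
    y: b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y,
  };
}

// Flatten the curve into a polyline for generating road pieces.
// `steps` segments yields steps + 1 sample points.
function flattenBezier(p0, p1, p2, p3, steps) {
  const pts = [];
  for (let i = 0; i <= steps; i++) {
    pts.push(cubicBezier(p0, p1, p2, p3, i / steps));
  }
  return pts;
}
```

A uniform step in t doesn't give uniform arc length, which is one more reason the 'simplify into straight and arc pieces' step mentioned above makes driving-friendly roads easier.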
What? Working in a scientific field, I see tons of scattered language knowledge, but saying it's about as useful as BASIC is silly.
Besides, just listing a language on a CV isn't enough to convince me of much. I want to see how someone has used that language: they may claim they know modern C++, but if I go check code they've written and see C++98-style stuff, no usage of the STL, and other outdated idioms, then I know their CV doesn't reflect reality. If someone says they know a scripting language, that can still be of some use if they've shown they used it in a fashion relevant to our current line of work. Maybe they wrote a Python or Lua system to help generate build configuration files (really rather common, and something Python excels at imo), or maybe they implemented a simple scripting system so non-programmers can extend functionality somehow. Regardless, if you taught yourself some language and then cemented that knowledge with work I can see, I'll be convinced of your abilities - and probably of your ability to learn more on the job.
Bashing on Lua seems silly. I've just started using it, but I can already see a lot of the advantages it has: namely, it's a lot easier to integrate and embed into applications than I found Python to be, and it doesn't deal with the GIL. Not to mention how much more performant it is than Python. Sure, it doesn't have the third-party ecosystem that Python does and the standard library isn't as full-featured, but it definitely has a place. I'm looking forward to seeing what it can do for controlling parts of my spacecraft simulator - potentially even letting it define whole new component types for spacecraft. Should be neat!
Also yeah we got that big huge extension worth $$$$$ on our contract so now we'll be hiring more software developers (I hope, I really hope). I'm currently in the planning stages of the rather monolithic task of building our own rendering engine for our simulation, and for use elsewhere in the company. Turns out that engine development is really fucking hard and complex ????!?
I quite enjoy the work but christ, it is a serious undertaking alone and we have a rather hefty/difficult list of required features.
Do you think it would be too insulting if I had a wordcloud of buzzwords ("hard working", "fast learner") on my resume?
Put smart working instead of hard working to be extra edgy
"Already playing chess while the rest of the company is still playing checkers"-working
I've been working really hard on learning to use better design principles, particularly because reading about ECS and trying to implement ECS paradigms has affected a lot of my approach to code. I was already moving towards a lot of the conclusions and ideas I arrived at, but I've been pushed further lately by this work. Mostly it involves keeping objects small and thinking about how I can encapsulate smaller and smaller chunks of logic (when it makes sense, not just for the sake of making my entire codebase a pile of fucking Lego bricks). That, and "data-oriented design", just not to any extremes - I'm mostly focusing on sharing resources/memory and considering how efficiently I can rapidly iterate through things.
I'm effectively just using a transcribed copy of entt at this point, but I'm definitely going to be refining their approach based on usage. Thus far, creating a renderable entity that references a certain mesh + rendering it is as simple as:
https://gist.github.com/fuchstraumer/270ce17b9559fb90d885a749396d80b8
Starting to separate backing data from the things that reference or use it is definitely one of the biggest benefits thus far: "MeshData" holds the actual backing objects (I don't actually initialize the Vulkan memory or transfer it to the device here, but w/e), and the components just refer to that. Editing some of the available values in those components can change which subsections of the MeshData buffers you actually draw, though, and you can also change the parameters of the IndexedDrawCommand to modify the draw slightly. Things are getting more complex, though, as I figure out how to do things like object models with multiple parts and multiple materials. I'm not sure of the best way to store that data, and then I also want to add a MultidrawIndirectCommandComponent, which will somehow have to cleanly interface with multi-part and multi-material objects of different types.
Regardless, this approach feels much more maintainable, flexible, and easy to use than how I used to do things. Previously I would have had a single "box" component that stored the mesh data and the equivalents of the index and vertex buffer components, had a set draw command method that wasn't configurable, and would've been much more complex in implementation and generally a pain to individually modify and customize. Plus it would've stored its own graphics pipeline and descriptor resources, so it really was just a massively complex object that didn't encourage efficient resource usage at all.
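The backing-data/component split described above can be sketched language-agnostically like this (JavaScript just for brevity here; the real thing is C++ with entt, and all the names below are illustrative, not the actual code):

```javascript
// Shared backing store: owns the actual vertex/index arrays exactly once.
class MeshData {
  constructor(vertices, indices) {
    this.vertices = vertices;
    this.indices = indices;
  }
}

// Components only *reference* the backing data, plus the parameters
// of their particular draw: which index range, how many instances.
function makeIndexedDrawCommand(mesh, firstIndex, indexCount) {
  return { mesh, firstIndex, indexCount, instanceCount: 1 };
}

// Two entities can draw different subsections of one shared mesh,
// with no duplication of the heavyweight data.
const mesh = new MeshData(new Float32Array(24), [0, 1, 2, 2, 3, 0]);
const fullQuad = makeIndexedDrawCommand(mesh, 0, 6);
const oneTri = makeIndexedDrawCommand(mesh, 0, 3);
```

The point is just that tweaking a component's draw parameters never touches the shared buffers, which is what makes the old "one big box component" approach feel so heavy by comparison.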
lua opengl 4.3 stuff

local Xx, Yx, W, H = X or 0, Y or 0, 100, 100

-- one vec2 position attribute, bound at VAO_POSITION
local format = {
    {
        index = VAO_POSITION,
        type = "vec2"
    }
}

-- describe the same layout on both the vertex descriptor and the buffer
local vao = M.createVertexDescriptor():bind()
vao:describe(format)

local vbo = M.createVertexBuffer():bind()
vbo:describe(format)

-- push the four corners of a quad, then upload
vbo:start()
vbo:push({Xx, Yx})
vbo:push({Xx + W, Yx})
vbo:push({Xx, Yx + H})
vbo:push({Xx + W, Yx + H})
vbo:commit()

vao:assign(VAO_POSITION, vbo)
https://i.imgur.com/yIIcd1g.gifv
Procedurally generated tactical RPG I've been working on for too long now.
The custom engine and the vertical stack of the game are finished; just adding content now.
I've been working on a funny little marble game as my love note to Super Monkey Ball for a few months now. I just finished setting up a new stage intro camera, and I'm really happy with how it turned out!
https://files.catbox.moe/l8w9dg.mp4
You can speed it up by holding a certain button/key bind, since it's normally very slow (to allow a new player some time to observe the stage before they play it)
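For the curious, the fast-forward itself can be as simple as scaling the camera's path time while the skip input is held - a tiny sketch, with made-up speed values rather than the ones from the actual game:

```javascript
// Advance a normalized intro-camera time t (0..1) by dt seconds.
// Holding the skip input just swaps in a bigger multiplier; both
// speeds here are assumed values, purely for illustration.
function advanceIntroCamera(t, dt, skipHeld) {
  const speed = skipHeld ? 4.0 : 0.25;
  return Math.min(1, t + dt * speed);
}
```

Clamping to 1 means releasing and re-holding the button can never overshoot the end of the intro path.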
I've heard a lot about bias in different kinds of algorithms, particularly facial recognition:
> How white engineers built racist code – and why it's dangerous f..
> Google Photos Tags Two African
Basically, a lot of activists allege that facial recognition algorithms are being trained primarily on white or Asian faces, which makes it hard for the algorithms to recognize darker skin.
What I'm wondering (if anyone here knows anything about computer vision) is: is the problem that engineers are using biased input sets, or is it that darker-skinned faces are harder to detect because of their pigmentation (and therefore need specialized algorithms to handle their cases)?
I would find it a little hard to believe that the issue is the input set, since I don't think Facebook is paying engineers to sit there and hand-train their algorithms - more likely they're using their users tagging people in photos and learning from that data.
I think an easy answer is this: can you somewhat reliably distinguish and recognise black people in modern digital photographs and videos, and are they usually exposed well enough to look decent?
The answer is probably "yes", which means the problem isn't really with the data itself.
The algorithm just sucks, was built on wrong assumptions, didn't get enough diverse training data, or got biased training data.
That last point is especially a problem if you train anything on crowdsourced data (which Google seems to be particularly fond of).
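One concrete way to see the crowdsourced-data problem: if one group dominates the training and evaluation set, the aggregate accuracy number hides a total failure on the minority group. Toy numbers, purely illustrative:

```javascript
// A model that misclassifies *every* sample from group B still
// reports 90% overall accuracy when group A makes up 90% of the
// data. All numbers here are made up for illustration.
const groupA = 90, groupB = 10;    // samples per group
const correctA = 90, correctB = 0; // the model fails on all of group B
const accuracy = (correctA + correctB) / (groupA + groupB);
```

So a team optimizing (or even just validating) against an aggregate metric can ship a model that looks fine on paper and is useless for an underrepresented group.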
But wouldn't crowdsourced data be just as diverse as the population? Unless black people (globally, not just in the US) have worse access to internet/social media.
So it seems more likely that the algorithm itself is built on worse assumptions.
I think one of the issues in all this is that current machine learning tech really doesn't work like most biological systems, though.
It's trained to produce results that make sense to us, but what happens in-between is often hardly known. That's why these systems are also often very vulnerable to adversarial input.
Published the game from my senior project class.
Kickshot on Steam
https://www.youtube.com/watch?v=SWDMo08dOrA
It's free so check it out and leave a review if you want.
Refactoring my code to combine OOP with RESTful naming schemes.
So for example, before I would have functions like:
function CreateTab(tab) {
    $.post("/api/tab", tab).then(() => {
        // etc.
    });
}

function DeleteTab(tab) {
    $.ajax({
        url: "/api/tab",
        method: "DELETE",
        data: tab
    }).then(() => {
        // etc.
    });
}

function UpdateTab(tab) {
    // ...
}

// etc.
I now have a "tab" class that has methods for "POST", "GET", "PUT", "DELETE", and "PATCH".
and my angular bindings do like
<div ng-repeat="tab in Tabs">
    <button ng-click="tab.post()"></button>
</div>
This helps organize my code a lot more
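For illustration, the resource class might look roughly like this (a sketch, not the actual code: the `http` function is injected so it can be `$.ajax` in the real app or a stub in tests, and the endpoint and field names are assumptions):

```javascript
// One class per REST resource; each method maps to an HTTP verb.
class Tab {
  constructor(data, http) {
    Object.assign(this, data); // e.g. { id, title, ... }
    this._http = http;         // $.ajax in the real app
  }
  _request(method) {
    return this._http({
      method,
      // collection URL for new tabs, item URL once an id exists
      url: "/api/tab" + (this.id != null ? "/" + this.id : ""),
      data: this,
    });
  }
  post()   { return this._request("POST"); }
  get()    { return this._request("GET"); }
  put()    { return this._request("PUT"); }
  delete() { return this._request("DELETE"); }
  patch()  { return this._request("PATCH"); }
}
```

Injecting the transport also means the Angular bindings stay one-liners while the verb/URL logic lives in exactly one place.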
I've been reading up on STM32/mbed development and came across this living hellspawn:
JavaScript on mbed | Mbed
Fucking javascript on an embedded platform
But the best part is this:
> The code within the JavaScript VM runs about a hundred times slower than equivalent C++ code. That sounds like a lot, but everything underneath your application (like drivers, network stacks, crypto) is still C++, so the actual impact will be less noticeable.
"That sounds like a lot, ~but~"
I don't think I've ever seen anything more horrifying that's programming-related.
You laugh, but Fitbit's frontend is JS and the watches don't pack much power either.
It's the same thing as having MicroPython on an ESP8266. Those platforms are inherently unsuited to running interpreted languages, yet the loss of speed is easily offset by the development speed and comfort you gain.
Except python is obviously the superior language.
something something elua superiority