• What are you working on? October 2015
Same bug here on a Galaxy Note 3, Android 5.0.
Just make a Google Play beta or something so you can track errors more easily.
[QUOTE=Giraffen93;48987860][url=http://unifrog.braxnet.org/]what a coincidence, i recently relaunched it :v:[/url][/QUOTE] Did Geel tell you to back off yet?
[QUOTE=proboardslol;48995102]Did Geel tell you to back off yet?[/QUOTE] did i miss something?
[url]https://facepunch.com/showthread.php?t=1473935&p=48193023&viewfull=1#post48193023[/url]
[QUOTE=Giraffen93;48995116]did i miss something?[/QUOTE] As far as I know, someone showed off a project similar to something he did, and that ruffled his feathers quite a bit.
I took a break from my game project because I don't know how to proceed with my grid-based level collision detection yet. Each object can exist in 4 cells simultaneously, and my array-writing code is extremely slow/broken because JavaScript doesn't have true multidimensional arrays. So now I'm playing with this Chinese Arduino clone I bought off eBay. :v: It came in an anti-static bag, and apart from the spotty soldering it looks pretty nice and feels very sturdy.
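For what it's worth, a flat array indexed as `row * cols + col` can stand in for the missing multidimensional array. Here's a minimal sketch of that grid idea (names like `Grid` and `cellsFor` are made up for illustration, not from the poster's actual code); a box that straddles a cell corner lands in four cells:

```javascript
// Uniform collision grid backed by a flat array, since JS has no true
// 2D arrays: cell (cx, cy) lives at index cy * cols + cx.
class Grid {
  constructor(cols, rows, cellSize) {
    this.cols = cols;
    this.rows = rows;
    this.cellSize = cellSize;
    this.cells = new Array(cols * rows).fill(null).map(() => []);
  }
  // An axis-aligned box can overlap up to 4 cells; collect their indices.
  cellsFor(x, y, w, h) {
    const x0 = Math.max(0, Math.floor(x / this.cellSize));
    const y0 = Math.max(0, Math.floor(y / this.cellSize));
    const x1 = Math.min(this.cols - 1, Math.floor((x + w) / this.cellSize));
    const y1 = Math.min(this.rows - 1, Math.floor((y + h) / this.cellSize));
    const indices = [];
    for (let cy = y0; cy <= y1; cy++)
      for (let cx = x0; cx <= x1; cx++)
        indices.push(cy * this.cols + cx); // flat index replaces grid[cy][cx]
    return indices;
  }
  insert(obj) {
    for (const i of this.cellsFor(obj.x, obj.y, obj.w, obj.h))
      this.cells[i].push(obj);
  }
}

const grid = new Grid(4, 4, 32);
// A 16x16 box at (24, 24) straddles the corner of four 32px cells.
grid.insert({ x: 24, y: 24, w: 16, h: 16 });
```

Writes stay O(1) per overlapped cell, and broad-phase queries only have to scan the handful of cells an object touches.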
I'm not gonna lie, trying to learn how to write guidance code and using the concepts of my Calc III class combined with diff eq actually helped me on my last two tests. I started writing one of the problems on the calc test today like it was part of my code, because it was very similar to something I needed to find myself (path length from a parametrized vector equation). [editline]27th October 2015[/editline] So in case any of you do what I did, or what tons of you seem to do anyway: just keep coding, don't do practice problems, what could possibly go wrong?
I've been reading up on Fourier transforms and thinking about applications within AI design. It seems like the frequency-amplitude representation (I believe that's it) somewhat mirrors the structure of connection weights in neural networks. Also, convolution is a common function created from the structure of nodes (like the network that DeepDream uses). So my first theory is that one could create a more effective neural network by exploiting the fact that neural networks can perform convolution through their functioning, and that the state of all connections for a node is similar to an instantaneous frequency. The first idea that pops out from this is that you could use a (possibly) independent set of nodes to convolve two other layers. Who knows what that could accomplish or if it's possible, but it seems like it could yield applications in self-managing neural networks (to some extent).

The other potential theory is that the state of any region of a brain (from a single neuron to the whole thing) can be expressed as a wave, in either the amplitude or frequency domain, and stimuli can affect the state through either of these. This is kind of going off of the concept of a holographic brain, where the entire state of the brain is present in every constituent part (e.g. half the brain contains the same information, but at half the resolution).

I haven't had time to develop these at all (drowning in midterms), but it's a breath of fresh air for this project, and it's helping me take time into account (all previous versions of my AI were designed around simulating thinking, but only at extreme timeframes: a few seconds or forever). So this feels really appropriate for filling that gap.
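For anyone curious what the convolution in question looks like stripped of the neural-network framing, here's plain 1-D discrete convolution; it's the same operation a convolutional layer applies to its input, just with hand-picked rather than learned kernel values:

```javascript
// 1-D discrete convolution: out[n] = sum over k of signal[k] * kernel[n - k].
// A convolutional NN layer applies exactly this, with learned kernel weights.
function convolve(signal, kernel) {
  const out = new Array(signal.length + kernel.length - 1).fill(0);
  for (let i = 0; i < signal.length; i++)
    for (let j = 0; j < kernel.length; j++)
      out[i + j] += signal[i] * kernel[j];
  return out;
}

// A [1, 1, 1] kernel acts as a sliding sum (an unnormalized box blur).
convolve([1, 2, 3], [1, 1, 1]); // -> [1, 3, 6, 5, 3]
```

The Fourier connection is that this loop is pointwise multiplication in the frequency domain, which is why FFT-based convolution is cheaper for long signals.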
[QUOTE=ZenX2;48995561]I've been reading up on Fourier transforms and thinking about applications within AI design. It seems like the frequency-amplitude (I believe that's it) mirrors the structure of connection weights in neural networks somewhat. Also, convolution is a common function created from the structure of nodes (like the network that deepdream uses). So my first theory is that one could create a more effective neural network by utilizing the fact that neural networks can perform convolution through their functioning, and that the state of all connections for a node is similar to an instantaneous frequency. The first idea that pops out from this is that you could use a (possibly) independent set of nodes to convolute two other layers. Who knows what that could accomplish or if it's possible, but it seems like it could yield applications in self-managing neural networks (to some extent) The other potential theory is that the state of any region of a brain (from a single neuron to the whole thing) can be expressed as a wave, in either the amplitude or frequency domain. And stimuli can affect the state through either of these. This is kind of going off of the concept of a holographic brain, where the entire state of the brain is present in every constituent part (eg half the brain contains the same information, but at half the resolution) I haven't had time to develop these at all (drowing in midterms), but it's a breath of fresh air for this project, and it's helping me take time into account (all previous versions of my AI were designed around simulating thinking, but only at extreme timeframes (a few seconds or forever). So this feels really appropriate for filling that gap.[/QUOTE] thats really interesting! Self managing neural networks could also be implemented into a network of microprocessors that are able of cooperation, maybe. 
I know there are a few boards that do this already, and I've always found the idea fascinating: a small cluster of microprocessors capable of repairing and modifying its fellows, or cutting off power and data if shit goes downhill, is really neat.
[QUOTE=ZenX2;48995561]I've been reading up on Fourier transforms and thinking about applications within AI design. [...][/QUOTE] I hope someday to even remotely grasp how neural networks function on this scale. I want to get into AI, but shit's crazy.
[QUOTE=Berkin;48995208]As far as I know, someone showed off a project similar to something he did, and that ruffled his feathers quite a bit.[/QUOTE] That was also obviously a joke.
[QUOTE=ZenX2;48995561]I've been reading up on Fourier transforms and thinking about applications within AI design. [...][/QUOTE] Neural networks seem like such a finicky thing to duplicate. I haven't really researched how they'd be done in computer science, but the brain and how it works is the area I study most at university.
If one were to emulate a neural network, there are a lot of little things to take into account, such as nodes changing the strength of their connection with another node based on the past history of connection frequency and on how closely adjacent nodes fire together in time, as well as things like a node being exhaustible and how clusters can work together to avoid such problems.
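The "strengthen with correlated firing" behavior described here is essentially Hebbian plasticity. A toy sketch of the update rule (function name and rate/decay constants are illustrative, not from any particular library):

```javascript
// Minimal Hebbian update: a connection strengthens when its pre- and
// post-synaptic nodes are active together (delta = rate * pre * post),
// with a small decay term so weights don't grow without bound.
function hebbianStep(weight, pre, post, rate = 0.1, decay = 0.01) {
  return weight + rate * pre * post - decay * weight;
}

let w = 0.5;
// Correlated firing (both nodes active) strengthens the connection...
w = hebbianStep(w, 1, 1); // ~0.595
// ...while uncorrelated activity lets it slowly decay.
w = hebbianStep(w, 1, 0); // ~0.589
```

The exhaustion and cluster behavior mentioned above would need extra per-node state (a refractory counter, for instance), which this sketch deliberately leaves out.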
[QUOTE=Karmah;48996214]Neural networks seems like such a finicky thing to duplicate. [...][/QUOTE] The changing strength based on past connection history is what the theory about interactions across the brain in both amplitude and frequency domains is aimed at handling. I'm really curious about the relationship between them: something localized in time in amplitude is spread over a wide range of frequencies, whereas something in a localized frequency band is spread out over time in amplitude. The relation of that to learning really stands out, as a stimulus presented once can have a wide but non-lasting effect, whereas repeated stimuli (training, etc.) over time seem to localize the effect (a specific input gives an output in a more specific range).

The Hebbian learning stuff could also have something to do with amplitude/frequency, but that's tied to the relationship of network structure to functioning, which I'm just starting to explore. Clusters sound more related to ensembles for overcoming the whole bias-variance issue, and also to the concept of small-world networks for redundancy and resiliency. I don't understand those as well yet because I've never done much with statistics (I'll probably be taking courses in that soon), although I have a decent idea of how scale-free networks work.
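The time/frequency localization trade-off described above is easy to demonstrate with a naive DFT (written out longhand here for illustration; a real implementation would use an FFT library):

```javascript
// Naive DFT magnitude spectrum, O(N^2), just to show the trade-off:
// a single impulse (localized in time) spreads evenly across all
// frequency bins, while a repeating signal concentrates its energy.
function dftMagnitudes(signal) {
  const N = signal.length, mags = [];
  for (let k = 0; k < N; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      re += signal[n] * Math.cos((-2 * Math.PI * k * n) / N);
      im += signal[n] * Math.sin((-2 * Math.PI * k * n) / N);
    }
    mags.push(Math.hypot(re, im));
  }
  return mags;
}

const impulse = [1, 0, 0, 0, 0, 0, 0, 0];      // localized in time
const periodic = [1, -1, 1, -1, 1, -1, 1, -1]; // localized in frequency
dftMagnitudes(impulse);  // every bin has magnitude 1 (flat spectrum)
dftMagnitudes(periodic); // all energy lands in bin 4, the rest ~0
```

That flat-spectrum impulse versus single-bin periodic signal is exactly the "one-off stimulus is broad, repeated stimulus is localized" intuition in the post.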
[QUOTE=ZenX2;48996533]The changing strength based on past connection history is what the theory about interactions across the brain in both amplitude and frequency domains is aimed at handling. [...][/QUOTE] Oh, this is so cool. Laplace and Fourier transforms seem really neat, but we're ages from learning them in my diff eq class. I have half a mind to just read the book and work my way to that point, but I'd get nothing else done for my other classes. The implementations in hardware for this could be huge, and the applications are near endless. I mean, it depends how accurately and powerfully you model this, but even light modeling of connection strength changing with past activity could allow for neat networking. Part of my idea for my guidance computer project was to network a bunch of power-efficient MCUs together and let them modify, rewrite, reset, and control the resources of the others.
Two applications I think of right away are both DoD or military related, I guess: comms satellites and body armor. In the case of satellites, low power usage is a must. If the satellite is idling, it may as well shut down the ancillary cores (its brothers, so to speak); if it needs to transmit, it can wake one spare core, and if it needs to do heavy encryption, maybe wake the whole network. Body armor is similar: low power usage, but the ability to scale up as needed. I was even thinking in terms of exoskeletons, having the whole network learn how best to configure its computational power based on what is currently going on. Or maybe it learns a more efficient method of calculating the needed control impulses, updates its configuration, propagates it, and then sends it out for other nearby clusters to look at and possibly use. If you're just walking slowly, the amount of precise computation needed to make the suit follow you is lower than when you're moving fast or in combat, since the movement required by the suit is lower per unit time. Encryption works too. If you're sitting around, just shut all the spare cores down. Spare cores help with the whole combat thing too: if one or two are damaged, the cluster can elect to shut them down or try to work around the malfunction if possible. This was all based on finding [URL="http://www.liquidware.com/shop/show/IXM/Illuminato+X+Machina"]this hardware[/URL]. Watching a firmware update propagate is neat: [video=youtube;ZBFoFYhC9B4]https://www.youtube.com/watch?v=ZBFoFYhC9B4[/video] It also led to someone making this joke on the article for one of the first demos: [QUOTE]Independent decision making modules
Cell# 3712: Hey guys, have you noticed that #1914 never seems to accept requests?
Cell# 141: Well, he does sometimes reject.
Cell# 4439: I don't route to him very much anyway.
Cell# 1142: He rejected the last three of mine. I kind of agree.
Cell# 3712: So what should we do about it?
Cell# 141: Can't we just fry him? There's plenty of us anyway.
Cell# 3712: That's a bit harsh.
Cell# 4439: Ok, I got the records here showing that he rejected 90% of requests the last week but allocated two hundred percent of average power to himself.
Cell# 3712: That motherfucker, let's do it then.
Cell# 1142: I don't really want to fry him, but I don't mind that much if you do.
Cell# 141: Ok, gather up all your spare power, STAT![/QUOTE] I have no idea what I'm talking about. I just go big-picture really fast, because my mind runs at a million miles a minute and has a field trip thinking of how to integrate these things or innovations, or how to combine them with other things I've heard of recently. It's wretched. I'm perpetually the ideas guy, and I hate it. No one hires "idea guys".
[QUOTE=geel9;48996173]That was also obviously a joke.[/QUOTE] but I was just pretending to be retarded!
[QUOTE=/dev/sda1;48997134]but I was just pretending to be retarded![/QUOTE] Yeah, it's called sarcasm.
[QUOTE=geel9;48997184]Yeah, it's called sarcasm.[/QUOTE] this is why chui will always be better than geeui
I made a pretty sick CLI for a side-project using Devcom for command processing. [vid]https://zippy.gfycat.com/ForsakenIncompatibleInganue.webm[/vid]
[media]https://www.youtube.com/watch?v=XCmOQOdr3cE[/media] Added a sweet inventory system for the server: items added to users are given a unique key and can be saved and reloaded back onto a player (with equipped/wear stats attached). The bugged weapon is because clients updating their server inventory (like finding an item on the floor and telling the server what item is equipped) isn't implemented yet, so it tries to tell whether the player is "alerted" from the animation they broadcast. The slow turning is my laptop trackpad :v:
Can you please show us what happens with VATS and when a player gets a kill?
[QUOTE=gonzalolog;48998104]Can you please show us what happens on VATS and when player kills?[/QUOTE] VATS gets disabled when you load in; it wouldn't work for an online game since it freezes time. I'll try to implement the net messages for player death soon, but currently what you're seeing is 50% of how all combat works in Fallout. The movement, kickback, noise, and reload sequences are purely derived from the animation file of the gun firing. The actual projectile being launched is separate, so in that video the remote players looked like they were firing their weapons, but they don't actually launch a tracer/projectile. It's why you can call FireWeapon on your holstered weapon and it'll shoot a bullet from your back towards your kneecaps.
[QUOTE=Silentfood;48998087]Added a sweet inventory system for the server [...][/QUOTE] Wow, I hadn't even heard of this, impressive
Between my lectures, labs, and lab reports, I haven't had a lot of time to really work on my port lately. All I could do tonight was fix my projection issue: [t]http://i.imgur.com/isM4xuV.png[/t] I've previously implemented Assimp model loading in both the editor and the standalone engine, but I haven't ported over my own mesh importing, which is still needed for things I make in the level editor, such as walls and terrain. Once that's done, the port should be complete and new features can get worked on. I also forgot: yesterday I managed to make a model manager system similar to how I handle textures. Whenever a model is requested, the manager checks its records to see if it has already made one, and if so passes on all the appropriate info. And since I'm using VBOs and VAOs, the manager doesn't need to constantly lug whole meshes around, only model IDs, as everything else gets uploaded straight to the GPU.
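That model-manager pattern is essentially a keyed cache. A rough sketch, where `ModelManager` and `uploadToGPU` are hypothetical stand-ins for the real class and the VBO/VAO upload, not the poster's actual code:

```javascript
// Keyed cache for model resources: the heavy mesh data is uploaded to
// the GPU once, and every later request for the same path gets back
// the same lightweight record (id + metadata), never the whole mesh.
class ModelManager {
  constructor(uploadToGPU) {
    this.upload = uploadToGPU;
    this.records = new Map(); // path -> { id, ... }
  }
  request(path) {
    if (!this.records.has(path)) {
      this.records.set(path, this.upload(path)); // upload only on first use
    }
    return this.records.get(path); // later callers share the same id
  }
}

let uploads = 0;
const mgr = new ModelManager(path => ({ id: ++uploads, path }));
const a = mgr.request("models/crate.obj");
const b = mgr.request("models/crate.obj"); // cache hit, no second upload
```

The same shape works for textures, shaders, and sounds, which is presumably why it mirrors the texture handling mentioned above.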
[QUOTE=Silentfood;48998087]Added a sweet inventory system for the server [...][/QUOTE] I've been waiting for an MP version of NV for a long time; at first I thought this was some new AI that would constantly dodge your aim in combat.
[QUOTE=ZenX2;48995561]I've been reading up on Fourier transforms and thinking about applications within AI design. It seems like the frequency-amplitude (I believe that's it) mirrors the structure of connection weights in neural networks somewhat. Also, convolution is a common function created from the structure of nodes (like the network that deepdream uses). So my first theory is that one could create a more effective neural network by utilizing the fact that neural networks can perform convolution through their functioning, and that the state of all connections for a node is similar to an instantaneous frequency. The first idea that pops out from this is that you could use a (possibly) independent set of nodes to convolute two other layers. Who knows what that could accomplish or if it's possible, but it seems like it could yield applications in self-managing neural networks (to some extent)[/QUOTE] Flies use convolution like this in their visual system to quickly extract [URL="http://www.sciencedirect.com/science/article/pii/0893608090900012"]meaningful[/URL] [URL="http://www.sciencedirect.com/science/article/pii/S0925231214015665"]information[/URL]. That collision warning circuit also exists as a "fixed implementation" in other animals, but it probably works differently. It's much more precise in birds than in humans, regarding the approach angle. [QUOTE]The other potential theory is that the state of any region of a brain (from a single neuron to the whole thing) can be expressed as a wave, in either the amplitude or frequency domain. And stimuli can affect the state through either of these.
This is kind of going off of the concept of a holographic brain, where the entire state of the brain is present in every constituent part (eg half the brain contains the same information, but at half the resolution)[/QUOTE] Personally, I don't subscribe to anything close to holographic brain theories: while the fields of course radiate outwards through their propagation equations, (a) they are dampened so much by surrounding matter that they are normally too weak to meaningfully influence anything, and (b) the number of states in which a single part of a brain can store information has a hard limit from quantum physics and a soft-ish but much lower limit from the amount of heat present. In that regard, it seems much more efficient to process any kind of information locally instead. That said, it is indeed possible to represent a brain activation state as a wave, since [I]everything[/I] can be expressed as a wave. It would most likely perform absolutely terribly on standard hardware, but if you use a circuit that does calculations on analog signals through interference, it may work out. You would most likely need a quantum computer to arrive at a reasonable information density, though. [QUOTE]I haven't had time to develop these at all (drowing in midterms), but it's a breath of fresh air for this project, and it's helping me take time into account (all previous versions of my AI were designed around simulating thinking, but only at extreme timeframes (a few seconds or forever). So this feels really appropriate for filling that gap.[/QUOTE] One thing I'd like to add is that most neural networks aren't homogeneous in their function. Humans' visual recognition is normally predetermined to work in roughly the same way, which is why common optical illusions, or those images that persistently throw off your pattern colour correction, exist. (At least the latter seems to vary quite a bit, though.
For me the effect faded quickly and disappeared completely in 2-3 days, while some people get stuck with it for months on end.) If you want a neural network to solve a task efficiently, it's most likely a very good idea to mix fixed and malleable parts, and to stack networks that are trained independently to complete different stages of the calculation.
I spent about 30 minutes cobbling together my latest creation: an app I call RiceCooker. [t]http://i.imgur.com/0Uw6Dfi.png[/t][t]http://i.imgur.com/LtsfIT7.jpg[/t] I always see people "ricing" their phones, and some of the people who are really into it go as far as to Photoshop their screenshots into renders of phones and add other effects to make it look really fancy. So I took a quick break tonight and came up with an app that does it for you. Right now it only has a Nexus 5 render available, and it automatically pulls your phone's wallpaper and runs a cheap Gaussian blur (downsample, upsample.. downsample, upsample.. downsample.. etc.) on it for the background, but I think it'd be cool to let users choose from a bunch of popular phone renders and pick background images/tints, phone offsets within the frame, phone scale, maybe some cheap lighting effects like screen shine too. Might also be useful for Android developers who want nice promo photos for their Play Store listing but don't want to go through the trouble of Photoshopping one themselves. Edit: Oops, I guess this already exists. Welp!
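The downsample/upsample trick approximates a Gaussian blur because every resample averages neighbouring samples. A 1-D toy version to show the idea (illustrative only; the real thing works on 2D bitmaps, typically via the GPU):

```javascript
// Halving a signal averages adjacent samples; stretching it back out and
// repeating the round trip diffuses values like a cheap Gaussian blur.
function downsample(a) {
  const out = [];
  for (let i = 0; i + 1 < a.length; i += 2) out.push((a[i] + a[i + 1]) / 2);
  return out;
}
function upsample(a) {
  const out = [];
  for (const v of a) out.push(v, v); // nearest-neighbour stretch
  return out;
}
function cheapBlur(a, passes = 2) {
  for (let i = 0; i < passes; i++) a = upsample(downsample(a));
  return a;
}

// A hard edge gets smeared toward the local average:
cheapBlur([0, 0, 0, 8, 8, 8, 8, 8], 1); // -> [0, 0, 4, 4, 8, 8, 8, 8]
```

More passes (or deeper down/up pyramids) widen the effective blur radius, which is why the app chains several round trips.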
[QUOTE=srobins;48998710] I always see people "ricing" their phones [/QUOTE] when you say that, i picture people putting giant rear spoilers, tinted windows, and enlarged exhausts on their phones
[media]https://youtu.be/PjlLKihSFSM[/media]
[QUOTE=Map in a box;48999050][media]https://youtu.be/PjlLKihSFSM[/media][/QUOTE] What are you using to display this?