• What are you working on?
How to make everyone happy: [code] logInfo("%s master race.\n", glGetString(GL_VENDOR)); [/code] [img]http://puu.sh/ngwpv/2c631c7c43.png[/img]
[media]https://www.youtube.com/watch?v=dfwCsZ1HHug[/media] Roommate and I made this for Global Game Jam 2016.
[QUOTE=geel9;49785014]I'm going fucking insane trying to figure out what the hell is going on here. It seems that, sometimes, when fetching an inventory from the Steam WebAPI, we're getting the wrong user's inventory. I can't reproduce it at-will, so my debugging is basically waiting for the bots to message me on Steam that something fucked up. I've narrowed it down to the fuckup happening sometime after receiving the json response from the WebAPI, but before the Inventory is returned. My top theories right now are that Steam itself is fucking up BAD (unlikely) or there's some really, really, really weird memory corruption shit going on with mono; something to do with fucked up multithreading perhaps.[/QUOTE] Mono's probably fine, could be steam. Maybe make the error detection more downstream so you can actually log the response to a file, and inspect it yourself? Also, completely unrelated: [url]http://dankaminsky.com/2016/02/20/skeleton/[/url] Whoops!
[QUOTE=JohnnyOnFlame;49783511]I'm still spamming water- [img]http://i.imgur.com/vVsoUYy.gif[/img][/QUOTE] JohnnyFlame, can you post the code for this? Or a tutorial? I'd really like to know how you make stuff like this.
I like how easy refraction is in glsl [img]http://i.imgur.com/FgW1tjH.gif[/img]
[QUOTE=Cold;49783634]That just exempts you from laws against reverse engineering, DRM circumvention is still explicitly illegal by Copyright Law[/QUOTE] Sure, but you normally don't have to circumvent any DRM to write or run a custom server. (Clients may have to do it, though.)
[QUOTE=Tamschi;49782997] That might be why it just flickers wildly for me at first and then creates some kind of fine moving pattern. (Firefox, GTX 580)[/QUOTE] I just tested it in Firefox with ANGLE disabled and I get a similar sort of bug. It seems like it might be caused by an inconsistency between half-pixel offsets in DirectX (ANGLE) and OpenGL. Can you tell me if this looks right for you? (might be flipped due to the mouse starting at 0,0) [url]https://www.shadertoy.com/view/MdK3zd[/url] [t]http://i.imgur.com/jYPPaWJ.png[/t] Edit: I added a temporary fix so the original shader works on Firefox with ANGLE disabled (on my end at least): [url]https://www.shadertoy.com/view/XsK3Rd[/url]
[img]http://i.imgur.com/GdLecg9.png[/img] I found the best way to destroy a lua state :v
look at this pleb doesn't even use luajit
[QUOTE=Proclivitas;49783726]I do believe .Select isn't going to execute because it's just storing the query you've created. When you call .Count it is actually going to enumerate over the query, hence doing things. [B]Tamschi is probably going to write you a well-educated response soon though.[/B] I think it's called deferred execution. Edit: This was a good refresher I just read through, [url]http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx[/url][/QUOTE][emphasis mine] Yes :v: I'm going to put in a little extra effort since you seemingly have high expectations though. [QUOTE=Darwin226;49783533].NET Tasks are so disappointing. I have no idea how some of these decisions make sense. For example, why WHY does an async method that's not called with await ever execute immediately? Wouldn't it be much better/clearer/simpler to have a guarantee that invoking a method and getting a Task<T> result is always a pure action? So you don't have unintuitive behavior such as [code]var tasks = new []{1, 2, 3}.Select(functionTurningIntsToTasks);[/code] not doing anything but then [code]tasks.Count()[/code] suddenly doing stuff?[/QUOTE] As Proclivitas already said, LINQ is (almost) fully deferred. If a method (partially) evaluates the query, that's usually explicitly stated in the documentation. 
([URL="https://msdn.microsoft.com/en-US/library/bb338038%28v=vs.110%29.aspx"][I].Count()[/I] in particular does this conditionally.[/URL]) Since I'll need it to explain my reply to your second post about this, here's what the [I]Select[/I] method looks like (roughly): [code]static class DeferredAndAsync
{
    public static IEnumerable<TResult> Select<TItem, TResult>(this IEnumerable<TItem> enumerable, Func<TItem, TResult> selector)
    {
        foreach (var item in enumerable)
        {
            yield return selector(item);
        }
    }
}[/code] This compiles to [code]// class DeferredAndAsync [IteratorStateMachine(typeof(DeferredAndAsync.<Select>d__0<, >))] public static IEnumerable<TResult> Select<TItem, TResult>(this IEnumerable<TItem> enumerable, Func<TItem, TResult> selector) { DeferredAndAsync.<Select>d__0<TItem, TResult> expr_07 = new DeferredAndAsync.<Select>d__0<TItem, TResult>(-2); expr_07.<>3__enumerable = enumerable; expr_07.<>3__selector = selector; return expr_07; } [CompilerGenerated] private sealed class <Select>d__0<TItem, TResult> : IEnumerable<TResult>, IEnumerable, IEnumerator<TResult>, IDisposable, IEnumerator { private int <>1__state; private TResult <>2__current; private int <>l__initialThreadId; private IEnumerable<TItem> enumerable; public IEnumerable<TItem> <>3__enumerable; private Func<TItem, TResult> selector; public Func<TItem, TResult> <>3__selector; private IEnumerator<TItem> <>7__wrap1; TResult IEnumerator<TResult>.Current { [DebuggerHidden] get { return this.<>2__current; } } object IEnumerator.Current { [DebuggerHidden] get { return this.<>2__current; } } [DebuggerHidden] public <Select>d__0(int <>1__state) { this.<>1__state = <>1__state; this.<>l__initialThreadId = Environment.CurrentManagedThreadId; } [DebuggerHidden] void IDisposable.Dispose() { int num = this.<>1__state; if (num == -3 || num == 1) { try { } finally { this.<>m__Finally1(); } } } bool IEnumerator.MoveNext() { bool result; try { int num = this.<>1__state; if (num != 0) { if (num != 1) { result = false; 
return result; } this.<>1__state = -3; } else { this.<>1__state = -1; this.<>7__wrap1 = this.enumerable.GetEnumerator(); this.<>1__state = -3; } if (!this.<>7__wrap1.MoveNext()) { this.<>m__Finally1(); this.<>7__wrap1 = null; result = false; } else { TItem current = this.<>7__wrap1.Current; this.<>2__current = this.selector(current); this.<>1__state = 1; result = true; } } catch { this.System.IDisposable.Dispose(); throw; } return result; } private void <>m__Finally1() { this.<>1__state = -1; if (this.<>7__wrap1 != null) { this.<>7__wrap1.Dispose(); } } [DebuggerHidden] void IEnumerator.Reset() { throw new NotSupportedException(); } [DebuggerHidden] IEnumerator<TResult> IEnumerable<TResult>.GetEnumerator() { DeferredAndAsync.<Select>d__0<TItem, TResult> <Select>d__; if (this.<>1__state == -2 && this.<>l__initialThreadId == Environment.CurrentManagedThreadId) { this.<>1__state = 0; <Select>d__ = this; } else { <Select>d__ = new DeferredAndAsync.<Select>d__0<TItem, TResult>(0); } <Select>d__.enumerable = this.<>3__enumerable; <Select>d__.selector = this.<>3__selector; return <Select>d__; } [DebuggerHidden] IEnumerator IEnumerable.GetEnumerator() { return this.System.Collections.Generic.IEnumerable<TResult>.GetEnumerator(); } }[/code], which as you can see instantly creates an [I]IEnumerable<TResult>[/I]/[I]IEnumerator<TResult>[/I] instance (which is duplicated only for other threads to increase performance). Most importantly, [I]all of the code of the original iterator method is now inside the [I]MoveNext()[/I] instance method of the returned class and is not run until that is called[/I]. This ensures that the result is always up to date, since execution can otherwise only be paused at [I]yield return[/I] statements (since iterators are deterministic and [I]don't use threading[/I]). As an aside, this also means that the initial set-up doesn't always run, which makes sense since [I][I]IEnumerable[/I]s aren't expected to always run[/I]. 
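You can also watch the deferral happen from the outside. Here's a minimal self-contained sketch (all names invented for illustration) that counts how often the selector actually runs:

```csharp
using System;
using System.Linq;

static class DeferredDemo
{
    // Returns (selector calls before enumeration, calls after enumeration, Count() result).
    public static (int before, int after, int count) Run()
    {
        int calls = 0;
        // Building the query runs nothing: Select only stores the selector.
        var query = new[] { 1, 2, 3 }.Select(x => { calls++; return x * 2; });
        int before = calls;        // still 0 here: nothing has been enumerated
        int count = query.Count(); // enumeration happens now; the selector runs once per item
        return (before, calls, count);
    }

    public static void Main() => Console.WriteLine(Run()); // prints (0, 3, 3)
}
```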
[QUOTE]Also, why doesn't ContinueWith have an overload that takes an asynchronous continuation? How is this not the most common use case ever?[/QUOTE] You mean something like [code]static Task<bool> ATask { get { return Task.FromResult(true); } }

static async Task<bool> Tasking(bool b)
{
    if (b) await Task.Yield();
    return b;
}

static void DoStuff() => ATask.ContinueWith(Tasking);[/code], right? That's a defective use case (or rather a slow pattern) due to how [I]async[/I] methods work internally. I made two [I]ContinueWith[/I] extension methods for comparison: [code]static async Task<R> ContinueWith<T, R>(this Task<T> task, Func<T, Task<R>> continuation) => await continuation(await task); // Two tasks.

static async Task<R> ContinueWith<T, R>(this Task<T> task, Func<T, R> continuation) => continuation(await task); // One task.[/code] As you can see, for the former to behave like the latter (since [I]async[/I] methods (have to) start immediately) you end up with one more [I]Task[/I] instance (but only possibly! More on that later). For the program to run as fast as possible, you should always either use direct calls between [I]async[/I] methods or use synchronous (selector) methods that you schedule with [I]ContinueWith[/I] (but that's still a little suboptimal). If you want to quickly schedule an asynchronous continuation inline as an exception, you can still add the extension method above (and overload it for [I]Task[/I], though I haven't tested whether that's necessary) or write it as [code]var taskingTask = ((Func<Task>)(async () => await Tasking(await ATask)))();[/code], which is [I]aptly[/I] inconvenient. [QUOTE=Darwin226;49783761]Yes, this is how it works and that's fine. You have to make sure the function you're using in Select doesn't have side effects since it's a bit hard to manage when those actually get run. The problem is that the Task API doesn't let you manage them. 
A function returning a Task might be pure, or might start execution immediately making it impure.[/QUOTE] All [I]Task[/I]-returning methods start execution immediately by convention, and there's actually a very good reason for that. Consider this method: [code]static async Task<bool> DoAsync() => await Tasking(await ATask);[/code] Like iterators, this is turned into a state machine, but the result is very different: [code]// class Async1 [AsyncStateMachine(typeof(Async1.<DoAsync>d__7))] private static Task<bool> DoAsync() { Async1.<DoAsync>d__7 <DoAsync>d__; <DoAsync>d__.<>t__builder = AsyncTaskMethodBuilder<bool>.Create(); <DoAsync>d__.<>1__state = -1; AsyncTaskMethodBuilder<bool> <>t__builder = <DoAsync>d__.<>t__builder; <>t__builder.Start<Async1.<DoAsync>d__7>(ref <DoAsync>d__); return <DoAsync>d__.<>t__builder.Task; } [CompilerGenerated] [StructLayout(LayoutKind.Auto)] private struct <DoAsync>d__7 : IAsyncStateMachine { public int <>1__state; public AsyncTaskMethodBuilder<bool> <>t__builder; private TaskAwaiter<bool> <>u__1; void IAsyncStateMachine.MoveNext() { int num = this.<>1__state; bool result; try { TaskAwaiter<bool> taskAwaiter; if (num != 0) { if (num == 1) { taskAwaiter = this.<>u__1; this.<>u__1 = default(TaskAwaiter<bool>); this.<>1__state = -1; goto IL_C3; } taskAwaiter = Async1.ATask.GetAwaiter(); if (!taskAwaiter.IsCompleted) { this.<>1__state = 0; this.<>u__1 = taskAwaiter; this.<>t__builder.AwaitUnsafeOnCompleted<TaskAwaiter<bool>, Async1.<DoAsync>d__7>(ref taskAwaiter, ref this); return; } } else { taskAwaiter = this.<>u__1; this.<>u__1 = default(TaskAwaiter<bool>); this.<>1__state = -1; } bool arg_73_0 = taskAwaiter.GetResult(); taskAwaiter = default(TaskAwaiter<bool>); taskAwaiter = Async1.Tasking(arg_73_0).GetAwaiter(); if (!taskAwaiter.IsCompleted) { this.<>1__state = 1; this.<>u__1 = taskAwaiter; this.<>t__builder.AwaitUnsafeOnCompleted<TaskAwaiter<bool>, Async1.<DoAsync>d__7>(ref taskAwaiter, ref this); return; } IL_C3: bool arg_D2_0 = 
taskAwaiter.GetResult(); taskAwaiter = default(TaskAwaiter<bool>); result = arg_D2_0; } catch (Exception exception) { this.<>1__state = -2; this.<>t__builder.SetException(exception); return; } this.<>1__state = -2; this.<>t__builder.SetResult(result); } [DebuggerHidden] void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine) { this.<>t__builder.SetStateMachine(stateMachine); } }[/code] This uses a few external types, but both [I]AsyncTaskMethodBuilder<>[/I] and [I]TaskAwaiter<>[/I] are structs, so [I]this code doesn't create any instances unconditionally[/I]. Even [I]Task[/I] instantiation can be completely avoided, if the method never yields and as such never places the state on the heap (which in turn doesn't invert/break up the stack so you have to be mindful of call depth, but that's another matter): [code]// System.Runtime.CompilerServices.AsyncTaskMethodBuilder<TResult> [SecuritySafeCritical] private Task<TResult> GetTaskForResult(TResult result) { if (default(TResult) != null) { if (typeof(TResult) == typeof(bool)) { return JitHelpers.UnsafeCast<Task<TResult>>(((bool)((object)result)) ? 
AsyncTaskCache.TrueTask : AsyncTaskCache.FalseTask); } if (typeof(TResult) == typeof(int)) { int num = (int)((object)result); if (num < 9 && num >= -1) { return JitHelpers.UnsafeCast<Task<TResult>>(AsyncTaskCache.Int32Tasks[num - -1]); } } else if ((typeof(TResult) == typeof(uint) && (uint)((object)result) == 0u) || (typeof(TResult) == typeof(byte) && (byte)((object)result) == 0) || (typeof(TResult) == typeof(sbyte) && (sbyte)((object)result) == 0) || (typeof(TResult) == typeof(char) && (char)((object)result) == '\0') || (typeof(TResult) == typeof(decimal) && decimal.Zero == (decimal)((object)result)) || (typeof(TResult) == typeof(long) && (long)((object)result) == 0L) || (typeof(TResult) == typeof(ulong) && (ulong)((object)result) == 0uL) || (typeof(TResult) == typeof(short) && (short)((object)result) == 0) || (typeof(TResult) == typeof(ushort) && (ushort)((object)result) == 0) || (typeof(TResult) == typeof(IntPtr) && (IntPtr)0 == (IntPtr)((object)result)) || (typeof(TResult) == typeof(UIntPtr) && (UIntPtr)0 == (UIntPtr)((object)result))) { return AsyncTaskMethodBuilder<TResult>.s_defaultResultTask; } } else if (result == null) { return AsyncTaskMethodBuilder<TResult>.s_defaultResultTask; } return new Task<TResult>(result); }[/code] (Note that since this is in a generic [I]struct[/I], the JIT-compiler can at least in theory fold every single one of those [I]typeof(TResult)[/I] conditions. Unlike with [I]class[/I]es, [I]struct[/I] generics always use static calls and duplicated machine code when not boxed.) As you can see, [I]async[/I] methods are optimised to be called as cheaply and to execute as quickly as possible, [I]assuming they always run and always return a single result that may be available immediately[/I]. Conversely, iterators are optimised to be called as cheaply and to execute as quickly as possible, [I]assuming they are further processed (= end up on the heap), used on one thread at a time and may return any number of results[/I]. 
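The eager start itself is easy to observe. Here's a small sketch (names invented); the first await is gated by hand so the demo is deterministic:

```csharp
using System;
using System.Threading.Tasks;

static class EagerStartDemo
{
    // A gate we complete manually, so we control exactly when the method can resume.
    static readonly TaskCompletionSource<bool> Gate = new TaskCompletionSource<bool>();
    static int _stage;

    static async Task<int> WorkAsync()
    {
        _stage = 1;      // runs synchronously, during the call itself
        await Gate.Task; // first incomplete await: only here does the method yield
        _stage = 2;
        return 42;
    }

    public static async Task<(int atCall, int atEnd, int result)> Run()
    {
        _stage = 0;
        Task<int> task = WorkAsync();
        int atCall = _stage; // already 1: the body started before anyone awaited
        Gate.SetResult(true);
        int result = await task;
        return (atCall, _stage, result);
    }

    public static void Main() => Console.WriteLine(Run().Result); // prints (1, 2, 42)
}
```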
(F# [I]async { ... }[/I] blocks [URL="https://msdn.microsoft.com/en-us/library/dd233250.aspx"]behave differently yet again[/URL]. They're similar to [I]async[/I] methods but they always run in parallel and their compiled form always returns before user code runs, while [I]Task[/I] async isn't innately threaded [I]at all[/I]. [URL="http://tomasp.net/blog/csharp-async-gotchas.aspx/"]Here's a good comparison.[/URL] Generally speaking, F#'s version is probably more suited to data processing and parallel workloads, while C#'s is likely better for event handling and workflow modelling in GUI applications. You can also abuse the latter in interesting ways with custom implementations of the base layer, since it's a fully synchronous API with an asynchronous standard library :magic101:) [QUOTE]Concretely, my usecase was taking an array of pictures, converting each one into a task that sends it to a remote host, then chunking up the list of tasks and executing them in batches. The problem is that the chunking evaluated the list and the tasks began executing immediately which is exactly what I want to avoid. This should never happen.[/QUOTE] For this use case, you really should just count the original collection if possible. Otherwise use a [I]Func<TItem, Func<Task<TResult>>>[/I] selector function (a [I]Task[/I] async delegate generator), which is the natural combination of the principles outlined above. 
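In code, that delegate-generator pattern could look like this (a sketch with invented stand-ins; [I]SendAsync[/I] plays the role of whatever actually hits the server):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class GeneratorBatchDemo
{
    static readonly List<int> Started = new List<int>();

    // Stand-in for the real server call; records when it actually starts.
    static async Task<int> SendAsync(int item)
    {
        Started.Add(item);
        await Task.Delay(1);
        return item * 10;
    }

    public static async Task<(int coldStarts, int[] results)> Run()
    {
        // Cold generators: materialising this list sends nothing yet.
        List<Func<Task<int>>> generators = Enumerable.Range(1, 6)
            .Select<int, Func<Task<int>>>(i => () => SendAsync(i))
            .ToList();
        int coldStarts = Started.Count; // 0: no Task exists yet, so nothing has run

        // Each batch of three only starts when its generators are invoked.
        var results = new List<int>();
        for (int i = 0; i < generators.Count; i += 3)
        {
            var batch = generators.Skip(i).Take(3).Select(g => g()).ToArray();
            results.AddRange(await Task.WhenAll(batch));
        }
        return (coldStarts, results.ToArray());
    }

    public static void Main() => Console.WriteLine(string.Join(",", Run().Result.results));
}
```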
In case you just want to limit the amount of parallelism on the server queries, you can do it like this (which of course still runs them as you iterate): [code]static IEnumerable<Task<TResult>> ForEachSelectAsync<TItem, TResult>(this IEnumerable<TItem> enumerable, Func<TItem, Task<TResult>> selector, int degreeOfParallelism)
{
    SemaphoreSlim semaphore = new SemaphoreSlim(degreeOfParallelism);
    foreach (var item in enumerable)
    {
        yield return ((Func<Task<TResult>>)(async () =>
        {
            await semaphore.WaitAsync();
            try
            {
                return await selector(item); // Only do this in newer versions of C#!
            }
            finally
            {
                semaphore.Release();
            }
        }))();
    }
}[/code] There's also a solution for your original problem that exploits [I].Count()[/I]'s conditional evaluation by returning an [I]ICollection[/I] instance: [code]class CollectionSelectorEnumerable<TSource, TResult> : IEnumerable<TResult>, ICollection<TResult>
{
    ICollection<TSource> _source;
    IEnumerable<TResult> _result;

    public CollectionSelectorEnumerable(ICollection<TSource> source, IEnumerable<TResult> result)
    {
        _source = source;
        _result = result;
    }

    public IEnumerator<TResult> GetEnumerator() => _result.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => _result.GetEnumerator();
    public int Count => _source.Count;
    public bool IsReadOnly => true;
    public void Add(TResult item) { throw new InvalidOperationException(); }
    public void Clear() { throw new InvalidOperationException(); }
    public bool Contains(TResult item) => _result.Contains(item);

    public void CopyTo(TResult[] array, int arrayIndex)
    {
        foreach (var item in _result)
        {
            array[arrayIndex++] = item;
            if (arrayIndex == array.Length) break;
        }
    }

    public bool Remove(TResult item) { throw new InvalidOperationException(); }
}

static class CollectionSelectExtension
{
    // Call Enumerable.Select explicitly; source.Select(selector) would resolve to
    // this same extension method and recurse forever.
    public static CollectionSelectorEnumerable<TSource, TResult> Select<TSource, TResult>(this ICollection<TSource> source, Func<TSource, TResult> selector) => new CollectionSelectorEnumerable<TSource, TResult>(source, Enumerable.Select(source, selector));
}[/code] If you don't know the precise count without hitting the server all this still doesn't fit that well, but my async enumerable library / (partial) LINQ reimplementation seems to be on my external hard drive only. (I don't plan to release this library for free though, if I can monetise it somehow.) ([editline]edit[/editline] If you enjoyed this post, please consider chipping in a little [URL="paypal.me/Tamme"]over here[/URL].)
[QUOTE=Fourier;49787657]JohnnyFlame, can you post the code for this? Or a tutorial? I'd really like to know how you make stuff like this.[/QUOTE] [url=https://gist.github.com/JohnnyonFlame/4333be72436e29611769]Sure, there's no reason not to share.[/url] There's not much in terms of a tutorial since it's mostly done with very simple trigonometry, triangle ratios and some happy coincidences. You should find a bit more information by going through my post history. There's also some information to be found in raytracing/voxel landscape tutorials if you're lost somewhere. tl;dr: since scanlines that are closer to the center of the screen start lumping together, if you plot a few pixels around each one, it sort of creates a fake specmap. A little bit of fooling around and you can pretty much fake the sun and stuff. It's not a general algorithm, but it's enough to create a few cool gifs. [editline]22nd February 2016[/editline] Btw, the license is 'be happy, do whatever' public domain. [editline]22nd February 2016[/editline] The biggest trick here is trying to think in terms of "what equation creates the forms I need to sort of get there?" and then trying them.
I've successfully got a C# Vulkan wrapper generated from vk.xml, though there's some issue with extended enums; I need to read the spec on what exactly the definition means. [url]https://gist.github.com/benpye/be33a79f60f83a2373e8[/url] It does work though [IMG_THUMB]https://s.paste.ninja/devenv_2016-02-22_06-59-49.png[/IMG_THUMB] EDIT: Fixed! [IMG_THUMB]https://s.paste.ninja/devenv_2016-02-22_07-16-55.png[/IMG_THUMB]
[QUOTE=ben1066;49789718]I've successfully got a C# Vulkan wrapper generated from vk.xml, though there's some issue with extended enums; I need to read the spec on what exactly the definition means. [url]https://gist.github.com/benpye/be33a79f60f83a2373e8[/url] It does work though [IMG_THUMB]https://s.paste.ninja/devenv_2016-02-22_06-59-49.png[/IMG_THUMB] EDIT: Fixed! [IMG_THUMB]https://s.paste.ninja/devenv_2016-02-22_07-16-55.png[/IMG_THUMB][/QUOTE] Always a little scary to see VS after having spent ages using Sublime to write Python...
[QUOTE=Tamschi;49789434][I]A chapter in a book -snipped-[/I][/QUOTE] Are you secretly Stephen Toub or something?
I made a thing. I don't know what it is, but if you leave it running long enough and with enough cells then it can create some pretty neat patterns. Adding is pretty cool. [vid]http://anotherprophecy.com/webm/pattern1.webm[/vid]
[QUOTE=Tamschi;49789434][I]The long post on deferred execution and async internals above -snipped-[/I][/QUOTE] Thank you for the exhaustive reply. So basically what you're saying is that there isn't a built-in overload since it would have to create an additional Task instance for every step executed? But isn't the whole point of async to defer long-running tasks to different threads instead of blocking, yet have the code look like it's still synchronous? If the cost of a single heap allocation can compare to the cost of the actual task, then I'd say that's an abuse of the API. 
I mean, you obviously know a ton about CLR performance, and if you say that the API is like this because of the way it's implemented, then I'm sure you're not the only one to whom this makes sense, but it certainly doesn't make sense to me. I have a problem with performance requirements leaking into the user-facing API. Also, you say that Tasks ALWAYS start executing by convention, yet there's [url]https://msdn.microsoft.com/en-us/library/dd270682(v=vs.110).aspx[/url], and this method even produces an exception if the task is already running. Here's how this would look in Haskell: [code]sendBatched imgs = sequence . map (>> sleep 1000) . map sequence . chunksOf 10 . map sendImg $ imgs[/code] (In short, read from right to left: convert every image to an action that sends that image, chunk them in tens, collect each chunk into an action that sequentially executes the actions in the chunk, follow up each chunk action with a sleep, and sequentially execute the chunks.) I attempted to write how it would look if sendImg started executing the action immediately, but it breaks too much of the execution model and doesn't make sense (in Haskell). The point is the compositionality of this approach. I want to send requests in batches. Chunking up the list of images is already a red light in my head because that's not what I want to do. In the "sane" version I can extract this batching functionality into a separate function that only takes a list of actions and makes sure they get executed in batches with sleeping in between: [code]batch n t = map (>> sleep t) . map sequence . chunksOf n[/code] I can't do that with Tasks, simply because if I have a list of Tasks I can't tell whether they've already started executing or not. Now I have to pull more stuff into my combinator. Namely, I need to take a list of items and another function that transforms an item into a Task. I mean, I'm not sure if I'm properly getting my point across. Hopefully it makes sense. 
My chunking combinator function now has to care about some list of items and has to care about transforming them into tasks. This might seem like a simplistic use case but say that my remote API (the one I'm making requests towards) has an additional rate limit. On top of limiting to 10 images per second, they also limit to 300 images per minute. [code]batch 30 60000 . batch 10 1000[/code] How do you do this with the other version? You can't use your function because it already starts doing things.
Doing .wav file editing in Unity. Obviously the UI is absolute poop at the moment. [t]http://i.imgur.com/cSGLrBQ.png[/t]
[QUOTE=Ac!dL3ak;49787648]Mono's probably fine, could be steam. Maybe make the error detection more downstream so you can actually log the response to a file, and inspect it yourself? Also, completely unrelated: [url]http://dankaminsky.com/2016/02/20/skeleton/[/url] Whoops![/QUOTE] I've started comparing/storing the actual raw json string that we receive from Steam before even parsing it. At this point I can say with confidence that, at the very least, the function making the HTTP call is fucking up somehow. [b]But I can't possibly see how.[/b] Arguments would have to be changed without any code telling it to. There would have to be some cross-thread weird fuckery going on. Or Steam's fucking up. But I can't see how it would be doing that, either. Maybe caching issues.
Had a go at screenspace god rays. I'm not really a huge fan of screenspace god rays, but I thought I might as well implement them just to get a feel for how they work and the general pitfalls of screenspace techniques like this. [IMG]https://dl.dropboxusercontent.com/u/9317774/betterish.PNG[/IMG] On the plus side, I've fixed all the blockers for a very alpha 0.1 release of my swordfighting game. I'm going to start conducting very small playtests soon (after I check it works on Nvidia!). If all goes well, I'll hopefully try this on Greenlight in a month or so and see what happens.
[QUOTE=Proclivitas;49790280]Are you secretly Stephen Toub or something?[/QUOTE] No, but maybe I should get a "buy me a coffee" button to put under these posts. (Serious question: Would any of you consider chipping in a little with things like this? I really could use the money but I don't feel OK with paywalling anything that's not at least a "paid work grade" product in terms of polish.)
Just do it and let the market/people decide.. if they wish to donate, they will.
Been working on a Hopfield network course lab. Pretty cool stuff. You can store patterns inside the neural network, and it'll retrieve them even from heavily distorted inputs, so it basically works as an associative memory. [img]https://dl.dropboxusercontent.com/u/3843429/ShareX/2016/02/p1.png[/img][img]https://dl.dropboxusercontent.com/u/3843429/ShareX/2016/02/p11.png[/img] [vid]https://dl.dropboxusercontent.com/u/3843429/ShareX/2016/02/2016-02-22_15-52-59.mp4[/vid]
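For anyone curious, the core of a net like that is tiny. A toy sketch (not the actual lab code): Hebbian outer-product weights plus repeated sign updates until a fixed point:

```csharp
using System;
using System.Linq;

static class HopfieldSketch
{
    // Hebbian storage: w[i,j] = sum over patterns of p[i]*p[j]/n, zero diagonal.
    public static double[,] Train(int[][] patterns)
    {
        int n = patterns[0].Length;
        var w = new double[n, n];
        foreach (var p in patterns)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (i != j) w[i, j] += p[i] * p[j] / (double)n;
        return w;
    }

    // Recall: repeatedly set every unit to the sign of its weighted input until stable.
    public static int[] Recall(double[,] w, int[] start, int maxSteps = 20)
    {
        var s = (int[])start.Clone();
        int n = s.Length;
        for (int step = 0; step < maxSteps; step++)
        {
            var next = new int[n];
            for (int i = 0; i < n; i++)
            {
                double h = 0;
                for (int j = 0; j < n; j++) h += w[i, j] * s[j];
                next[i] = h >= 0 ? 1 : -1;
            }
            if (next.SequenceEqual(s)) return s; // fixed point: ideally a stored pattern
            s = next;
        }
        return s;
    }

    public static void Main()
    {
        // Two orthogonal +/-1 patterns of length 16.
        int[] p1 = Enumerable.Range(0, 16).Select(i => i < 8 ? 1 : -1).ToArray();
        int[] p2 = Enumerable.Range(0, 16).Select(i => i % 2 == 0 ? 1 : -1).ToArray();
        var w = Train(new[] { p1, p2 });

        // Distort p1 by flipping two units, then let the net clean it up.
        var noisy = (int[])p1.Clone();
        noisy[0] = -noisy[0];
        noisy[5] = -noisy[5];
        Console.WriteLine(Recall(w, noisy).SequenceEqual(p1)); // prints True
    }
}
```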
[QUOTE=Darwin226;49790534]Thank you for the exhaustive reply. So basically what you're saying is that there isn't a built-in overload since it would have to create an additional Task instance for every step executed? But isn't the whole point of async to defer long-running tasks to different threads instead of blocking, yet have the code look like it's still synchronous?[/QUOTE] It's one use, but generally speaking it's a general-purpose [I]coroutine[/I] system and [I]not a threading API[/I]. The default [I]TaskScheduler[/I] does have multiple threads, but if you have a long-running or blocking [I]Task[/I] you should start it with [URL="https://msdn.microsoft.com/en-us/library/system.threading.tasks.taskcreationoptions%28v=vs.110%29.aspx"][I]TaskCreationOptions.LongRunning[/I][/URL] to avoid thread pool starvation. (This doesn't apply when you [I]await[/I] an asynchronous API that releases the thread, of course.) [QUOTE]If the cost of a single heap allocation can compare to the cost of the actual task, then I'd say that's an abuse of the API. I mean you obviously know a ton about CLR performance and if you say that the API is like this because of the way it's implemented, then I'm sure you're not the only one to whom this makes sense, but it certainly doesn't make sense to me. I have a problem with performance requirements leaking into the user-facing API.[/QUOTE] The API design came before this optimisation, as iirc at least one of the early CTPs didn't have it. In either case it's the better design here though, since it leads to a much more concise awaitable API. The alternative would essentially have you write [code]var t = DoAsync(); t.Start(); result = await t;[/code], which additionally would be inconsistent since not all awaitable methods actually return [I]Task[/I]s. 
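To make the [I]LongRunning[/I] point concrete, a tiny sketch (with the default scheduler; that it hands such tasks a dedicated non-pool thread is an implementation detail, but a stable one):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class LongRunningDemo
{
    public static bool GotDedicatedThread()
    {
        bool onPool = true;
        var t = Task.Factory.StartNew(
            () =>
            {
                onPool = Thread.CurrentThread.IsThreadPoolThread;
                Thread.Sleep(50); // stand-in for a blocking workload
            },
            TaskCreationOptions.LongRunning);
        t.Wait();
        // With the default scheduler, LongRunning work runs on its own thread,
        // so blocking here can't starve the pool.
        return !onPool;
    }

    public static void Main() => Console.WriteLine(GotDedicatedThread()); // prints True
}
```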
[editline]edit[/editline] I forgot to mention this, but the async system also does some other lifting behind the scenes, like keeping [I]LogicalCallContext[/I] in order and flowing the security context around (assuming you're not running against a defective implementation of the first version awaiter API). You can avoid a lot of this if an async method returns immediately, since then the execution never leaves the relevant stack frame or thread. Even then, Tasks are pretty lightweight though, and it's not directly relevant to instantiation. I [I]think[/I] they're faster than the coroutine system Unity uses. (I made a game scripting scheduler based on this API (with custom endpoints), and it can easily run a few thousand scripts in parallel.) However, the documentation does say you shouldn't use them for too fine-grained methods. [QUOTE]Also, you say that Tasks ALWAYS start executing by convention, yet [url]https://msdn.microsoft.com/en-us/library/dd270682(v=vs.110).aspx[/url] This method even produces an exception if the task is already running.[/QUOTE] "All Task-returning methods start execution immediately by convention, [...]" You have a point though, since e.g. Lua's coroutine API doesn't do this iinm, so it's not completely necessary. Personally I prefer the C# way, since I can just add [I]() =>[/I] before them in most cases to defer execution if I need to, and almost always don't want to. [QUOTE]Here's how this would look in Haskell [code]sendBatched imgs = sequence . map (>> sleep 1000) . map sequence . chunksOf 10 . 
map sendImg $ imgs [/code] (in short, read from right to left, convert every image to an action that sends that image, chunk them in tens, collect each chunk into an action that sequentially executes the actions in the chunk, follow up each chunk action by a sleep, sequentially execute the chunks) I attempted to write how it would look if sendImg started executing the action immediately but it breaks too much of the execution model and doesn't make sense (in Haskell). The point is the compositionality of this approach. I want to send requests in batches. Chunking up the list of images is already a red light in my head because that's not what I want to do. In the "sane" version I can extract this batching functionality into a separate function that only takes a list of actions and makes sure they get executed in batches with sleeping in between. [code]batch n t = map (>> sleep t) . map sequence . chunksOf n[/code] I can't do that with Tasks simply because if I have a list of Tasks at any point I can't tell if they already started executing or not. Now I have to pull in more stuff into my combinator. Namely I need to take a list of items and another function that transforms the item into a Task. I mean, I'm not sure if I'm properly getting my point across. Hopefully it makes sense. My chunking combinator function now has to care about some list of items and has to care about transforming them into tasks. This might seem like a simplistic use case but say that my remote API (the one I'm making requests towards) has an additional rate limit. On top of limiting to 10 images per second, they also limit to 300 images per minute. [code]batch 30 60000 . batch 10 1000[/code] How do you do this with the other version? You can't use your function because it already starts doing things.[/QUOTE] (That seems faulty, it should read [code]batch 30 30000 . batch 10 1000[/code] instead I think, unless those sleeps aren't sequential but then the separation may not work properly.) 
Right, though I really think that's something that should be limited centrally by your web API endpoint anyway, as far as C#/.NET idioms go. You [I]can[/I] do this with the synchronous LINQ API, but it's blocking: [code]static IEnumerable<T> Chunk2<T>(this IEnumerable<T> source, int chunkSize, int millisecondsDelay) =>
    source
        .Select((x, i) => new { x = x, i = i / chunkSize })
        .GroupBy(xi => xi.i, xi => xi.x)
        .Select(g => { Thread.Sleep(millisecondsDelay); return g; })
        .SelectMany(g => g);[/code] or just [code]static IEnumerable<T> Chunk<T>(this IEnumerable<T> source, int chunkSize, int millisecondsDelay)
{
    return source.Select((x, i) =>
    {
        if (i != 0 && i % chunkSize == 0) { Thread.Sleep(millisecondsDelay); }
        return x;
    });
}[/code]. To make a proper rate-limiting web API client that uses [I]Task[/I]s, the synchronous and sequential query system is definitely the wrong place though. You should put it into the service wrapper directly and implement some kind of scheduler that reacts more accurately to the rate you query the images at. If you really want to progressively process them in sequence, an asynchronous LINQ implementation may do the trick too, but that's still not optimal at all. This should do the trick (though really, integrate that into the service wrapper): ([editline]edit[/editline] This version is too blocking. 
[URL="https://facepunch.com/showthread.php?t=1499629&p=49795668&viewfull=1#post49795668"]See here for a fixed one.[/URL])[code]// requires: using System; using System.Diagnostics; using System.Threading; using System.Threading.Tasks;
class RateLimiter
{
    readonly SemaphoreSlim _semaphore;
    readonly int _millisecondsChunkLength;

    public RateLimiter(int chunkSize, int millisecondsChunkLength)
    {
        _semaphore = new SemaphoreSlim(chunkSize);
        _millisecondsChunkLength = millisecondsChunkLength;
    }

    public void Limit(Action action)
    {
        _semaphore.Wait();
        var stopwatch = Stopwatch.StartNew();
        try
        {
            action();
        }
        finally
        {
            // Hold the permit for the rest of the chunk window before releasing it.
            var restDelay = _millisecondsChunkLength - stopwatch.ElapsedMilliseconds;
            if (restDelay > 0) { Thread.Sleep((int)restDelay); }
            _semaphore.Release();
        }
    }

    public T Limit<T>(Func<T> func)
    {
        _semaphore.Wait();
        var stopwatch = Stopwatch.StartNew();
        T result;
        try
        {
            result = func();
        }
        finally
        {
            var restDelay = _millisecondsChunkLength - stopwatch.ElapsedMilliseconds;
            if (restDelay > 0) { Thread.Sleep((int)restDelay); }
            _semaphore.Release();
        }
        return result;
    }

    public async Task LimitAsync(Func<Task> asyncAction)
    {
        await _semaphore.WaitAsync();
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await asyncAction();
        }
        finally
        {
            var restDelay = _millisecondsChunkLength - stopwatch.ElapsedMilliseconds;
            if (restDelay > 0) { await Task.Delay((int)restDelay); }
            _semaphore.Release();
        }
    }

    public async Task<T> LimitAsync<T>(Func<Task<T>> asyncFunc)
    {
        await _semaphore.WaitAsync();
        var stopwatch = Stopwatch.StartNew();
        T result;
        try
        {
            result = await asyncFunc();
        }
        finally
        {
            var restDelay = _millisecondsChunkLength - stopwatch.ElapsedMilliseconds;
            if (restDelay > 0) { await Task.Delay((int)restDelay); }
            _semaphore.Release();
        }
        return result;
    }
}[/code] As long as you defer everything through the same instance(s) when setting up your query, it should obey the rate limits even if you accidentally call your enumerable twice. 
With your image example, this would be something like [code]var limiterA = new RateLimiter(10, 1000);
var limiterB = new RateLimiter(30, 60000);
var results = images
    .Select(image => limiterA.LimitAsync(() => limiterB.LimitAsync(() => UploadImageAsync(image))))
    .ToArray();[/code] This immediately gives you the [I]Task[/I]s, but they'll only complete according to the rate limits, taking varying response time into account. You can replace [I].ToArray()[/I] with a [I].Cache()[/I] method of sorts of course, but I'd avoid not having a reevaluation guard there. If you want to be able to count it without processing the images, use [code]images.Select(image => () => limiterA.LimitAsync(() => limiterB.LimitAsync(() => UploadImageAsync(image))))[/code] instead, though as I said before that's probably not the most straightforward solution here.
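As an aside for readers following the Haskell side of this exchange: the deferred-action `batch` combinator being debated can be sketched in plain Python (the helper names here are illustrative, not code from either poster). The key property is that nothing runs until the list of actions is explicitly started, which is what makes two `batch` layers composable:

```python
import time

# Sketch of a deferred-action batching combinator: takes a list of
# zero-argument callables (not-yet-started work) and returns a new list of
# callables that, when run in order, pause after each full chunk.
def batch(chunk_size, delay_seconds, actions):
    def wrap(index, action):
        def run():
            result = action()
            # Sleep at the end of each completed chunk, like the
            # Haskell (>> sleep t) applied per chunk.
            if (index + 1) % chunk_size == 0:
                time.sleep(delay_seconds)
            return result
        return run
    return [wrap(i, a) for i, a in enumerate(actions)]

def run_all(actions):
    return [a() for a in actions]
```

Stacking limits is then just `batch(30, 30.0, batch(10, 1.0, sends))`; this works precisely because the inner `batch` hands back unstarted callables rather than already-running tasks.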
[vid]https://files.facepunch.com/ziks/2016/February/22/2016-02-22-1700-28.mp4[/vid] Some work in progress UI improvements
Hey, I'm requesting that video that has gotten posted in here a couple times. Title is something along the lines of "add Umph to your game" and it's a presentation about adding little subtle effects in your game that have a minor effect on gameplay but have a big effect on the feel. It's from GDC or PAX or something. Sorry, I know that's kinda vague. Edit: And of course, after looking for a bit longer I found it myself, for anyone who is interested. [video]https://youtu.be/Fy0aCDmgnxg[/video]
[IMG]http://i.imgur.com/EM3cOuN.gif[/IMG] Primitive based thermometer HP bar. I [I]finally[/I] understand element placement for complicated GUI components :v:
[QUOTE=chimitos;49793306] Primitive based thermometer HP bar. I [I]finally[/I] understand element placement for complicated GUI components :v:[/QUOTE] Share your secret, please!
[QUOTE=Tamschi;49791321]In either case it's the better design here though, since it leads to a much more concise awaitable API. The alternative would essentially have you write [code]var t = DoAsync(); t.Start(); result = await t;[/code][/QUOTE] No, you wouldn't. You'd write [code]var t = await DoAsync();[/code] and await would actually start the execution. Basically, the semantics I'm thinking of would fork the async method to another thread (or take a thread from a pool) and yield the current thread back to the scheduler. [QUOTE] (That seems faulty, it should read [code]batch 30 30000 . batch 10 1000[/code] instead I think, unless those sleeps aren't sequential but then the separation may not work properly.) [/QUOTE] Yes, you're right. My bad. All in all, what you're saying is that Task simply isn't the abstraction that I want it to be. In fact, I'd say that a simple `Func<T>` is much closer to what I want than `Task<T>` is. Fair enough.
[QUOTE=Socram;49792417]Hey, I'm requesting that video that has gotten posted in here a couple times. Title is something along the lines of "add Umph to your game" and it's a presentation about adding little subtle effects in your game that have a minor effect on gameplay but have a big effect on the feel. Its from gdc or pax or something. Sorry I know that's kinda vague. Edit: And of course, after looking for a bit longer I find it myself, for anyone who is interested. [video]https://youtu.be/Fy0aCDmgnxg[/video][/QUOTE] [video=youtube;AJdEqssNZ-U]https://www.youtube.com/watch?v=AJdEqssNZ-U[/video] Here's a similar thing that I found more helpful personally.
[url]http://www.gamasutra.com/view/news/244183/Video_Designing_an_immersive_user_interface_for_Hearthstone.php[/url] This is also quite helpful, explains the game feel of hearthstone.