<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Jonathan Channon Blog</title><link>http://blog.jonathanchannon.com/feed.xml</link><description>Jonathan Channon Blog</description><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2018/04/10/announcing-carter/</guid><link>http://blog.jonathanchannon.com/2018/04/10/announcing-carter/</link><a10:author><a10:name /></a10:author><category>ASP.NET</category><category>Botwin</category><category>C#</category><category>Carter</category><category>OSS</category><title>Announcing Carter</title><description><p>As of the beginning of April 2018, Botwin has been renamed to Carter. Whilst I thought the name was genius, it became obvious that some people didn't like or understand it and tried to interpret it as a Bot framework for Windows. After spending too long trying to think of a new name I finally decided upon Carter. Carter comes from the surname of Jay-Z (Shawn Carter), and in his song Empire State of Mind he sings "I'm the new Sinatra". Sinatra is a web framework which inspired Nancy, which in turn heavily inspired Botwin.</p>
<p>The last release of Botwin was 3.5.0, but the same package has also been released as Carter 3.5.0:</p>
<p><a href="https://www.nuget.org/packages/Botwin/3.5.0">https://www.nuget.org/packages/Botwin/3.5.0</a></p>
<p><a href="https://www.nuget.org/packages/Carter/3.5.0">https://www.nuget.org/packages/Carter/3.5.0</a></p>
<p>As part of the rename I have created a GitHub organization with the Carter repository inside it - <a href="https://github.com/CarterCommunity">https://github.com/CarterCommunity</a>. If people would like to build their own Carter extensions they can push to repositories in that organization and maintain them from there - a small attempt to build an OSS community for Carter. Let me know if you'd like to add a repo.</p>
<p>Also we now have a Carter Slack channel. Please sign up <a href="https://join.slack.com/t/cartercommunity/shared_invite/enQtMzQwNjIwODcwMTMxLWQwMjk5NDFlYWI3Yzg5Y2M4ODNmOTkwMzA2YjkxNmE0YjI3YWU4MjU2ZjI2NmQwMmE4NjVlODBlM2RlMDI1ZmY">here</a> and if you have any questions or just want to check Carter out please jump in!</p>
<p>You'll also notice Carter has a new logo, indicating you can parachute Carter into your application and all will be well!!</p>
</description><pubDate>Mon, 09 Apr 2018 23:00:00 Z</pubDate><a10:updated>2018-04-09T23:00:00Z</a10:updated><a10:content type="html"><p>As of the beginning of April 2018, Botwin has been renamed to Carter. Whilst I thought the name was genius, it became obvious that some people didn't like or understand it and tried to interpret it as a Bot framework for Windows. After spending too long trying to think of a new name I finally decided upon Carter. Carter comes from the surname of Jay-Z (Shawn Carter), and in his song Empire State of Mind he sings "I'm the new Sinatra". Sinatra is a web framework which inspired Nancy, which in turn heavily inspired Botwin.</p>
<p>The last release of Botwin was 3.5.0, but the same package has also been released as Carter 3.5.0:</p>
<p><a href="https://www.nuget.org/packages/Botwin/3.5.0">https://www.nuget.org/packages/Botwin/3.5.0</a></p>
<p><a href="https://www.nuget.org/packages/Carter/3.5.0">https://www.nuget.org/packages/Carter/3.5.0</a></p>
<p>As part of the rename I have created a GitHub organization with the Carter repository inside it - <a href="https://github.com/CarterCommunity">https://github.com/CarterCommunity</a>. If people would like to build their own Carter extensions they can push to repositories in that organization and maintain them from there - a small attempt to build an OSS community for Carter. Let me know if you'd like to add a repo.</p>
<p>Also we now have a Carter Slack channel. Please sign up <a href="https://join.slack.com/t/cartercommunity/shared_invite/enQtMzQwNjIwODcwMTMxLWQwMjk5NDFlYWI3Yzg5Y2M4ODNmOTkwMzA2YjkxNmE0YjI3YWU4MjU2ZjI2NmQwMmE4NjVlODBlM2RlMDI1ZmY">here</a> and if you have any questions or just want to check Carter out please jump in!</p>
<p>You'll also notice Carter has a new logo, indicating you can parachute Carter into your application and all will be well!!</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2017/06/07/debugging-netcore-docker/</guid><link>http://blog.jonathanchannon.com/2017/06/07/debugging-netcore-docker/</link><a10:author><a10:name /></a10:author><category>ASP.NET</category><category>C#</category><category>Docker</category><category>OSS</category><title>Debugging .Net Core apps inside Docker container with VSCode</title><description><p>So by now using .Net Core on Linux is old news; everyone is doing it and deploying their production apps on Kubernetes to reach peak "I can scale" points. However, one thing that can get tricky is when you have a requirement to debug an application in a container. I believe VS on Windows and VS for Mac have some sort of capability to do that (I have no idea what they do underneath, but hey, who cares, I can right-click debug, right!?) but the information about doing this in VSCode is a bit sketchy. I tend to use VSCode on OSX the most so I wanted to see how I could do this.</p>
<p>For demonstration purposes let's take a very simple application and publish it as a self-contained application, i.e. one that has all the runtime and application binaries output alongside it so you don't have to install dotnet in the container.</p>
<p>To be able to debug that application we are going to need VSDBG (the .Net Core command line debugger) inside the container.</p>
<p><code>curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg</code></p>
<p>We also need to append the below to the launch.json for VSCode in your project's root:</p>
<pre><code>{
"name": ".NET Core Remote Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickRemoteProcess}",
"pipeTransport": {
"pipeProgram": "bash",
"pipeArgs": [ "-c", "docker exec -i json ${debuggerCommand}" ],
"debuggerPath": "/root/vsdbg/vsdbg",
"pipeCwd": "${workspaceRoot}",
"quoteArgs": true
},
"sourceFileMap": {
"/Users/jonathan/Projects/jsonfile": "${workspaceRoot}"
},
"justMyCode": true
}
</code></pre>
</description><pubDate>Tue, 06 Jun 2017 23:00:00 Z</pubDate><a10:updated>2017-06-06T23:00:00Z</a10:updated><a10:content type="html"><p>So by now using .Net Core on Linux is old news; everyone is doing it and deploying their production apps on Kubernetes to reach peak "I can scale" points. However, one thing that can get tricky is when you have a requirement to debug an application in a container. I believe VS on Windows and VS for Mac have some sort of capability to do that (I have no idea what they do underneath, but hey, who cares, I can right-click debug, right!?) but the information about doing this in VSCode is a bit sketchy. I tend to use VSCode on OSX the most so I wanted to see how I could do this.</p>
<p>For demonstration purposes let's take a very simple application and publish it as a self-contained application, i.e. one that has all the runtime and application binaries output alongside it so you don't have to install dotnet in the container.</p>
<p>To be able to debug that application we are going to need VSDBG (the .Net Core command line debugger) inside the container.</p>
<p><code>curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg</code></p>
<p>We also need to append the below to the launch.json for VSCode in your project's root:</p>
<pre><code>{
"name": ".NET Core Remote Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickRemoteProcess}",
"pipeTransport": {
"pipeProgram": "bash",
"pipeArgs": [ "-c", "docker exec -i json ${debuggerCommand}" ],
"debuggerPath": "/root/vsdbg/vsdbg",
"pipeCwd": "${workspaceRoot}",
"quoteArgs": true
},
"sourceFileMap": {
"/Users/jonathan/Projects/jsonfile": "${workspaceRoot}"
},
"justMyCode": true
}
</code></pre>
<!--excerpt-->
<p>The key things to note are the <code>pipeArgs</code> and <code>sourceFileMap</code>. Where it says <code>json</code> under <code>pipeArgs</code>, this will need to be replaced with the name of the container that you are trying to debug. The <code>sourceFileMap</code> is a mapping between where the app was compiled on your machine and where it is in VSCode. The rest of the properties are explained <a href="https://github.com/OmniSharp/omnisharp-vscode/wiki/Attaching-to-remote-processes#configuring-launchjson">here</a>.</p>
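As an aside (my own sketch, not from the original post), the path translation that <code>sourceFileMap</code> performs can be illustrated in shell; the compile-time prefix is the one from the launch.json example above, and <code>WORKSPACE</code> stands in for <code>${workspaceRoot}</code>:

```shell
# sourceFileMap rewrites paths the debugger reports (where the app was
# compiled) into paths the local editor knows (the open workspace).
COMPILED_PREFIX="/Users/jonathan/Projects/jsonfile"
WORKSPACE="/workspace"   # stand-in for ${workspaceRoot}

remote_path="$COMPILED_PREFIX/Program.cs"
# Strip the compile-time prefix and graft on the workspace root:
local_path="$WORKSPACE${remote_path#"$COMPILED_PREFIX"}"
echo "$local_path"       # prints: /workspace/Program.cs
```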
<p>The final Dockerfile looks like this:</p>
<pre><code>FROM microsoft/dotnet:1.1-runtime-deps
RUN apt-get update
RUN apt-get install -y curl unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg
COPY ./publish /app
WORKDIR /app
ENTRYPOINT ./jsonfile
</code></pre>
<p>So we're ready to go with the following steps:</p>
<p><code>dotnet publish -c Debug -f netcoreapp1.1 -r debian.8-x64 -o ./publish</code></p>
<p><code>docker build -t jchannon/jsonfile --rm .</code> </p>
<p><code>docker run -t --name json jchannon/jsonfile</code></p>
<p>Add a breakpoint to your application</p>
<p>Go to VSCode Debug pane, select <code>.NET Core Remote Attach</code> and hit F5</p>
<p>SUCCESS!!</p>
<p>One thing to note is that you cannot debug a project that has been compiled in Release mode. Whilst the config above looks like it should work, it doesn't. I tried! I believe there may be plans to allow this and the issue can be tracked <a href="https://github.com/OmniSharp/omnisharp-vscode/issues/220">here</a>. A sample application and Dockerfile can be found <a href="https://github.com/jchannon/DockerDebug">here</a>.</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2017/05/15/using-docker-with-netcore-ci/</guid><link>http://blog.jonathanchannon.com/2017/05/15/using-docker-with-netcore-ci/</link><a10:author><a10:name /></a10:author><category>ASP.NET</category><category>C#</category><category>Docker</category><category>OSS</category><title>Using Docker with .Net Core in CI for OSS</title><description><p>I recently wrote a <a href="http://blog.jonathanchannon.com/2017/05/04/announcing-botwin/">project</a> for <a href="https://t.co/kpkdInRgwG">ASP.NET Core 2</a> and the time had come to get a CI system up and running. I develop on OSX and mainly test on OSX &amp; Linux, so the de facto place to go is TravisCI. I've used it in the past and all has been great, but I put out a tweet asking if Travis was still the place to go:</p>
<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Is Travis still the go to Linux CI tool for OSS?</p>— Jonathan Channon (@jchannon) <a href="https://twitter.com/jchannon/status/860979690462474240">May 6, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
</description><pubDate>Sun, 14 May 2017 23:00:00 Z</pubDate><a10:updated>2017-05-14T23:00:00Z</a10:updated><a10:content type="html"><p>I recently wrote a <a href="http://blog.jonathanchannon.com/2017/05/04/announcing-botwin/">project</a> for <a href="https://t.co/kpkdInRgwG">ASP.NET Core 2</a> and the time had come to get a CI system up and running. I develop on OSX and mainly test on OSX &amp; Linux, so the de facto place to go is TravisCI. I've used it in the past and all has been great, but I put out a tweet asking if Travis was still the place to go:</p>
<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Is Travis still the go to Linux CI tool for OSS?</p>— Jonathan Channon (@jchannon) <a href="https://twitter.com/jchannon/status/860979690462474240">May 6, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<!--excerpt-->
<p><a href="http://twitter.com/adron">Adron Hall</a> replied and said he'd been using <a href="http://codeship.com">Codeship</a> as a Docker based CI system. Having experience in Docker I thought I'd take a look. My requirements were simple: I needed the CI to run <code>dotnet restore</code>, <code>dotnet build</code> and <code>dotnet test</code>. I also thought to myself, how am I going to handle releasing a NuGet package? Normally I tend to run a build script locally to check everything is ok before I push to NuGet so I can avoid an "oh-shit" release, but it happens! </p>
<p>I looked at Codeship's basic plan (free) and they supported most things on their systems out of the box but not .NET, so I moved to the Pro plan (also free). At this point Codeship's co-founder &amp; CEO, <a href="https://twitter.com/moritzplassnig">Moritz Plassnig</a>, got in touch and said he had seen the conversation on Twitter with Adron and was there to help. I asked him about .NET Core etc. and he confirmed that my choice to use the Pro account was the best decision, and to give him a ping with any other issues. Good stuff I thought!</p>
<p>I began to read Codeship's <a href="https://documentation.codeship.com/pro/quickstart/getting-started/">documentation</a>, and quite comprehensive it is, I must say. Essentially you have three files: a services file, a steps file and a Dockerfile. The services file describes your service, e.g. a name, the Dockerfile path and things like a path to encrypted environment variables, plus many other available settings. The steps file is where you describe each step in your CI system. For me the first step was obviously to run <code>dotnet restore</code>, then another step for <code>dotnet build</code>, then <code>dotnet test</code>.</p>
<p>I pushed my files to my repo and watched on Codeship's dashboard. The dotnet restore worked but the build failed. The thing I lost some time on was that each step is run in its own container, and I couldn't work out why the build was failing after I had successfully installed all the packages required for it to build. Ironically, I was reading the documentation for golang projects where it mentioned this! During this process, while scratching my head, I was tweeting Adron and Codeship to see if they knew why I was having issues, and <a href="https://twitter.com/kellyjandrews">Kelly Andrews, Codeship's Developer Advocate</a>, started helping me out, which was great. He suggested I could use my Dockerfile to do the dotnet restore and build and then have a step to do the dotnet test. That got me thinking, and in the end I decided I would put the restore, build and test in the Dockerfile, so when each push to the repo or PR is sent it would build the Dockerfile, and although not part of the steps file, Codeship would still report a failed build if it couldn't build the Docker image.</p>
<p>What I could use the steps file for was releasing to NuGet. This felt a bit scary as it increased the potential of releasing something I wasn't happy with and ending up releasing an "oh shit" patch release, but I thought I'd just give it a go. The way I could control this is a feature of Codeship's step files (the files Codeship uses are YAML): in my steps file I could filter when the step was executed. Here's the resulting file:</p>
<pre><code>- service: app
  tag: ^\d+.\d+.\d+(-.*|$)
  command: bash -c "dotnet pack -c Release -o /code/artifacts src/Botwin.csproj &amp;&amp; dotnet nuget push -s https://www.nuget.org/api/v2/package -k $NUGETAPIKEY /code/artifacts/Botwin.$CI_BRANCH.nupkg"
</code></pre>
<p>This says the service it belongs to is <code>app</code>, which is what I define in my services file. The tag is the way to filter the step: when it sees something matching in a commit message or git tag, the step will execute. (You can also define the inverse using <code>exclude</code>; see the docs for more.) Finally, there is the command to execute if it passes the tag regex. In my file I have said: execute this step if it sees <code>number.number.number</code> or <code>number.number.number-something</code> in a git tag. So if I've done a load of work and I'm happy that a new version is ready to be released, I do a <code>git tag 1.2.69</code> and then a git push; Codeship will see this tag, build the Docker image, and then execute <code>dotnet pack</code> and <code>dotnet nuget push</code>. Pretty good I thought, and I started to test it.</p>
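As a quick sanity check (my own addition, not part of the original setup), the tag filter can be exercised locally before pushing a release tag; the pattern below mirrors the steps-file regex, rewritten as a POSIX extended regex (<code>[0-9]</code> instead of <code>\d</code>, dots escaped) so <code>grep -E</code> can evaluate it:

```shell
# Local mirror of the Codeship tag filter for release steps.
PATTERN='^[0-9]+\.[0-9]+\.[0-9]+(-.*|$)'

# Print "match" if a candidate tag would trigger the release step.
matches() {
  printf '%s' "$1" | grep -Eq "$PATTERN" && echo "match" || echo "no match"
}

matches "1.2.69"       # a plain release tag     -> match
matches "5.6.7-rc79"   # a pre-release tag       -> match
matches "master"       # an ordinary branch name -> no match
```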
<p>Codeship provides the same tooling that controls the CI process on their servers as a binary that can be installed via Homebrew, so you can test the pipeline locally. This tooling is called Jet. So I followed the instructions and away I went. Again, I was lucky as I had Kelly on hand to answer my questions, but the documentation was very good. For example, as I wanted to publish to NuGet I needed to supply my API key and obviously didn't want that sitting in my repo, but Codeship's docs described how you could pass in a file with the raw values, encrypt it using Jet, put the encrypted file in your repo, and tell the services YAML file to look at the encrypted file to get environment variables out. So above you can see I use <code>$NUGETAPIKEY</code> and that comes from the encrypted file. You'll also see that I use <code>$CI_BRANCH</code>. This is one of a number of environment variables that Codeship provides that you have access to. Here it holds the git tag, e.g. <code>5.6.7-rc79</code> - slightly badly named IMO - but it means I can get access to the version I have just built. Just before I do my git tag and push, I also change the csproj version number so they match; then I tag and push, and Codeship builds, tests and releases to NuGet for me.</p>
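To make the tag/version coupling concrete, here is a hypothetical dry run (my own sketch; the artifact path and package id are taken from the steps file above) of how the pushed package name is assembled from the tag:

```shell
# CI_BRANCH holds the git tag that triggered the build; the pack/push step
# composes the .nupkg filename from it, which is why the csproj version
# number has to match the tag.
CI_BRANCH="5.6.7-rc79"
PACKAGE="/code/artifacts/Botwin.$CI_BRANCH.nupkg"
echo "$PACKAGE"    # prints: /code/artifacts/Botwin.5.6.7-rc79.nupkg
```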
<p>The other odd thing I did spot was the need to do <code>bash -c "multiple statements go here"</code> for multiple statements in a steps file: if I ran just <code>dotnet restore</code> all was fine, but with <code>dotnet restore &amp;&amp; dotnet build</code> it failed, so I needed to add the bash prefix. Thinking about it now, I could move the dotnet restore, build and test to another step rather than make it part of the Dockerfile. I'm not sure there are any advantages/disadvantages to either approach really, as I don't think the Dockerfile layers from a restore/build are cached, so it doesn't speed up CI time.</p>
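The bash prefix quirk is easy to reproduce outside CI (a small demo of my own): when a runner executes a command without a shell, <code>&amp;&amp;</code> is just another literal argument, and only a shell interprets it as an operator.

```shell
# Without a shell in the way, "&&" is an ordinary argument, not an operator
# (quoted here so the demo shell doesn't interpret it either):
echo restore '&&' build                 # prints: restore && build

# Wrapped in bash -c, the whole string is parsed by a shell, so the second
# command runs only if the first succeeds:
bash -c "echo restore && echo build"    # prints restore, then build
```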
<p>When I got it all working I was pretty impressed and was thankful for the help I got from Kelly. The project I used it for (<a href="https://github.com/jchannon/Botwin">Botwin</a>) has fairly small requirements and there was lots of documentation I didn't even delve into, so I think Codeship can probably provide a solution for much larger projects, so please check them out. I'm hoping services like this expand as .NET Core gains more traction in the *nix worlds, the binding to Windows that .NET has always had truly disappears, and .NET becomes a proper cross-platform runtime. My next desire is to have a Linux .NET profiler; none exist currently, although JetBrains tell me they have some plans, but it's a gap in the market if you're interested!</p>
<p><a href="https://documentation.codeship.com/pro/quickstart/getting-started/">Link</a> to getting started and defining services and steps YAML</p>
<p><a href="https://documentation.codeship.com/pro/builds-and-configuration/cli/">Link</a> to JET docs</p>
<p><a href="https://documentation.codeship.com/pro/builds-and-configuration/environment-variables/#encrypted-environment-variables">Link</a> to environment variables encryption</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2017/05/04/announcing-botwin/</guid><link>http://blog.jonathanchannon.com/2017/05/04/announcing-botwin/</link><a10:author><a10:name /></a10:author><category>ASP.NET</category><category>Botwin</category><category>C#</category><category>OSS</category><title>Announcing Botwin</title><description><p>Whilst keeping my eye on what's going on in .NET Core v2 I came across some planned changes for ASP.NET Core regarding the <a href="https://github.com/aspnet/Routing/blob/dev/src/Microsoft.AspNetCore.Routing/RequestDelegateRouteBuilderExtensions.cs">routing</a>. I had also read this <a href="https://www.strathweb.com/2017/01/building-microservices-with-asp-net-core-without-mvc/">blog post</a> from <a href="https://twitter.com/filip_woj">Filip</a> about using the planned changes for microservices and a lightbulb went off in my head. I thought to myself I wonder if I could adapt the new extensions to create Nancy-esque routing. Turns out, I could!</p>
<h3>Sample</h3>
<pre><code>public class ActorsModule : BotwinModule
{
    public ActorsModule()
    {
        this.Get("/", async (req, res, routeData) =&gt;
        {
            await res.WriteAsync("Hello World!");
        });
    }
}
</code></pre>
<p></description><pubDate>Wed, 03 May 2017 23:00:00 Z</pubDate><a10:updated>2017-05-03T23:00:00Z</a10:updated><a10:content type="html"><p>Whilst keeping my eye on what's going on in .NET Core v2 I came across some planned changes for ASP.NET Core regarding the <a href="https://github.com/aspnet/Routing/blob/dev/src/Microsoft.AspNetCore.Routing/RequestDelegateRouteBuilderExtensions.cs">routing</a>. I had also read this <a href="https://www.strathweb.com/2017/01/building-microservices-with-asp-net-core-without-mvc/">blog post</a> from <a href="https://twitter.com/filip_woj">Filip</a> about using the planned changes for microservices and a lightbulb went off in my head. I thought to myself I wonder if I could adapt the new extensions to create Nancy-esque routing. Turns out, I could!</p>
<h3>Sample</h3>
<pre><code>public class ActorsModule : BotwinModule
{
    public ActorsModule()
    {
        this.Get("/", async (req, res, routeData) =&gt;
        {
            await res.WriteAsync("Hello World!");
        });
    }
}
</code></pre>
<p><!--excerpt-->
Whilst the extensions in the routing allow users to create some funcs, I thought to myself that once you get above 3 or 4 of them you are going to want to put them in their own file, which tidies things up, but then you would still have to register all the routes in your application at one central location, i.e. in a Startup class or as part of the WebHostBuilder setup. Whilst that's ok for some, I didn't particularly like it, so I came up with the BotwinModule. Now I'm sure many of you who are <a href="http://nancyfx.org">Nancy</a> lovers are thinking this looks exactly the same as a NancyModule, and you'd be correct, but sometimes you can't improve on perfection, so I took what I knew from Nancy and made it work in a similar fashion. Each BotwinModule is found and each route is registered with ASP.NET Core. This is all under the hood; all the user has to do is the below:</p>
<pre><code>public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddBotwin();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseBotwin();
    }
}
</code></pre>
<h2>Extensions</h2>
<p>When I had got the initial routing complete I then started going through some simple scenarios and realised I needed to add some extensions to make usability better. At the moment this comes at a cost of dependencies and an opinionated approach.</p>
<h3>Binding &amp; Validating</h3>
<pre><code>this.Put("/actors/{id:int}", async (req, res, routeData) =&gt;
{
    var result = req.BindAndValidate&lt;Actor&gt;();
    if (!result.ValidationResult.IsValid)
    {
        res.StatusCode = 422;
        await res.Negotiate(result.ValidationResult.GetFormattedErrors());
        return;
    }

    //Update the user in your database
    res.StatusCode = 204;
});
</code></pre>
<p>The above code uses FluentValidation under the hood and binds and validates the incoming request body to <code>Actor</code>. The result of BindAndValidate is a Tuple of ValidationResult and T. At the moment the ValidationResult is from FV, so that probably needs abstracting at some point; however, as you can see, you can check if validation passed and, if not, act accordingly. In this example I return the errors from the validation result using an extension <code>GetFormattedErrors</code> and also use a <code>HttpRequest</code> extension that negotiates the result, i.e. if the user asked for JSON with their <code>Accept</code> header they get JSON; if they asked for XML or PDF they get that (as long as an <code>IResponseNegotiator</code> is implemented). Out of the box Botwin will return JSON.</p>
<p>If the user doesn't want to validate incoming data but does want the body deserialized they can simply use <code>req.Bind&lt;Actor&gt;()</code>.</p>
<h3>Global Before &amp; After Hooks</h3>
<p>There may be circumstances where you want to check something in the request before it hits the route handler, or you may want to do something after the route handler has been executed. This can be set up via options:</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
    app.UseBotwin(new BotwinOptions(
        async (ctx) =&gt; { await ctx.Response.WriteAsync("GlobalBefore"); return true; },
        async (ctx) =&gt; await ctx.Response.WriteAsync("GlobalAfter")
    ));
}
</code></pre>
<p>Here we set things up so that for each route it will write to the response body in the before and after hooks. Notice that the before hook returns a boolean to signify whether routing should continue. You may not want to continue the route execution for some reason after inspecting the request in the global before hook, so you can return false.</p>
<h3>Module Before &amp; After Hooks</h3>
<p>Like the global before &amp; after hooks these can be applied at a module level:</p>
<pre><code>public class TestModule : BotwinModule
{
    public TestModule()
    {
        this.Before = async (req, res, routeData) =&gt; { await res.WriteAsync("Before"); return res; };
        this.After = async (req, res, routeData) =&gt; { await res.WriteAsync("After"); };
        this.Get("/", async (request, response, routeData) =&gt; { await response.WriteAsync("Hello"); });
    }
}
</code></pre>
<p>Again, fairly similar to the global hooks, but in the before hook you can return the response object to continue execution or return null to stop the request.</p>
<h3>IStatusCodeHandler</h3>
<p>An implementation of <code>IStatusCodeHandler</code> means you can determine what happens if your route returns a certain status code. ASP.NET Core provides middleware called <code>UseStatusCodePages</code>, but it is not very elegant to use, so I felt this was a cleaner option:</p>
<pre><code>public class ConflictStatusCodeHandler : IStatusCodeHandler
{
    public bool CanHandle(int statusCode)
    {
        return statusCode == 409;
    }

    public async Task Handle(HttpContext ctx)
    {
        await ctx.Response.WriteAsync("Can't we all just get along?");
    }
}
</code></pre>
<p>You can obviously do whatever you want in the Handle method.</p>
<h3>IResponseNegotiator</h3>
<p>Mentioned previously, implementing this interface allows you to handle content negotiation if selected in the route:</p>
<pre><code>public class TestResponseNegotiator : IResponseNegotiator
{
    public bool CanHandle(IList&lt;MediaTypeHeaderValue&gt; accept)
    {
        return accept.Any(x =&gt; x.MediaType.IndexOf("foo/bar", StringComparison.OrdinalIgnoreCase) &gt;= 0);
    }

    public async Task Handle(HttpRequest req, HttpResponse res, object model)
    {
        await res.WriteAsync("FOOBAR");
    }
}
</code></pre>
<p>Obviously here you can make your response return CSV, PDF, etc. If you call <code>response.Negotiate</code> and Botwin can't find a relevant implementation it will default to JSON.</p>
<p>If you explicitly want to return JSON from your route you can use another extension like so:</p>
<pre><code>this.Get("/actors", async (req, res, routeData) =&gt;
{
    var people = actorProvider.Get();
    await res.AsJson(people);
});
</code></pre>
<h3>Dependency Injection</h3>
<p>You can inject dependencies into Botwin modules and these are resolved automatically via the ASP.NET Core built-in DI, so if you use StructureMap, Autofac, etc. plugged into ASP.NET Core, it will work fine:</p>
<pre><code>public class ActorsModule : BotwinModule
{
    public ActorsModule(IActorProvider actorProvider)
    {
        //Do stuff
    }
}
</code></pre>
<h2>Summary</h2>
<p>So what have we got here? This is not Nancy re-imagined on ASP.NET Core; this is me wondering whether I could easily and quickly use some of the lower-level parts of the routing to get Nancy-esque style routing. It runs on pre-release binaries from Microsoft, so just a warning for now! The one thing I have never liked about ASP.NET is the routing, whether that be configured in Global.asax, attribute routing or convention-based methods in controllers. This is not a framework; things like authentication and error handling should be handled by other middleware that comes with ASP.NET Core, but Botwin contains enough functionality to get a decent-sized app running. My commitment to Nancy is still as strong, but these days finding time to contribute to it is difficult, and it kind of makes me sad that there is so little choice of web frameworks for .NET. The performance is very good as it sits directly within the ASP.NET Core pipeline. If you'd like to help out or have some ideas please visit the repo <a href="https://github.com/jchannon/Botwin">here</a> - but today I'm happy to announce Botwin!</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2016/07/13/building-all-current-dotnet-core-projects-vscode/</guid><link>http://blog.jonathanchannon.com/2016/07/13/building-all-current-dotnet-core-projects-vscode/</link><a10:author><a10:name /></a10:author><category>ASP.Net</category><category>VSCode</category><title>Building all and current dotnet core projects in VSCode</title><description><p>As you may or may not know I try to work on OSX as much as possible and with .Net that's quite painful to be honest. Things are moving along nicely with Jetbrains Rider,
VSCode, Xamarin and Omnisharp. I'll be honest, none of them are perfect and I often find myself using Visual Studio in a VM because it just works (yes, it's clunky etc etc).
Recently, VSCode got a 1.3 release with some new features, tabs being one of them. I never really got on with VSCode so dismissed it most of the time, but this new release
opened my eyes a bit more and I thought I'd give it a go. Its C# support now runs on .Net Core RTM and most of my work at the moment is porting projects to .Net Core so it seemed
this would be worthwhile. I've tried to set up keybindings that are the ones I know from Visual Studio and installed a couple of extensions to make things easier and prettier. </p>
<p>As VSCode is language agnostic, the one thing I found a bit off was how to build .Net Core projects. For each project you have, you have to configure a task runner. VSCode tries to
help you here and gives you a few languages to choose from. For .Net Core it creates a <code>dotnet build</code> task. The problem with this is that it runs that command from the workspace root,
i.e. the folder where VSCode is opened. What if you open it from the git root folder and your project(s) are under a src/MyProject folder? It will fail as it can't find project.json.
What you can do is set the <code>cwd</code> to be a specific directory by hardcoding it in the task configuration but that's not great if you have multiple projects. You could use some predefined
variables that VSCode provides, e.g. <code>${fileDirname}</code>, but again if you are in a folder 4 levels deep that won't work either.
</description><pubDate>Tue, 12 Jul 2016 23:00:00 Z</pubDate><a10:updated>2016-07-12T23:00:00Z</a10:updated><a10:content type="html"><p>As you may or may not know I try to work on OSX as much as possible and with .Net that's quite painful to be honest. Things are moving along nicely with Jetbrains Rider,
VSCode, Xamarin and Omnisharp. I'll be honest, none of them are perfect and I often find myself using Visual Studio in a VM because it just works (yes, it's clunky etc etc).
Recently, VSCode got a 1.3 release with some new features, tabs being one of them. I never really got on with VSCode so dismissed it most of the time, but this new release
opened my eyes a bit more and I thought I'd give it a go. Its C# support now runs on .Net Core RTM and most of my work at the moment is porting projects to .Net Core so it seemed
this would be worthwhile. I've tried to set up keybindings that are the ones I know from Visual Studio and installed a couple of extensions to make things easier and prettier. </p>
<p>As VSCode is language agnostic, the one thing I found a bit off was how to build .Net Core projects. For each project you have, you have to configure a task runner. VSCode tries to
help you here and gives you a few languages to choose from. For .Net Core it creates a <code>dotnet build</code> task. The problem with this is that it runs that command from the workspace root,
i.e. the folder where VSCode is opened. What if you open it from the git root folder and your project(s) are under a src/MyProject folder? It will fail as it can't find project.json.
What you can do is set the <code>cwd</code> to be a specific directory by hardcoding it in the task configuration but that's not great if you have multiple projects. You could use some predefined
variables that VSCode provides, e.g. <code>${fileDirname}</code>, but again if you are in a folder 4 levels deep that won't work either.
<!--excerpt-->
I wanted a Build All projects command and a Build Current project command but with the above limitations I set about investigating some terminal commands that could be run to get this to work
and below is what I came up with:</p>
<pre><code>{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "zsh",
"isShellCommand": true,
"showOutput": "always",
"args": [
"-c"
],
"options": {
"cwd": "${fileDirname}"
},
"tasks": [{
"taskName": "Build Current Project",
"suppressTaskName": true,
"isBuildCommand": true,
"args": [
"setopt extended_glob &amp;&amp; print -l (../)#project.json(:h) | xargs dotnet build"
],
"problemMatcher": "$msCompile"
}, {
"taskName": "Build All Projects",
"suppressTaskName": true,
"isBuildCommand": true,
"args": [
"cd ${workspaceRoot} &amp;&amp; dotnet build ./**/**/project.json &amp;&amp; echo Build Completed"
],
"problemMatcher": "$msCompile"
}]
}
</code></pre>
<p><strong>One thing to note, this will only work for OSX/Linux users with ZSH.</strong></p>
<p>So what we have is a build task (the first task) that calls out to <code>zsh</code> with the argument <code>-c</code>, shows the output in the task panel within VSCode and executes within the current file's directory. It then calls <code>setopt extended_glob</code> to turn on ZSH extended globbing, finds the closest parent directory that has a project.json and passes that to <code>xargs</code>, which executes <code>dotnet build</code> with the output from the glob. </p>
<p>We also have another task which will build all projects by changing directory to the workspace root and then running <code>dotnet build</code> with a glob pattern to find all the folders with project.json
inside of them. You will have to change that glob pattern to fit your folder structure but this is what works for the <a href="http://nancyfx.org">NancyFX</a> project.</p>
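<p>If ZSH glob syntax isn't your thing, here is roughly what the two lookups compute, sketched in Python; the helper names and folder layout are mine, purely for illustration:</p>

```python
from pathlib import Path

def nearest_project_dir(start: Path):
    """Build Current Project: walk up from `start` (itself included) and
    return the closest directory that contains a project.json, else None."""
    for candidate in (start, *start.parents):
        if (candidate / "project.json").is_file():
            return candidate
    return None

def all_project_dirs(root: Path):
    """Build All Projects: every directory under `root` holding a project.json.
    (The task's glob is fixed-depth; rglob here searches any depth.)"""
    return sorted(p.parent for p in root.rglob("project.json"))
```

<p>The first helper mirrors the <code>(../)#project.json(:h)</code> expansion; the second mirrors the <code>./**/**/project.json</code> glob.</p>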
<p>To invoke these, press <code>CMD + P</code> and type <code>task</code> followed by a space; VSCode will list your tasks and you can then execute either Build Current Project or Build All Projects.</p>
<p>Have fun!</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2016/06/27/porting-owin-middleware-aspnetcore/</guid><link>http://blog.jonathanchannon.com/2016/06/27/porting-owin-middleware-aspnetcore/</link><a10:author><a10:name /></a10:author><category>ASP.Net</category><category>OSS</category><category>OWIN</category><title>Porting OWIN middleware to ASP.Net Core</title><description><p>In our application at work we make use of various middleware and as we are making everything run on .Net Core the time has come to port said middleware to .Net Core. If you don't already know ASP.Net Core has a bridge that allows you to use OWIN components in an ASP.Net Core application. This will convert the HttpContext into a OWIN environment dictionary on input and then back again on output.</p>
<p>Let's take an example of some middleware:</p>
<pre><code>public class MyMiddleware
{
private readonly Func&lt;IDictionary&lt;string, object&gt;, Task&gt; nextFunc;
private readonly MyMiddlewareOptions options;
public MyMiddleware(Func&lt;IDictionary&lt;string, object&gt;, Task&gt; nextFunc, MyMiddlewareOptions options)
{
this.options = options;
this.nextFunc = nextFunc;
}
public Task Invoke(IDictionary&lt;string, object&gt; environment)
{
//Everything is awesome
return nextFunc(environment);
}
}
public static class MyMiddlewareExtensions
{
public static IAppBuilder UseMyMiddleware(this IAppBuilder app, MyMiddlewareOptions options = null)
{
return app.Use(typeof(MyMiddleware), options);
}
}
</code></pre>
<p></description><pubDate>Sun, 26 Jun 2016 23:00:00 Z</pubDate><a10:updated>2016-06-26T23:00:00Z</a10:updated><a10:content type="html"><p>In our application at work we make use of various middleware and as we are making everything run on .Net Core the time has come to port said middleware to .Net Core. If you don't already know ASP.Net Core has a bridge that allows you to use OWIN components in an ASP.Net Core application. This will convert the HttpContext into a OWIN environment dictionary on input and then back again on output.</p>
<p>Let's take an example of some middleware:</p>
<pre><code>public class MyMiddleware
{
private readonly Func&lt;IDictionary&lt;string, object&gt;, Task&gt; nextFunc;
private readonly MyMiddlewareOptions options;
public MyMiddleware(Func&lt;IDictionary&lt;string, object&gt;, Task&gt; nextFunc, MyMiddlewareOptions options)
{
this.options = options;
this.nextFunc = nextFunc;
}
public Task Invoke(IDictionary&lt;string, object&gt; environment)
{
//Everything is awesome
return nextFunc(environment);
}
}
public static class MyMiddlewareExtensions
{
public static IAppBuilder UseMyMiddleware(this IAppBuilder app, MyMiddlewareOptions options = null)
{
return app.Use(typeof(MyMiddleware), options);
}
}
</code></pre>
<p><!--excerpt-->
Here we see some middleware and an extension so it can be used in an application that uses OWIN. This would most commonly be called in a <code>Startup.cs</code> file like so:</p>
<pre><code>public void Configuration(IAppBuilder app)
{
app.UseMyMiddleware(new MyMiddlewareOptions());
}
</code></pre>
<p>As I said earlier, ASP.Net Core has a bridge to use OWIN components, and as long as your middleware can return a <code>MidFunc</code> there is very little required for you to do; however, for the example above there is a tiny bit more to do.</p>
<h2>ASP.Net Core Startup.cs</h2>
<pre><code>public void Configure(IApplicationBuilder app)
{
//Use the ASP.Net Core OWIN bridge
app.UseOwin(x =&gt; x.Invoke(MyMiddleware.ReturnAppFunc()));
}
</code></pre>
<p>Above shows how to use the OWIN bridge if your middleware can already return a <code>MidFunc</code>. For those unclear, here's what that type looks like: <code>System.Func&lt;System.Func&lt;System.Collections.Generic.IDictionary&lt;string, object&gt;, System.Threading.Tasks.Task&gt;, System.Func&lt;System.Collections.Generic.IDictionary&lt;string, object&gt;, System.Threading.Tasks.Task&gt;&gt;</code></p>
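<p>If the nested generics are hard to parse, the shape is easier to see in a dynamic language. Here is a rough Python analogue (not OWIN itself, just the same functional shape): an AppFunc takes the environment dictionary and returns a task, and a MidFunc takes the next AppFunc and returns a new AppFunc that wraps it:</p>

```python
import asyncio

# AppFunc: environment dict -> awaitable
# MidFunc: AppFunc -> AppFunc, i.e. middleware takes `next` and returns a wrapper.

def my_middleware(next_func):
    async def invoke(environment):
        environment.setdefault("trace", []).append("my_middleware")
        await next_func(environment)  # hand control to the next component
    return invoke

async def app(environment):  # the terminal AppFunc at the end of the pipeline
    environment.setdefault("trace", []).append("app")

env = {}
asyncio.run(my_middleware(app)(env))
print(env["trace"])  # ['my_middleware', 'app']
```

<p>The extension method below does exactly this application of a MidFunc to <code>next</code>, only with the C# types spelled out.</p>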
<p>Using our sample middleware above the extension class would now have to look like this:</p>
<pre><code>using System;
using MidFunc = System.Func&lt;System.Func&lt;System.Collections.Generic.IDictionary&lt;string, object&gt;,
System.Threading.Tasks.Task&gt;, System.Func&lt;System.Collections.Generic.IDictionary&lt;string, object&gt;,
System.Threading.Tasks.Task&gt;&gt;;
public static class MyMiddlewareExtensions
{
public static Action&lt;MidFunc&gt; UseMyMiddleware(this Action&lt;MidFunc&gt; builder, MyMiddlewareOptions options = null)
{
builder(next =&gt; new MyMiddleware(next, options).Invoke);
return builder;
}
}
</code></pre>
<p>This can now be used in a ASP.NET Core Startup class like so:</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
//Use the ASP.Net Core OWIN bridge
app.UseOwin(x =&gt;
{
x.UseMyMiddleware(new MyMiddlewareOptions());
x.UseNancy();
});
}
</code></pre>
<p>Tada! Also note that I only call <code>UseOwin</code> once; I don't need to call it for every OWIN middleware I have. Also just to be clear, if you want your middleware to run on .Net Core you will still have to make sure your middleware is compatible. The above shows how you can make your OWIN middleware run in the ASP.Net Core pipeline even if it's on .Net 4.5, but if it is compatible with .Net Core you can target that from your application and bingo! That is exactly what I have done with my middleware. </p>
<p>Happy coding!</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2016/04/28/what-is-a-hypermedia-client/</guid><link>http://blog.jonathanchannon.com/2016/04/28/what-is-a-hypermedia-client/</link><a10:author><a10:name /></a10:author><category>hypermedia</category><category>REST</category><title>What is a Hypermedia client?</title><description><p>I've been interested in Hypermedia for quite a while. I bugged <a href="http://twitter.com/darrelmiller">Darrel Miller</a> and <a href="http://twitter.com/gblock">Glenn Block</a> (Glenn Miller) so much so they created a <a href="https://www.youtube.com/playlist?list=PLbc9sDUxHqX60XJaTnNnKvI2mRighInDW">YouTube show</a> called "In The Mood for HTTP". I bought their book <a href="http://webapibook.net/">"Designing Evolvable Web APIs with ASP.NET"</a>, I am waiting for <a href="http://shop.oreilly.com/product/0636920037958.do">"RESTful Web Clients Enabling Reuse Through Hypermedia"</a> by <a href="http://twitter.com/mamund">Mike Amundsen</a>, I have <a href="http://blog.jonathanchannon.com/2015/08/07/hypermedia-and-nancyfx/index.html">written</a> about how to return different media types with NancyFX and I am looking at going to <a href="http://2016.uk.restfest.org/">restfest.org</a> in Edinburgh this year, a REST conference. </p>
<p>The one thing that I have always discussed with Glenn Miller is that there seems to be, from my perception, a lot of emphasis on the server returning media types (HAL, Siren, JSON-LD, Collection+Json) and very little information about hypermedia clients. The information that I have come across, which is very little (again, could be due to my lack of Google-fu), seems to generate a misconception. The misconception I have come across is that if you have an API that returns hypermedia then your client should be able to magically work with it. It should know everything that is required to browse the API and discover its way around. I never quite grasped how that was supposed to happen and was seriously confused. I had seen a video that showed that when the server returned its responses, using Javascript it would loop over all the properties in the payload and then display them in a HTML page. The emphasis was that if new bits of data were added then they would appear magically in the UI. That seemed like a nice feature but I still didn't quite get how it went from hitting the root of the API to finding its way into the guts of it. The server would return links in the payload with "rels" and I was baffled how this magic client knew what to do with a rel or even how it knew what rels it would return.<br />
</description><pubDate>Wed, 27 Apr 2016 23:00:00 Z</pubDate><a10:updated>2016-04-27T23:00:00Z</a10:updated><a10:content type="html"><p>I've been interested in Hypermedia for quite a while. I bugged <a href="http://twitter.com/darrelmiller">Darrel Miller</a> and <a href="http://twitter.com/gblock">Glenn Block</a> (Glenn Miller) so much so they created a <a href="https://www.youtube.com/playlist?list=PLbc9sDUxHqX60XJaTnNnKvI2mRighInDW">YouTube show</a> called "In The Mood for HTTP". I bought their book <a href="http://webapibook.net/">"Designing Evolvable Web APIs with ASP.NET"</a>, I am waiting for <a href="http://shop.oreilly.com/product/0636920037958.do">"RESTful Web Clients Enabling Reuse Through Hypermedia"</a> by <a href="http://twitter.com/mamund">Mike Amundsen</a>, I have <a href="http://blog.jonathanchannon.com/2015/08/07/hypermedia-and-nancyfx/index.html">written</a> about how to return different media types with NancyFX and I am looking at going to <a href="http://2016.uk.restfest.org/">restfest.org</a> in Edinburgh this year, a REST conference. </p>
<p>The one thing that I have always discussed with Glenn Miller is that there seems to be, from my perception, a lot of emphasis on the server returning media types (HAL, Siren, JSON-LD, Collection+Json) and very little information about hypermedia clients. The information that I have come across, which is very little (again, could be due to my lack of Google-fu), seems to generate a misconception. The misconception I have come across is that if you have an API that returns hypermedia then your client should be able to magically work with it. It should know everything that is required to browse the API and discover its way around. I never quite grasped how that was supposed to happen and was seriously confused. I had seen a video that showed that when the server returned its responses, using Javascript it would loop over all the properties in the payload and then display them in a HTML page. The emphasis was that if new bits of data were added then they would appear magically in the UI. That seemed like a nice feature but I still didn't quite get how it went from hitting the root of the API to finding its way into the guts of it. The server would return links in the payload with "rels" and I was baffled how this magic client knew what to do with a rel or even how it knew what rels it would return.<br />
<!--excerpt-->
After speaking to Darrel he told me that's the one thing clients do know, i.e. what rels an API should return. See <a href="https://twitter.com/jchannon/status/719486875484991488">here</a>. I was still confused by this; I assumed that the client would have an in-memory set of rels that it knew about and therefore understood, but then I was confused about what would happen if a new rel was introduced by the API: how would the client know what to do? </p>
<p>I had it in my head, maybe from some of the hypermedia client articles and videos that I'd read/seen, that a hypermedia client was some kind of magic client that just knew how to navigate an API. I then came across <a href="https://jeffknupp.com/blog/2014/06/03/why-i-hate-hateoas/">this</a> article written by someone who also had the notion that a client is some magical thing, and he states: <code>a single client that could make use of *every single (properly built) REST API in existence* without requiring documentation</code>. At this point I kind of agreed with him: where are the magical clients and libraries that I can just plug into my API? They must exist as people keep going on about how if you have a client it should know how to work with your API. I then came across <a href="https://signalvnoise.com/posts/3373-getting-hyper-about-hypermedia-apis#comments">this</a> article; it also poo-poos the idea of hypermedia and magic clients, but then I started to read the comments and saw comments from <a href="http://twitter.com/gblock">Glenn Block</a>, <a href="http://twitter.com/mamund">Mike Amundsen</a> and <a href="https://twitter.com/mikekelly85">Mike Kelly</a>, the heavy hitters of the API world, and it clicked from one of Glenn's replies. </p>
<p>There is no magic client. It's that simple. Yes, they could potentially loop over a resource in a response and display data that could in time be added to by the server, but the way the client navigates the API by pre-defined rels is because the developer who is using it has documentation about the rels and the payload. Glenn's comment, <code>"Hypermedia api’s don’t prevent documentation, that is a central part. The documentation centers around the link rels and payload, not the uri structure."</code>, also confirms what Darrel said; I just didn't get it at the time. </p>
<p>Clients know about rels, or to be more precise, the developer writing the program that uses a client library to navigate an API has documentation about the rels the API uses and the payloads it returns. So you explicitly put in your code "execute a request to the URL returned in the link that has the rel X"; there is no discovery in that sense, and it doesn't magically find its way around as such. It doesn't hardcode URLs, which is the one thing where hypermedia clients aim to succeed, i.e. no hard coding of URLs to follow, and new features and/or rels will need developer interaction to change how the client uses them. Glenn sums it up quite well in his quote (I tried to link to the comment but that's not available):</p>
<blockquote>
<p>Clients have knowledge of what to expect. In the real world the average hypermedia client knows about certain types of links that may come back. That is all the rel is, and identifier that the client can use to match up against links it cares about. If new links come back that it doesn’t know about, well it will ignore those links. Although the client knows about a set of rels, it does not know if they will or will not be present. The advantage is that logic sits with the server where it can, and often does at some point change. The change might be due to several reasons, including scale as the server can tell the client go get that resource over here rather than where you got it last time. The client also doesn’t know or care about the URL, all it knows is if the rel is this, and that is a resource I want to access, follow that URL.</p>
<p>As to the new links that were returned which that client ignored, newer clients can come along and they are coded to understand those rels. They know what to do with it so they follow it.</p>
<p>In both cases I mentioned there’s no magic and there’s still hard coding of some logic. It’s just the type of logic is different than it is today. Instead of hardcoding uris, and logic of whether or not those resources can be called, the logic is looking for the presence of rels, and decided which one to access. And often the decision of what to do is still decided by a human being, or it maybe be a combination where the machine first looks at the available set and if it’s logic allows it to proceed it does, otherwise it gets interaction from a human.</p>
</blockquote>
<p>So there it is: hypermedia clients are not some magic tool/library that can see your API and go "I got this"; the developer has to deal with new features. However, the key is that if the server modifies existing URLs in a rel, the client will not have to change because it hasn't hardcoded anything inside it to go to a certain URL. New features in an API should in theory not break existing clients.</p>
<p>Bringing this back to all the different media types that the server can return and all the discussion I see around them. In simple terms the client will always look for a rel, that is common across all media types. The difference comes where the client looks for them in the response payload. Each media type has its own format and therefore where the rels exist. One thing to clarify, the different media types all contain rels for link objects but media types like Siren and Collection+Json have objects in the payload that describe how to add/modify data that may not have a rel but will have a href to show where to send the request to add/modify data.</p>
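<p>To make the "clients know rels, not URLs" idea concrete, here is a toy sketch; the HAL-style payload and the helper are invented for illustration and are not from any particular client library:</p>

```python
def link_for(payload, rel):
    """Return the href of the link with the given rel, or None when the
    server didn't include that rel -- unknown rels are simply ignored."""
    link = payload.get("_links", {}).get(rel)
    return link.get("href") if link else None

# The client hardcodes the rel names it cares about, never the URLs,
# so the server stays free to move resources to different URLs later.
response = {
    "_links": {
        "self": {"href": "/actors"},
        "next": {"href": "/actors?page=2"},
    },
    "actors": ["Tom Hanks"],
}

print(link_for(response, "next"))      # /actors?page=2
print(link_for(response, "payments"))  # None
```

<p>The only thing that changes between media types is where in the payload a sketch like this looks for its links.</p>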
<p>My next plan is to write a simple API and a simple client using libraries that know how to deal with the media type I use, and to navigate it to see how I get on. I've already been told that dynamic languages are more suited to clients than statically typed languages, due to static languages requiring a binding of a payload to a resource/model class. With dynamic languages you don't need to map a payload to a class; you can just use a property from the payload directly without the need to bind to a class of 10 properties and then only use 2 (although in theory you wouldn't need to create the 8 properties if you weren't going to use them).</p>
<p>Anyway I hope that has highlighted and answered anyone's question of what a hypermedia client is. If you read this and thought "jeez you're a dumbass, you still don't get it" feel free to let me know in the comments although if I still haven't grasped it by this point, I might cry a little inside. Thanks for reading. </p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2016/03/30/vq-communications-funds-coreclr-nancyfx/</guid><link>http://blog.jonathanchannon.com/2016/03/30/vq-communications-funds-coreclr-nancyfx/</link><a10:author><a10:name /></a10:author><category>.net</category><category>community</category><category>coreclr</category><category>nancyfx</category><category>oss</category><title>VQ Communications Funds NancyFX to run on CoreCLR</title><description><p>Nearly 2 years ago I was employed by <a href="http://www.vqcomms.com">VQ Communications</a> primarily because of my open source contributions to <a href="http://nancyfx.org">NancyFX</a>. They had started work on a v2 of their flagship product and had begun work with Nancy and needed someone to help drive a HTTP API and architect a scaling solution as their v2 product was addressing a requirement they had for it to cope with large volumes of traffic. Also of interest to me was their aim to deliver all of this as a black box appliance to customers on a VM running a custom embedded version of Linux using Postgres as the database. I would work four days a week remotely and go into the office one day a week. They already had completely remote employees and since I have been there they have taken on more. There are lots more juicy technical examples in the stack I could go into; however, this is not the point of this post.</p>
</description><pubDate>Tue, 29 Mar 2016 23:00:00 Z</pubDate><a10:updated>2016-03-29T23:00:00Z</a10:updated><a10:content type="html"><p>Nearly 2 years ago I was employed by <a href="http://www.vqcomms.com">VQ Communications</a> primarily because of my open source contributions to <a href="http://nancyfx.org">NancyFX</a>. They had started work on a v2 of their flagship product and had begun work with Nancy and needed someone to help drive a HTTP API and architect a scaling solution as their v2 product was addressing a requirement they had for it to cope with large volumes of traffic. Also of interest to me was their aim to deliver all of this as a black box appliance to customers on a VM running a custom embedded version of Linux using Postgres as the database. I would work four days a week remotely and go into the office one day a week. They already had completely remote employees and since I have been there they have taken on more. There are lots more juicy technical examples in the stack I could go into; however, this is not the point of this post.</p>
<!--excerpt-->
<p>Two years on and our API has developed well, our v2 version goes from strength to strength thanks to a great team and we are still using Nancy however, we got to a point at the end of 2015 where we were seeing occasional seg faults when running under <a href="http://www.mono-project.com/">Mono</a>. We spent a lot of time delving deep into the issues, finding different behaviour under different Linux kernel versions and even submitted a <code>keep-alive</code> HTTP PR to Mono to try and fix it. </p>
<p>Around the same time I was involved with <a href="http://omnisharp.net">OmniSharp</a> and trying to make developing .Net applications in Sublime Text a viable thing on OSX. It worked well up to a point. Whilst this was going on, Microsoft announced that .Net was going open source and that they were working on CoreFX, a set of base class libraries that would work cross platform. This was very impressive. One could also interpret this as fate. </p>
<p>A few weeks after the MS announcement, <a href="http://twitter.com/thecodejunkie">@thecodejunkie</a> released a <a href="http://thecodejunkie.com/2015/11/27/support-the-development-of-nancy-financially/">blog post</a> asking the community to help fund the development of Nancy. He, I and the other folks who help maintain and drive Nancy forward had always done this in our own time, but we were now asking for some donations to keep the project going. It was felt that if companies are using and shipping applications built on top of Nancy it would be nice if they could contribute pull requests and/or donate some money to the project. (We have over 200 contributors to Nancy and we get some great PRs from the community, but I assume the majority are from people using Nancy not on company time.) Whilst ASP.Net has the backing of a multi-billion-dollar company, Nancy doesn't, so to keep it going, to give the maintainers a drive and, in the big picture, to keep the project open source and give .Net developers a choice when developing web-based applications, it would be great if users could see the benefits of donating a few dollars here and there. Four months on from the blog post, Nancy has received $50. When the blog post came out, I sent it to my boss as a fire-and-forget email; what happened next I hope will be an example for other businesses using open source software.</p>
<p>On my next trip into the office, myself and my boss were having a chat about a few things and the topic of donating some funds to Nancy came up. We discussed a couple of options and over the next two weeks or so we had a couple more discussions but by the end of it we had decided that what a good thing to do would be to pay to get Nancy running on CoreCLR so we could use it on our Linux system. One thing to note is that I knew <a href="http://twitter.com/thecodejunkie">@thecodejunkie</a> was in a position where the company he worked for, <a href="http://tretton37.com">tretton37</a> would accept contract work for him to work on Nancy (see his blog post <a href="http://thecodejunkie.com/2015/08/28/i-am-now-taking-contract-work-for-nancy/">here</a>).</p>
<p>VQ felt that by funding this, it would address issues we had on Mono and therefore give value to the company whilst also allowing them to give back to the community.</p>
<p>We at VQ have other code, not just the API so we embarked on making our other code CoreCLR compatible and over time record the advantages/disadvantages of it running on this platform. This is still on-going but initial feedback showed all good things so there was no turning back.</p>
<p>We at Nancy had a call to discuss how our plans could be met with this funding. We already had a big list of things we wanted to get into a v2 of Nancy (current version 1.x), so what could we do about them if we were to accept funding to get Nancy on CoreCLR? One thing we were adamant about was that even with funding, we were not prepared to introduce something that did not align with the vision of how we saw Nancy running. Luckily this was never going to be a problem as VQ simply wanted it to run as it did on Mono &amp; .Net. We decided we could trim our list of features for a v2, but we would still need to get some things done for v2 which would enable CoreCLR development. The CoreCLR work was not a VQ, me and <a href="http://twitter.com/thecodejunkie">@thecodejunkie</a> thing only; nothing in Nancy's contribution workflow would change, it was simply that there were people working on Nancy full time. </p>
<p>VQ agreed that it would fund the development of Nancy on CoreCLR from February 1st 2016 initially until April 30th 2016, but if another month was required so be it. We are now two months in and Nancy runs on CoreCLR on RC1 packages. RC2 will introduce some major changes so the plan is we get onto that as soon as it's released. Nancy's unit tests and Fluent Validation library are still not compatible with CoreCLR as they are dependent on upstream sources that are still in the process of transitioning their code to CoreCLR. Nicely, some work has been done by Microsoft in this regard. Nancy reached out to Microsoft informing them of VQ's plan to move their code and fund Nancy onto CoreCLR, and they have helped with discussions surrounding <a href="https://github.com/castleproject/Core/issues/90">Castle.Core</a> and code contributions to repositories such as <a href="https://github.com/FakeItEasy/FakeItEasy/pull/617">FakeItEasy</a>, so this has been welcomed greatly.</p>
<p>Whilst we wait for RC2, <a href="http://twitter.com/thecodejunkie">@thecodejunkie</a> is working on performance improvements to Nancy and as a team Nancy is looking for feedback for a <a href="https://www.nuget.org/packages/Nancy/2.0.0-alpha">v2-alpha release</a> that has been made possible thanks to VQ.</p>
<p>From an open source contributor's perspective it's great to see a company out there with the vision of funding a project to enable it to move forward and address needs they have as a company. This benefits the open source project as well as the company.</p>
<p>From the perspective of an employee there were no day-long meetings discussing the implications of licences or what it would mean for support, and no legal department involvement. It was simple: the company used an open source project, the company had a vision for their product that the open source project did not yet meet, so rather than waiting or looking to use another project, why not fund that development to meet the requirements?</p>
<p>It can be that simple. Does your company hit a bug with an open source project? Allow your developers to spend time and submit a PR to fix the issue. Does your company have a vision that an open source project could help with? Fund that project to make it a reality.</p>
<p>Whilst each open source project is unique, and how you enable full time work on it and how companies execute the funding will all differ, in essence I believe VQ has set a precedent showing it is possible for companies to fund open source projects. This doesn't mean companies funding projects kill people's passion for contributing to open source; it means passionate developers can contribute as they do now, and if there are companies out there able to contribute by funding, or by having developers contribute during working hours, this can only be a good thing for the project. It's not just a discussion for a couple of developers in the company to have over the water cooler that never materialises into anything, and it doesn't mean months on end checking on potential legal/licensing/support issues; it's something any company can do if they really want to. Just to be clear, however, in case you're screaming "what a cavalier attitude": VQ were not cavalier in their approach to this. What we did was have a balanced discussion. Things like licence issues pretty much went away at the start, as Nancy is MIT licensed and we were happy to donate any work paid for by us to the Nancy team; there were no legal issues, and any other issues that were raised were either discussed and resolved or we checked with <a href="http://twitter.com/thecodejunkie">@thecodejunkie</a> for clarification. I'm just highlighting that each circumstance is different, but luckily for VQ and Nancy it all came together nicely, and I believe there are many companies and open source projects that could have this relationship.</p>
<p>Check out the <a href="http://www.vqcomms.com/blog/2016-03-30/nancyfx-coreclr/">blog post</a> from VQ about how the process above came about and what it means to them, and the <a href="https://tretton37.com/blog/tretton37_open_source">blog post</a> from tretton37 about how the relationship between VQ and NancyFX has evolved.</p>
<p>If your company has a problem, if no one else can help, and if you can find them, maybe you can fund an open source project.</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2016/02/11/profiling-coreclr-application/</guid><link>http://blog.jonathanchannon.com/2016/02/11/profiling-coreclr-application/</link><a10:author><a10:name /></a10:author><category>.net</category><category>c#</category><category>coreclr</category><category>oss</category><title>Profiling a CoreCLR application with dotMemory</title><description><p>I had ported an application over to CoreCLR (that's a whole other blog post) and, along with my colleague <a href="http://twitter.com/yantrio">James Humphries</a>, put it in a Docker image and sat back and watched it do its thing. After 6 hours of running, the Docker container had crashed. Ah nuts, we thought, so we pulled up the logs from Docker and the last line looked like this: <code>2016-02-10T20:18:31.728783069Z Killed</code>. I'm pretty sure when you have a log entry with <code>Killed</code> in it, things can't be good. To the interweb...</p>
<p>I opened up the CoreFX repository on Github to search for the term <code>Killed</code>; there were 2 comments but nothing that was logged out anywhere. I then Googled for Docker and "killed" and found an entry where someone else had spotted the same thing on their container; the feedback was essentially that it had probably run out of memory.</p>
</description><pubDate>Thu, 11 Feb 2016 00:00:00 Z</pubDate><a10:updated>2016-02-11T00:00:00Z</a10:updated><a10:content type="html"><p>I had ported an application over to CoreCLR (that's a whole other blog post) and, along with my colleague <a href="http://twitter.com/yantrio">James Humphries</a>, put it in a Docker image and sat back and watched it do its thing. After 6 hours of running, the Docker container had crashed. Ah nuts, we thought, so we pulled up the logs from Docker and the last line looked like this: <code>2016-02-10T20:18:31.728783069Z Killed</code>. I'm pretty sure when you have a log entry with <code>Killed</code> in it, things can't be good. To the interweb...</p>
<p>I opened up the CoreFX repository on Github to search for the term <code>Killed</code>; there were 2 comments but nothing that was logged out anywhere. I then Googled for Docker and "killed" and found an entry where someone else had spotted the same thing on their container; the feedback was essentially that it had probably run out of memory.</p>
<!--excerpt-->
<p>I had spotted when I was debugging the application that memory usage was creeping up relatively slowly, but assumed all would be OK when garbage collection kicked in. Well, it seems it's true what they say about assumptions, as running it for 6 hours caused the container to be killed. I had mentioned to <a href="http://twitter.com/citizenmatt">Matt Ellis</a>, he of Jetbrains fame, a few months ago that having profiling tools such as dotMemory &amp; dotTrace would be awesome for apps running on Linux. Last I heard there may be plans to have a remote agent that could run on Linux and send the info back to dotMemory running on Windows, as it's all in WPF I believe. Anyhow, he pointed me to a blog post Jetbrains had done. This illustrated dotMemory profiling a <code>*.exe</code> that had been built for CoreCLR, but I wanted to do it from Visual Studio and not have to produce a binary. Remember, CoreCLR apps don't produce binaries unless you explicitly tell dnx/dotnet cli to do so. Anyhow, long story short, we couldn't get VS to launch the startup project and monitor my app. I assume they will solve this issue in the long run.</p>
<p>To get it working, we opened dotMemory and told it to profile a standalone application. That application was dnx.exe. Depending on your runtime target on Windows, this executable will be in either <code>C:\Program Files\DNX\runtimes</code> or <code>C:\Users\[USERNAME]\.dnx\runtimes</code>. The next thing to do is click the Advanced option. Here you can enter any arguments, which in this case is the command you use in <code>project.json</code> to start your application. The final option is the working directory, which for me was where the source code was. So you should have something looking like this:</p>
<p><img src="http://blog.jonathanchannon.com/images/blogpostimages/dotmemoryrun.png" alt="dotMemory Usage" /></p>
<p>Once you click Run it will start your application up and begin doing its thing. What we did was create a snapshot as soon as the app seemed to have started up, wait 60 seconds, take another snapshot, wait 120 seconds and repeat. You can then compare the snapshots by selecting the snapshots checkbox and clicking Compare. You can then see the differences.</p>
<p><img src="http://blog.jonathanchannon.com/images/blogpostimages/dotmemorysnapshotselect.png" alt="dotMemory Snapshots" />
<img src="http://blog.jonathanchannon.com/images/blogpostimages/dotmemorysnapshot.png" alt="dotMemory Differences" />
<img src="http://blog.jonathanchannon.com/images/blogpostimages/dotmemorydiff.png" alt="dotMemory Drilldown" /></p>
<p>On the comparison view I was a bit confused by all this information and fumbled around for a bit, but we came across the Back Traces option, picked the object with the largest amount of allocated bytes and drilled down until we saw something that might have been in my code. As we drilled down past system classes we knew roughly where in the app it would be, until we finally hit the class and method name.</p>
<p>Now, being a guy on the cutting edge, I knew the issue was with this new piece of technology called XML (you should try it). We looked in the code and found it was doing this, which was getting called every second.</p>
<pre><code>var stream = new MemoryStream(Encoding.UTF8.GetBytes(myXml));
var sr = new StreamReader(stream);
var serializer = new XmlSerializer(typeof(MyObject), new XmlRootAttribute("rootNode"));
var obj = (MyObject)serializer.Deserialize(stream);
</code></pre>
<p>So did you spot it? Clue: it's not the fact that the <code>MemoryStream</code> or <code>StreamReader</code> isn't within a <code>using</code> statement (although that was fixed).</p>
<p>From the dotMemory data it said it was the <code>XmlSerializer</code> constructor. I had a look at that object and there was no <code>Dispose</code> method, so I decided to create a static instance of it at the top of the class to save recreating it every time.</p>
<p>I made the changes, put them into a new docker image and watched the memory usage of my app with <a href="https://t.co/gR1uTzKCwq">cAdvisor</a> and bingo, rock solid and stable memory usage!</p>
<p>After a quick Google just out of curiosity, I found the exact same issue on Stack Overflow, and luckily the answer was what we had decided to do with the static instance. Basically, the <code>XmlSerializer</code> constructor we were using generates a dynamic assembly on every call, and generated assemblies can never be collected, hence the memory issues we saw.</p>
<p>Anyway hope that helps you on the path to profiling your CoreCLR app and hopefully things with Jetbrains will improve so we can profile apps running on Linux.</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2015/11/16/content-negotiation-golang/</guid><link>http://blog.jonathanchannon.com/2015/11/16/content-negotiation-golang/</link><a10:author><a10:name /></a10:author><category>golang</category><category>nancyfx</category><category>oss</category><title>Introducing Negotiator - a GoLang content negotiation library</title><description><p>In my continued experience learning GoLang I started looking at how best to use it when dealing with HTTP. The idiomatic way to use GoLang and HTTP is to use the standard library, which keeps things minimal, but there are a few features missing. The first thing is a router. OOTB GoLang doesn't have a router, and the majority seem to suggest using a package called Mux from Gorilla Toolkit, a set of libraries that aims to improve on Go's standard library. After having a play with it I didn't really warm to it, so I spent some time looking into the alternatives (and there are plenty!) and eventually decided upon <a href="https://goji.io">Goji</a>.</p>
<p>Once I had started using Goji I then wanted to handle content negotiation in my HTTP handler. As I said earlier, GoLang is minimal in its offerings OOTB and this is a good thing. Just for the record, there are a few frameworks out there if you want/need an all-encompassing framework, such as Martini, Revel and Echo. These tend to bend the idioms of GoLang a bit, and even the author of Martini blogged on this fact due to feedback from the community that, although its capabilities are great, they aren't idiomatic Go.
</description><pubDate>Mon, 16 Nov 2015 00:00:00 Z</pubDate><a10:updated>2015-11-16T00:00:00Z</a10:updated><a10:content type="html"><p>In my continued experience learning GoLang I started looking at how best to use it when dealing with HTTP. The idiomatic way to use GoLang and HTTP is to use the standard library, which keeps things minimal, but there are a few features missing. The first thing is a router. OOTB GoLang doesn't have a router, and the majority seem to suggest using a package called Mux from Gorilla Toolkit, a set of libraries that aims to improve on Go's standard library. After having a play with it I didn't really warm to it, so I spent some time looking into the alternatives (and there are plenty!) and eventually decided upon <a href="https://goji.io">Goji</a>.</p>
<p>Once I had started using Goji I then wanted to handle content negotiation in my HTTP handler. As I said earlier, GoLang is minimal in its offerings OOTB and this is a good thing. Just for the record, there are a few frameworks out there if you want/need an all-encompassing framework, such as Martini, Revel and Echo. These tend to bend the idioms of GoLang a bit, and even the author of Martini blogged on this fact due to feedback from the community that, although its capabilities are great, they aren't idiomatic Go.
<!--excerpt--></p>
<h3>Introducing Negotiator</h3>
<p>After realising that Goji didn't have content negotiation, seeing as it's just a router (although there are Goji-compatible middleware, which in turn are standard-library compatible), I started playing with how to implement conneg.</p>
<p>My first attempt was a piece of middleware that allowed the request to go to Goji; on the way back up it would interrogate the context for a model which the HTTP handler would have inserted, then interrogate the Accept header and write out the JSON/XML to the response.</p>
<pre><code>//***** HTTP Handler *****
func HelloWorldHTTPHandler(ctx web.C, w http.ResponseWriter, req *http.Request) {
    user := &amp;User{"Joe", "Bloggs"}
    ctx.Env["model"] = user
}

//***** First stab at content negotiation middleware *****
package conneg

import (
    "encoding/json"
    "encoding/xml"
    "net/http"

    "github.com/zenazn/goji/web"
)

func Conneg(c *web.C, h http.Handler) http.Handler {
    fn := func(w http.ResponseWriter, r *http.Request) {
        h.ServeHTTP(w, r)
        accept := r.Header.Get("Accept")
        if model := c.Env["model"]; model != nil {
            switch accept {
            case "application/json":
                w.Header().Set("Content-Type", "application/json")
                js, err := json.Marshal(model)
                if err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                if statuscode := c.Env["statuscode"]; statuscode != nil {
                    w.WriteHeader(statuscode.(int))
                }
                w.Write(js)
            case "application/xml":
                x, err := xml.MarshalIndent(model, "", " ")
                if err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                w.Header().Set("Content-Type", "application/xml")
                w.Write(x)
            }
        }
    }
    return http.HandlerFunc(fn)
}
</code></pre>
<p>As you can see it's pretty rudimentary but does the job. However, if I developed multiple web applications I would have to copy and paste this into every app that I wrote. I would also have to add to the switch statement for every media type I wanted to handle.</p>
<p>I wanted to write a library that I could reference for every web application, separate out each response processor instead of having a switch statement, have the ability to write new response processors that conformed to an interface plus get more experience with Go and of course make it OSS.</p>
<p>To cut a long story short, if you install a reference to <code>github.com/jchannon/negotiator</code> you can now have a HTTP handler like so:</p>
<pre><code>func getUser(w http.ResponseWriter, req *http.Request) {
    user := &amp;User{"Joe", "Bloggs"}
    negotiator.Negotiate(w, req, user)
}
</code></pre>
<p>This in my humble opinion keeps things pretty tidy. If you want to extend the base functionality of JSON/XML handling you can implement this interface for your own response processor:</p>
<pre><code>type ResponseProcessor interface {
    CanProcess(mediaRange string) bool
    Process(w http.ResponseWriter, model interface{})
}
</code></pre>
<p>CanProcess is called when <code>negotiator</code> loops over the media types in the Accept header. This loop is ordered by the weighted q value in the Accept header, e.g. <code>Accept: application/json,application/xml;q=0.8,text/plain;q=0.5</code>, thanks to some great work by <a href="https://twitter.com/pdoh00">Phil Cleveland</a> who helped with writing <code>negotiator</code> (note: if there is no Accept header or no relevant response processor then <code>negotiator</code> will return a 406). The response processor returns a boolean saying whether it can handle the current media type. If it returns true then <code>Process</code> is called, and it handles writing the body to the response in the format applicable to that response processor.</p>
<p>To add your new custom processor to <code>negotiator</code>, simply pass it to the <code>New</code> method.</p>
<pre><code>func customHandler(w http.ResponseWriter, req *http.Request) {
    user := &amp;user{"Joe", "Bloggs"}
    textplainNegotiator := negotiator.New(&amp;PlainTextResponseProcessor{})
    textplainNegotiator.Negotiate(w, req, user)
}

type PlainTextResponseProcessor struct {
}

func (*PlainTextResponseProcessor) CanProcess(mediaRange string) bool {
    return strings.EqualFold(mediaRange, "text/plain")
}

func (*PlainTextResponseProcessor) Process(w http.ResponseWriter, model interface{}) {
    w.Header().Set("Content-Type", "text/plain")
    val := reflect.ValueOf(model).Elem()
    for i := 0; i &lt; val.NumField(); i++ {
        valueField := val.Field(i).String()
        typeField := val.Type().Field(i)
        w.Write([]byte(typeField.Name + " : " + valueField + " "))
    }
}
</code></pre>
<p>This is a slightly contrived example, but you can see what needs to be done to add your own response processor for it to be used by <code>negotiator</code>. One thing I don't like about this is that you need to call <code>New</code> in every handler; however, you may only want this processor in certain route handlers in your application. Going back to my first example, what you could do is take the pointer returned from calling <code>New</code>, insert it into the HTTP context, and then in the handlers pull it out and call <code>Negotiate</code>. You can see a demo of this in the Github repo <a href="https://github.com/jchannon/negotiator/blob/master/demo/main.go">here</a></p>
<p>Hopefully other Go developers will find this library useful. I'm happy with what I've done here; it's allowed me to learn more Go and interact with the community, and I'm also pretty chuffed that it's got 100% unit test coverage according to the built-in golang tools. I'm very impressed with the receptiveness of the Go community and the amount of libraries and blog posts out there to learn from. I've managed to pick up Go pretty easily and I'm really loving it. Another big thanks to <a href="https://twitter.com/pdoh00">Phil Cleveland</a>, another .Net developer trying to pick up Go, for his help with <code>negotiator</code>.</p>
<p>For more information check out the <a href="http://github.com/jchannon/negotiator">Github repository</a> and the <code>godoc</code> documentation <a href="https://godoc.org/github.com/jchannon/negotiator">here</a></p>
<p>Shout out to <a href="http://nancyfx.org">NancyFX</a>, my first love, for inspiring the design of the ResponseProcessor interface.</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2015/10/27/introducting-pogo-golang-twitter-pocket/</guid><link>http://blog.jonathanchannon.com/2015/10/27/introducting-pogo-golang-twitter-pocket/</link><a10:author><a10:name /></a10:author><category>golang</category><category>oss</category><title>Introducing PoGo - a GoLang, Twitter favourites to Pocket importer</title><description><p>I've always kept myself up to date with the latest languages arriving on the scene; I've spent time in the past learning Node, and last year I learnt Python by doing the Omnisharp plugin for Sublime. I have recently been looking for a static language I can transfer my C# skills to, and I had narrowed it down to three: Swift, Kotlin and GoLang. I started out with Kotlin, setting up a dev environment with IntelliJ and running the koans that Jetbrains advise you step through to pick up the language. Whilst it seemed relatively straightforward, I got "noob confused" when I saw examples of Java calling into Kotlin with get/set prefixes somehow magically added to Kotlin properties. It turns out the Kotlin compiler does this so that Java code can communicate with Kotlin properties; to me it seemed strange that I code a library in one language and the compiler then exposes these methods and properties slightly differently. Superficial as this sounds, I also didn't really like the mammoth that is IntelliJ. Coming from a predominantly Visual Studio background but working with Omnisharp, I wanted a lightweight editor with some refactoring, intellisense and error highlighting.</p>
</description><pubDate>Tue, 27 Oct 2015 00:00:00 Z</pubDate><a10:updated>2015-10-27T00:00:00Z</a10:updated><a10:content type="html"><p>I've always kept myself up to date with the latest languages arriving on the scene; I've spent time in the past learning Node, and last year I learnt Python by doing the Omnisharp plugin for Sublime. I have recently been looking for a static language I can transfer my C# skills to, and I had narrowed it down to three: Swift, Kotlin and GoLang. I started out with Kotlin, setting up a dev environment with IntelliJ and running the koans that Jetbrains advise you step through to pick up the language. Whilst it seemed relatively straightforward, I got "noob confused" when I saw examples of Java calling into Kotlin with get/set prefixes somehow magically added to Kotlin properties. It turns out the Kotlin compiler does this so that Java code can communicate with Kotlin properties; to me it seemed strange that I code a library in one language and the compiler then exposes these methods and properties slightly differently. Superficial as this sounds, I also didn't really like the mammoth that is IntelliJ. Coming from a predominantly Visual Studio background but working with Omnisharp, I wanted a lightweight editor with some refactoring, intellisense and error highlighting.</p>
<!--excerpt-->
<p>I had come across <a href="http://openmymind.net/The-Little-Go-Book/">The Little Go Book</a> a while ago and even went to the extent of getting it printed as a book via <a href="https://www.epubli.co.uk">ePubli</a>, as I'm old school and would like to read something that lengthy on paper, not a screen. Some months passed with it gathering dust, but I had made the decision that I wanted to branch out into another language and community, so I ditched Kotlin and started reading this book. It's only 50 pages long, but due to work and family life it took a week to complete; it also got me questioning my understanding of C#.</p>
<p>After listening to <a href="https://twitter.com/vansimke">Michael Van Sickle</a> on a recent podcast I setup my Atom editor with the <a href="https://atom.io/packages/go-plus">Golang plugin</a> and also used the Go website (which has an <a href="https://tour.golang.org/welcome/1">online editor</a> which you can use to execute Go code) to test my understanding of how to use Go.</p>
<p>After a week of reading the book and executing the code I thought it was time to try and write a real world application. </p>
<p>For a while I had been wanting to use <a href="https://getpocket.com/">Pocket</a>, a tool which helps you save links you want to go back and read at a later date. Currently I use Twitter favourites as a way to save interesting links that I will go back and read, but I wondered if a tool like Pocket might be better.</p>
<p>I had scoured the internet looking for a tool that would go through your favourites and import them into Pocket. I found one website that aimed to do that but basically didn't work. I found <a href="https://ifttt.com/">IFTTT</a>, which would automatically move favourites to Pocket, but only favourites made from that point onwards.</p>
<h3>Lets do this</h3>
<p>So I was going to write a tool that would import all your favourites into Pocket. The first thing I needed to do was connect to Twitter to get my favourites from my Go app, so I fired up Google to see what was out there. I came across the <a href="https://github.com/ChimeraCoder/anaconda">Anaconda</a> library, which is a Go wrapper for the Twitter API. This would get my list of favourites, but it wasn't going to do the authentication for me. Google to the rescue again: I came across this <a href="https://github.com/mrjones/oauth">OAuth</a> library, which would allow me to authenticate against Twitter with a consumer key and secret and then prompt the user to allow the app to have access to tweets. By doing this, Twitter would give the user a code that they could paste into the app, and in return it would make a request to Twitter to get an access token. Easy peasy!</p>
<p>Now I needed to communicate with Pocket, so more Googling resulted in this <a href="https://github.com/quekshuy/pocket-golang-sdk">library</a>. Whilst this had the basics to connect to Pocket, I had to add extra steps to get authentication working properly. I also had to read their <a href="https://getpocket.com/developer/docs/authentication">documentation</a> about 20 times to understand their workflow. I think their authentication workflow is aimed at mobile clients and websites authenticating against them rather than console applications. Two of their steps require passing in redirect urls, which is no good if you're a terminal app. Also, their documentation states that if the user authorises the app to access the user's Pocket account it will redirect back to a url, and it will also redirect to the same url if the user doesn't allow access. Yeah, that makes sense!?! Rather than having it redirect, it would have been helpful if it did what Twitter did and provided a code the user can paste into the terminal, but alas that's not an option, so I had to fire up a webserver in my app that Pocket would redirect to. Surprisingly that only took two lines of code!</p>
<pre><code>http.Handle("/", http.FileServer(http.Dir("./static")))
http.ListenAndServe(":3000", nil)
</code></pre>
<p>Once I had authenticated, I had to do a bit more reading regarding the Twitter API: I needed logic to keep calling the API for favourites until I had got them all, because by default you can only return 200 favourites in one go and I had more than that stored on Twitter.</p>
<p>I then had to ensure that my array of tweets was oldest first, because when I looped over them adding them to Pocket I wanted the most recent favourites to end up first in Pocket. Reversing an array is not as easy as it sounds, unfortunately, especially when in C# I'm used to <code>Array.Reverse(array);</code>. You could write a <code>for</code> loop starting from the length of the array rather than zero and populate a new array with the last item first, but I didn't want to do that, although it might have been simpler in the long run. I could also have looped over my array in a <code>for</code> loop, but I wanted to use a <code>foreach</code> loop, or as Go calls it, <code>range</code>. What I had to do is similar to what you do in C# for custom comparisons: implement an interface with <code>Len</code>, <code>Less</code> and <code>Swap</code> methods. It's the <code>Less</code> method that does the magic for us, i.e. orders the tweets by date.</p>
<pre><code>func (slice Tweets) Less(i, j int) bool {
    firstTime, err := slice[i].CreatedAtTime()
    if err != nil {
        fmt.Println("oops")
    }
    secondTime, err2 := slice[j].CreatedAtTime()
    if err2 != nil {
        fmt.Println("oops again!")
    }
    return firstTime.Before(secondTime)
}
</code></pre>
<p>Here we take in a slice (a wrapped array) of tweets, get our first and second tweet, and call a method on each that parses a datetime in string format and returns a Go <code>time</code> object. We then return a boolean saying whether the first tweet's date is before the second tweet's date. Then, with that interface implemented, reordering our array is just a call to <code>sort.Sort(myFavourites)</code>. As I say, when you're used to a one-line statement reversing an array, or to using LINQ for custom ordering, all this code seems a bit over the top, but I think this is just because Go is reasonably young and the authors are trying to keep it fairly simple. Whether or not they will add LINQ-type abstractions to the language in time, who knows, but I think they are looking at languages like C# where things have been added and added over time and not wanting to bloat Go too much. That's what I've heard anyway, but what do I know, I've only been using it for 7 days.</p>
<p>Some other things I noticed: when you import a package, the name is case sensitive, otherwise Go doesn't really like it. Also, naming your variables matters. I have a package called <code>twitter</code> and I created a type called <code>Twitter</code> inside that package, so to instantiate it I did <code>twitter := twitter.Twitter{}</code>. That's <code>package.type</code>. I could then call methods by doing <code>twitter.</code>; however, once I added another type to the <code>twitter</code> package, when I did <code>otherVariable := twitter.MyNewType{}</code> it complained that <code>twitter</code> didn't have anything called <code>MyNewType</code> inside it. What it was saying was that my variable called <code>twitter</code> didn't have it, not that my package didn't have it, so we have to be careful naming things, which is where developers fail!</p>
<p>What I will say is that when I was scratching my head on a Go issue I found a lot of blog posts and resources for Go, which was a nice surprise given its age. However, it's not actually that young: it was announced in 2009, and seeing as it's now 2015, 6 years in the tech world is a long time, so it's good to see all the resources out there for it.</p>
<p>Anyway, back to the code. Once I had my array of favourites I could loop over them and do some logic on the tweets. Usually when I favourite something it has a link to an article, so I had to decide how I was going to handle this. I decided that if the url had a host of github.com, or if its extension was empty or ended in html, pdf, aspx or md, then we would post that link to Pocket; otherwise we wouldn't add it. If the tweet didn't contain a link we would post the url of the tweet itself to Pocket. What this exposed me to was the libraries in Go that deal with urls and files.</p>
<p>So now I had my app working I needed a decent name and some decent styling on the web page that Pocket redirected to after the user had been authorised. I put out a Tweet asking for help and had a couple of suggestions for a name but I decided upon PoGo which came from <a href="https://twitter.com/AquaBirdConsult">Khalid Abuhakmeh</a> and I had a great pull request from <a href="https://twitter.com/hougasian">Kevin Hougasian</a> which had a great design for the authorised page.</p>
<h3>Introducing PoGo</h3>
<p><img src="http://blog.jonathanchannon.com/images/blogpostimages/pogo.png" alt="PoGo App" title="PoGo App" /></p>
<p><img src="http://blog.jonathanchannon.com/images/blogpostimages/pogoauthorised.png" alt="PoGo Authorised Page" title="PoGo Authorised Page" /></p>
<p><a href="http://github.com/jchannon/pogo">Github link</a></p>
<h3>Conclusion</h3>
<p>I haven't touched on a lot of things in Go that I've learnt, for example <code>goroutines</code>, but I hope I've introduced some simple examples to demonstrate how Go can be used quite easily. I would say that Go is quite similar to Python but obviously has its own unique features. It is a statically typed language, so has features like intellisense and error information when it compiles. Nicely, the Atom plugin gives the intellisense in the editor and also error messages when you save the documents you are working on. I've dabbled quite a bit with Node in the past but never fully immersed myself in the community because I always had the day job of C# needing my attention, and that still applies today. However, the company I work for deploys their product on Linux using Mono, so I am getting more involved in Linux, and I think it's pretty clear that Linux has won the deployment story, so much so that Microsoft are making ASPNet5 run on Linux, and the number of tools on Linux vs Windows is considerably higher. With that in mind, I think it's important to learn a language that is a first class citizen on that platform. Starting afresh in a new community is daunting as you are starting from scratch, but I'm at a point where I'd like to get involved with new people and learn a new language that is a first class citizen on Linux. The problem I have is making the time to make this happen!</p>
</a10:content></item><item><guid isPermaLink="true">http://blog.jonathanchannon.com/2015/08/07/hypermedia-and-nancyfx/</guid><link>http://blog.jonathanchannon.com/2015/08/07/hypermedia-and-nancyfx/</link><a10:author><a10:name /></a10:author><category>.net</category><category>hypermedia</category><category>nancyfx</category><category>oss</category><category>REST</category><title>NancyFX and Hypermedia</title><description><p>I've been slowly educating myself on hypermedia: what it is, how it helps and how to use it. I must say I've found it a very interesting topic and I thought it was time I put some information into a blog post, just in case the 2 people that read this blog might find it useful.</p>
<p>In my day job I'm responsible for an HTTP API (notice I didn't use REST) and some months ago I had a general discussion with Glenn Block about hypermedia. Glenn put this on YouTube if you want to watch it.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/G4YeQMIdO6Q" frameborder="0" allowfullscreen></iframe>
</description><pubDate>Thu, 06 Aug 2015 23:00:00 Z</pubDate><a10:updated>2015-08-06T23:00:00Z</a10:updated><a10:content type="html"><p>I've been slowly educating myself on hypermedia: what it is, how it helps and how to use it. I must say I've found it a very interesting topic and I thought it was time I put some information into a blog post, just in case the 2 people that read this blog might find it useful.</p>
<p>In my day job I'm responsible for an HTTP API (notice I didn't use REST) and some months ago I had a general discussion with Glenn Block about hypermedia. Glenn put this on YouTube if you want to watch it.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/G4YeQMIdO6Q" frameborder="0" allowfullscreen></iframe>
<!--excerpt-->
<p>At the time I was deliberating whether our API should return payloads that adhere to a hypermedia type, but from what I could tell it's quite hard if you are the client consuming those payloads. For example, if you have an Angular SPA, you need code to traverse the links that the server returns. What happens if the user bookmarks a page and then comes back to it at a later date? The app will need to go back to the entry point of the API and traverse again. What happens if the user presses the refresh button in the browser? Again, more traversal. My conclusion was that there is very little tooling and there are few libraries that allow web applications to be hypermedia clients. That was October 2014; at the time of writing, August 2015, I think there have been some improvements but not a great deal.</p>
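<p>To make that re-traversal problem concrete, here is a rough Python sketch. The in-memory "API" and its link rels are entirely made up for illustration; the point is only the shape of the work a pure hypermedia client must repeat:</p>

```python
# Hypothetical in-memory "API": each representation carries rel/href links,
# as a hypermedia server's responses would.
api = {
    "/": {"links": [{"rel": "orders", "href": "/orders"}]},
    "/orders": {"links": [{"rel": "item", "href": "/orders/1"}]},
    "/orders/1": {"status": "Complete", "links": []},
}

def follow(api, rels, start="/"):
    """Resolve a resource by walking link rels from the entry point."""
    resource = api[start]
    for rel in rels:
        href = next(l["href"] for l in resource["links"] if l["rel"] == rel)
        resource = api[href]
    return resource

# A bookmarked deep URL is no use to a pure hypermedia client; after a
# refresh it must re-walk the whole chain from the entry point.
order = follow(api, ["orders", "item"])
```

<p>Every bookmark visit or browser refresh forces the client back through <code>follow</code>, which is exactly the traversal cost described above.</p>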
<p>After my chat with Glenn, he was using a hypermedia type called <a href="http://amundsen.com/media-types/collection/">Collection+Json</a> and he wondered if I'd be interested in writing support for it for a <a href="http://nancyfx.org">Nancy</a> application. I obliged as I wanted to learn more about hypermedia and Nancy now has a library you can use for Collection+Json if you choose to use that hypermedia type. You can find this on nuget <a href="http://www.nuget.org/packages/Nancy.CollectionJson/">here</a>.</p>
<p>In simple terms you may have a module such as:</p>
<pre><code>public class OrderModule : NancyModule
{
public OrderModule(IOrderRepository orderRepo) : base("/orders")
{
Get["/"] = _ =&gt;
{
var orders = orderRepo.GetOrders();
return orders;
};
}
}
</code></pre>
<p>When you send in a request with <code>Accept : application/json</code> you will get a JSON representation of an array of orders</p>
<pre><code>[
{
"id" : 1,
"status":"Complete",
"itemcount":3
},
{
"id" : 2,
"status":"Pending",
"itemcount":1
}
]
</code></pre>
<p>However, if you send in a request with <code>Accept : application/vnd.collection+json</code> you will get a Collection+Json representation</p>
<pre><code>{
"version": "1.0",
"href": "http://localhost:9200/orders/",
"links": [
{
"rel": "Feed",
"href": "http://localhost:9200/orders/rss"
}
],
"items": [
{
"href": "http://localhost:9200/orders/1",
"data": [
{
"name": "id",
"value": "1",
"prompt": "Id"
},
{
"name": "itemcount",
"value": "3",
"prompt": "Item Count"
},
{
"name": "status",
"value": "complete",
"prompt": "Status"
}
],
"links": [
{
"rel": "items",
"href": "http://localhost:9200/orders/1/items",
"prompt": "Items"
}
]
},
{
"href": "http://localhost:9200/orders/2",
"data": [
{
"name": "id",
"value": "2",
"prompt": "Id"
},
{
"name": "itemcount",
"value": "1",
"prompt": "Item Count"
},
{
"name": "status",
"value": "pending",
"prompt": "Status"
}
],
"links": [
{
"rel": "items",
"href": "http://localhost:9200/orders/2/items",
"prompt": "Items"
}
]
}
],
"queries": [
{
"rel": "search",
"href": "http://localhost:9200/orders/search",
"prompt": "Search",
"data": [
{
"name": "name",
"prompt": "Value to match against the order number"
}
]
}
],
"template": {
"data": [
{
"name": "productcode",
"prompt": "Product Code"
},
{
"name": "quantity",
"prompt": "Quantity"
}
]
}
}
</code></pre>
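<p>To give a feel for what a client does with a payload like the one above, here is a small Python sketch that flattens each item's name/value <code>data</code> array into an ordinary dictionary. The payload is a trimmed copy of the response shown above, embedded as a string rather than fetched:</p>

```python
import json

# Trimmed copy of the Collection+Json response shown above.
payload = json.loads("""
{
  "version": "1.0",
  "href": "http://localhost:9200/orders/",
  "items": [
    {"href": "http://localhost:9200/orders/1",
     "data": [{"name": "id", "value": "1", "prompt": "Id"},
              {"name": "itemcount", "value": "3", "prompt": "Item Count"},
              {"name": "status", "value": "complete", "prompt": "Status"}]},
    {"href": "http://localhost:9200/orders/2",
     "data": [{"name": "id", "value": "2", "prompt": "Id"},
              {"name": "status", "value": "pending", "prompt": "Status"}]}
  ]
}
""")

def to_plain(item):
    """Flatten an item's name/value data array into an ordinary dict."""
    plain = {d["name"]: d["value"] for d in item["data"]}
    plain["href"] = item["href"]
    return plain

orders = [to_plain(i) for i in payload["items"]]
```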
<p>This works via content negotiation and, as with most things, Nancy makes it very simple. In Nancy.Collection+Json I wrote a <code>ResponseProcessor</code> that handles Accept headers for Collection+Json and then finds a "writer" responsible for the entity being requested; the writer produces all the JSON properties seen above and this is returned in the response. The code and demo can be found <a href="https://github.com/jchannon/Nancy.CollectionJson">here</a> and more documentation is available <a href="https://github.com/WebApiContrib/CollectionJson.Net#returning-a-read-document-from-a-server">here</a>.</p>
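<p>In spirit, that negotiation step looks something like the following Python sketch. The writer names and shapes here are illustrative only, not Nancy's actual API: the Accept header selects a writer, and the writer shapes the response body.</p>

```python
# The default representation: plain JSON objects.
def plain_json_writer(orders):
    return [{"id": o["id"], "status": o["status"]} for o in orders]

# An alternative representation of the same entities: Collection+Json-ish.
def collection_json_writer(orders):
    return {
        "version": "1.0",
        "items": [
            {"href": "/orders/%d" % o["id"],
             "data": [{"name": "status", "value": o["status"]}]}
            for o in orders
        ],
    }

# Media type to writer mapping, consulted per request.
processors = {
    "application/json": plain_json_writer,
    "application/vnd.collection+json": collection_json_writer,
}

def respond(accept, orders):
    """Pick the writer matching the Accept header, falling back to JSON."""
    writer = processors.get(accept, plain_json_writer)
    return writer(orders)

orders = [{"id": 1, "status": "Complete"}]
```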
<p>After some months had passed I saw that <a href="https://twitter.com/mamund">Mike Amundsen</a> had done a talk at NDC Oslo 2015 about building clients that consume hypermedia payloads. This piqued my interest in hypermedia again and I thoroughly recommend you give it a watch.</p>
<iframe src="https://player.vimeo.com/video/131642790" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
<p>Leading on from this video I realised that one of the differences between hypermedia types is how much they enable clients to do. For example, <a href="http://stateless.co/hal_specification.html">HAL</a> will return links about the resource you requested, but its payload won't directly tell you how to add an order to the system; you have to follow a link to find out that information. Collection+Json does give you that information in its <code>template</code>, and so does <a href="https://github.com/kevinswiber/siren">Siren</a> in its <code>actions</code>.</p>
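<p>The practical upside of a <code>template</code> is that the client can build a write request without any out-of-band knowledge of the API's fields. A hedged Python sketch, using the template fields from the Collection+Json payload earlier (the <code>fill_template</code> helper is made up for illustration):</p>

```python
# The template tells the client which fields a write expects.
template = {
    "data": [
        {"name": "productcode", "prompt": "Product Code"},
        {"name": "quantity", "prompt": "Quantity"},
    ]
}

def fill_template(template, values):
    """Return a write payload with the template's fields filled in."""
    return {
        "template": {
            "data": [
                {"name": d["name"], "value": values[d["name"]]}
                for d in template["data"]
            ]
        }
    }

# The client only supplies values; the field names come from the server.
body = fill_template(template, {"productcode": "ABC123", "quantity": 2})
```

<p>The filled-in template would then be POSTed back to the collection's <code>href</code>, which is how Collection+Json closes the write loop.</p>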
<p>It was at this point I thought I'd write another Nancy library that would allow you to return Siren payloads, which you can check out <a href="https://github.com/jchannon/Nancy.Siren">here</a>. This works in a similar way to the Nancy.Collection+Json library, in that you as the programmer create a "writer" class that forms the Siren response for a specific resource. Check out the demo <a href="https://github.com/jchannon/Nancy.Siren/tree/master/src/Nancy.Siren.Demo">here</a>.</p>
<p>At around the same time I also listened to a podcast that Mike was on which discussed REST, hypermedia and clients. I would strongly recommend you listen to this. I'd come to a lot of conclusions and had thoughts about REST and hypermedia and they were all validated in this podcast so <a href="https://t.co/V9NoBlWLOc">check it out</a>.</p>
<p>One thing the podcast did touch on is browsers as clients and the complexities involved when they consume hypermedia. I think this area is in need of huge improvement regarding tooling and libraries, and I know Mike is working on a <a href="https://twitter.com/LCHBook">book</a> which may help, but I think we need more people talking about this and hacking on it to try and make it "a thing". Tomorrow, Aug 8th 2015, <a href="https://twitter.com/darrel_miller">Darrel Miller</a> is doing a keynote at DDD Melbourne titled "Consuming REST APIs, for all interpretations of REST", which should illustrate how we can write client apps that consume hypermedia APIs and get that information out there. Without this information we risk making hypermedia a tool that only serves machine-to-machine communication, and I'm not sure we want that if we can use it to make our client applications easier to maintain, whether they be browsers, desktop apps or mobile apps.</p>
<p>So in conclusion, I recommend you learn a bit more about REST and hypermedia if it interests you: watch the videos and listen to the podcast. Then have a play with returning hypermedia payloads from your Nancy app and try to consume them.</p>
<p>One thing I will briefly touch on: when deciding upon the hypermedia type, I think you need to discuss it with your consumers. It's very easy to write an API and say "we are going to return X", but if you're not the client developer that has to handle that, then you haven't made an informed decision. You should try to make it as easy as possible for your consumers, and discuss the types that they find easiest to use. You as the API developer should try writing a client that consumes the content and experience the difficulties involved, which may help you make a more rounded decision on which hypermedia type to return from your API. Now this assumes you only return one hypermedia type from your API; of course, if you wanted to go purist, you would return all the major hypermedia types and all your clients would be happy, but that's probably not reality.</p>
<p>A quick shout out to <a href="https://twitter.com/danbarua">Dan Barua</a> who has written Nancy.HAL, a ResponseProcessor that allows your Nancy API to return HAL payloads. Check this library out <a href="https://github.com/danbarua/Nancy.Hal">here</a>. Great work Dan!</p>
<p>So in summary if you're using Nancy and want to return hypermedia payloads we have you covered:</p>
<ul>
<li><a href="https://github.com/danbarua/Nancy.Hal">Nancy.Hal</a></li>
<li><a href="https://github.com/jchannon/Nancy.CollectionJson">Nancy.Collection+Json</a></li>
<li><a href="https://github.com/jchannon/Nancy.Siren">Nancy.Siren</a></li>
</ul>