ns-CoreImage.xml
158 lines (138 loc) · 6.09 KB
<Namespace Name="CoreImage">
<Docs>
<summary>The CoreImage namespace provides hardware-accelerated image processing.</summary>
<remarks>
<para>
CoreImage provides a framework for processing still images:
applying operations to them to produce new images, or
performing image analysis (such as face detection and
photographic auto-enhancement). The operations are hardware
accelerated and typically perform better than hand-written
implementations.
</para>
<para>
CoreImage can apply a single filter to an image, or it can use
its built-in chaining support to apply a series of filters in
one pass. For example, you could apply a sepia filter, followed
by a hue adjustment filter, followed by a contrast filter. The
chaining support avoids intermediate copies and intermediate
buffers during image processing. Additionally, some chained
operations can be performed entirely on the GPU in a single
pass, without copying data back and forth between the GPU and
main memory.
</para>
<para>
The CoreImage runtime sits below the <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Core%20Graphics&scope=Xamarin" title="N:CoreGraphics">N:CoreGraphics</a></format>, <see cref="N:CoreVideo" /> and <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=Image%20IO&scope=Xamarin" title="N:ImageIO">N:ImageIO</a></format> stacks and provides services to
all of them. Depending on the filter operations and the
hardware platform, CoreImage executes the filters on the
CPU or the GPU.
</para>
<para>
To use CoreImage on iOS, you follow these steps:
</para>
<list type="bullet">
<item>
<term>
Create a <see cref="T:CoreImage.CIImage" />
object. You can create one from multiple sources:
CoreGraphics images, bitmap data, images on disk, or
CoreVideo image buffers. CIImage objects are immutable
and represent recipes; that is, they are computed on
demand.
</term>
</item>
<item>
<term>
Create and configure a <see cref="T:CoreImage.CIFilter" /> object. This object
combines an input image with numeric parameters that
configure the effect of the filter. The configuration is
done by setting properties, or by using the object
indexer with any of the input keys documented for that filter.
</term>
</item>
<item>
<term>
Create a <see cref="T:CoreImage.CIContext" /> object to track the processing
pipeline. This is the object through which CoreImage
draws, and it can be CPU or GPU bound.
</term>
</item>
<item>
<term>
Render the filter output into a target image using one of the CIContext methods.
</term>
</item>
</list>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// This sample turns the image into a sepia image.
// Load the image using UIImage, which is convenient.
var uiimage = UIImage.FromFile ("sample.jpg");
// Create the CIImage, using the UIImage's CGImage as its source
var image = new CIImage (uiimage.CGImage);
// Create the filter
var sepia = new CISepiaTone () {
Image = image,
Intensity = .8f
};
// Generate the recipe. At this point the resulting CIImage merely
// contains the recipe for creating the image: you can use it as the
// input to another filter to chain filters together, and only when
// the CIContext rendering APIs are invoked does the pipeline actually
// execute. This lets a series of operations run in one step without
// intermediate buffer copies.
CIImage output = sepia.OutputImage;
// To render the result, create a context and use one of its rendering
// APIs; here we render into a CoreGraphics image, a convenient
// representation for display.
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
// The above cgimage can be added to a screen view, for example, this
// would add it to a UIImageView on the screen:
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
<para>
Face detection is done with the <see cref="T:CoreImage.CIDetector" /> class: create a detector with the
<format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)&scope=Xamarin" title="M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)">M:CoreImage.CIDetector.CreateFaceDetector(CoreImage.CIContext,bool)</a></format>
method, then ask it for features, which are returned as an
array of <see cref="T:CoreImage.CIFeature" /> objects describing
the detected faces.
</para>
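<para>
The following is a minimal sketch of that sequence. It assumes
an existing CIImage named image; passing null for the context
lets CoreImage choose a default one.
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Sketch only: "image" is assumed to be a CIImage created earlier.
// The second argument requests high-accuracy (slower) detection.
var detector = CIDetector.CreateFaceDetector (null, true);
// Each CIFeature describes one detected face
foreach (CIFeature feature in detector.FeaturesInImage (image))
	Console.WriteLine ("Face at: {0}", feature.Bounds);
]]></code>
</example>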
<para>
Auto-adjustments are implemented by calling the <see cref="M:CoreImage.CIImage.GetAutoAdjustmentFilters" />
method. This method returns a list of suggested CIFilters,
which you in turn apply to your image and render to a
context.
</para>
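<para>
A minimal sketch of that flow, assuming an existing CIImage
named image, and assuming that the filters accept their input
image through the object indexer under the standard
"inputImage" key:
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Sketch only: "image" is assumed to be a CIImage created earlier.
foreach (var filter in image.GetAutoAdjustmentFilters ()) {
	// Feed the current result into each suggested filter via the
	// object indexer, then make its output the next filter's input.
	filter [(NSString) "inputImage"] = image;
	image = filter.OutputImage;
}
// Render the accumulated recipe once, at the end
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (image, image.Extent);
]]></code>
</example>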
<para>
To chain filters, connect the OutputImage property of one
filter to the Image property of the next. The following is a
modified version of the previous example showing how to chain
two filters together:
</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var uiimage = UIImage.FromFile ("sample.jpg");
var image = new CIImage (uiimage.CGImage);
var sepia = new CISepiaTone () {
Image = image,
Intensity = .8f
};
// Chain the sepia filter into a color-inversion filter by assigning
// sepia.OutputImage to the CIColorInvert's Image property
var invert = new CIColorInvert () {
	Image = sepia.OutputImage
};
CIImage output = invert.OutputImage;
var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (output, output.Extent);
myImageView.Image = UIImage.FromImage (cgimage);
]]></code>
</example>
</remarks>
</Docs>
</Namespace>