text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
Grok-like configuration for zope viewlets
Project description
This package provides support to write and register Zope Viewlets directly in Python (without ZCML). It’s designed to be used with grokcore.view which let you write and register Zope Views.
Contents
- Setting up grokcore.viewlet
- Examples
- API Overview
- Changes
- 3.1.0 (2018-02-05)
- 3.0.1 (2018-01-12)
- 3.0.0 (2018-01-04)
- 1.11 (2012-09-04)
- 1.10.1 (2012-05-02)
- 1.10 (2012-05-02)
- 1.9 (2011-06-28)
- 1.8 (2010-12-16)
- 1.7 (2010-11-03)
- 1.6 (2010-11-01)
- 1.5 (2010-10-18)
- 1.4.1 (2010-02-28)
- 1.4 (2010-02-19)
- 1.3 (2009-09-17)
- 1.2 (2009-09-16)
- 1.1 (2009-07-20)
- 1.0 (2008-11-15)
Setting up grokcore.viewlet
This package is set up like the grokcore.component package. Please refer to its documentation for more details. The additional ZCML lines you will need are:
<include package="grokcore.viewlet" file="meta.zcml" /> <include package="grokcore.viewlet" />
Put the first line somewhere near the top of your root ZCML file.
Examples
First we need a view to call our viewlet manager:
from grokcore import viewlet class Index(viewlet.View): pass index = viewlet.Page Template(""" <body> <head>Test</head> <body> <div tail: </div> </body> """)
After that we could define only a manager which display something:
class Content(viewlet.ViewletManager): viewlet.View(Index) def render(self): return u'<h1>Hello World</h1>'
Or a completely different example:
class AdvancedContent(viewlet.ViewletManager): viewlet.name('content') viewlet.view(Index)
And some viewlets for that one:
class StaticData(viewlet.Viewlet): viewlet.view(Index) viewlet.viewletmanager(AdvancedContent) def render(self): return u'<p> Data from %s</p>' self.context.id
Or:
class SecretData(viewlet.Viewlet): viewlet.view(Index) viewlet.viewletmanager(AdvancedContent) viewlet.require('agent.secret') secretdata = viewlet.PageTemplate(""" <p>Nothing to see here.</p> """)
The way templates are binded to components works exactly the way than in grokcore.view, for more information refer to its documentation.
API Overview
Base classes
- ViewletManager
Define a new viewlet manager. You can either provide a render method, a template, which can or not use registered viewlets.
If you define a template, view is added as a reference to the current view for which the manager is rendering in the template’s namespace. It is available as well as an attribute on the manager object.
- Viewlet
Define a new viewlet. You can either provide a template or a render method on it. Like in views, you can define an update method to process needed data.
Like for manager, view is added to the template namespace if used. viewletmanager is defined as well as a reference to the manager in the template’s namespace and as an attribute on the viewlet object.
Directives
You can use directives from grokcore.view to register your viewlet or viewlet manager: name, context, layer and require (for security on a viewlet).
To that is added:
- view
- Select for which view is registered a viewlet or a viewlet manager.
- viewletmanager
- Select for which viewlet manager is registered a viewlet.
- order
- Define a rendering order for viewlets in a viewlet manager. This should be a number, the smaller order render first, bigger last.
Additionally, the grokcore.viewlet package exposes the grokcore.component, grokcore.security and grokcore.view APIs.
Changes
3.1.0 (2018-02-05)
- viewletmanager.viewlets should be a list so we can iterate over it several times in consumer code instead of having to remember it’s an iterable we can only list once.
3.0.1 (2018-01-12)
- Rearrange tests such that Travis CI can pick up all functional tests too.
3.0.0 (2018-01-04)
- Python 3 compatibility.
1.11 (2012-09-04)
- Make the has_render() and has_no_render() symmetrical to those in grokcore.view, grokcore.layout and grokcore.formlib, where a render.base_method attribute is checked.
1.10.1 (2012-05-02)
- Do not require the role extra from grokcore.security.
1.10 (2012-05-02)
- Use the component registration api from grokcore.component.
- Update how the static resources are found on a ViewletManager and a Viewlet, following the new name __static_name__ set by the template association.
1.9 (2011-06-28)
- Introduce the available() method on viewlet component. The viewlet manager will filter out unavailable viewlet by calling this method. The available() method is called after the viewlet’s update() is called, but before the render() is called.
1.8 (2010-12-16)
- Update to use TemplateGrokker from grokcore.view to associate viewlet and viewletmanager templates.
1.7 (2010-11-03)
- The computed default value for the viewletmanager directive is now defined in the directiv itself, not as a separate function that needs to be passed along.
1.6 (2010-11-01)
- Upped version requirements for martian, grokcore.component, and grokcore.view.
- Move the order directive to grokcore.component.
- Move the view directive to grokcore.view.
1.5 (2010-10-18)
- Make package comply to zope.org repository policy.
- Update functional tests to use Browser implementation of zope.app.wsgi instead of zope.app.testing.
- Reduce dependencies (zope.app.pagetemplate no longer necessary).
1.4.1 (2010-02-28)
- Dropped the dependency on zope.app.zcmlfiles.
- Cleaned the code to remove unused imports and ensure the pep8 syntax.
- Updated tests to have a return value consistency. The grokcore.viewlet viewlet manager implementation requires viewlets to return unicode strings. Now, viewlets return unicode strings in the test packages.
1.4 (2010-02-19)
- Define test requires.
1.3 (2009-09-17)
- Reverted the use of grokcore.view.CodeView. We now require grokcore.view 1.12.1 or newer. As of grokcore.view 1.12, the CodeView/View separation has been undone.
1.2 (2009-09-16)
- Remove the reference to the grok.View permission that is no longer in grokcore.security 1.2
- Use the grok.zope.org/releaseinfo information instead of our own copy of versions.cfg, for easier maintenance.
1.1 (2009-07-20)
- Adapted tests to new grokcore.view release: switched from View to CodeView.
- Add grok.View permissions to functional tests (requires grokcore.security 1.1)
1.0 (2008-11-15)
- Created grokcore.viewlet in November 2008 by factoring zope.viewlet-based components, grokkers and directives out of Grok.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/grokcore.viewlet/ | CC-MAIN-2021-39 | en | refinedweb |
I'm not a huge fan of spamming StackOverflow with questions, but I've been trying to get this working for the past two days. Here goes...
I've come up with a small reproduction of a basic C++ file that compiles and runs perfectly on Linux (ubuntu), but compiles and results in an immediate SegFault (or constant access violation that seem to happen on the gtkmm event loop) on Windows 10 using the MSYS2 (mingw64) g++ compiler.
For MSYS2, I'm using the
mingw-w64-x86_64-gtk3 package, recommended by the docs. My thoughts are that this is a problem during the linking process? No GUI appears, only terminal errors.
The line that causes the problem is specifically
App::App : myLabel("HelloWorld") {.
By initialising the list inside the constructor using
label = Gtk::Label("Hello world!");, the program actually works on Windows 10 as well, although I later found another segfault in another small detail.
I'm quite new to C++, my question is, am I doing something very wrong in my code or is it possible that the gtkmm library just isn't optimized for Windows, or that the binaries are out of date? I imagine that doing a lengthly compilation of the gtkmm source would work? Or am I just making a dumb pointer mistake?
MSYS2 setup:
$ pacman -Syu gcc mingw-w64-x86_64-gtk3
Compiled with:
$ g++ -std=c++11 `pkg-config gtkmm-3.0 --cflags` -o app app.cpp `pkg-config gtkmm-3.0 --libs`
The sample that segfaults:
#include <gtkmm/window.h> #include <gtkmm/label.h> #include <string> // Class prototype class Window : public Gtk::Window { public: Window(); Gtk::Label myLabel; }; // Entry point, create app and initialise window int main(int argc, char* argv[]) { auto app(Gtk::Application::create(argc, argv, "ch.epfl.cemes.marcus.test")); Window window; return app->run(window); }; // Extend Gtk::Window and show some text Window::Window() : myLabel("Hello world!") { // this line seems to be the problem add(myLabel); myLabel.show(); };
Running the compiled executable of the code above on Windows results in the following error filling up the console repeatedly:
Exception code=0xc0000005 flags=0x0 at 0x0000000100401E9C. Access violation - attempting to read data at address 0x0000000021646CC2
My main application, that is practically identical but split into more files, exits immediately with the following:
Exception code=0xc0000005 flags=0x0 at 0x0000000063F14B9D. Access violation - attempting to read data at address 0xFFFFFFFFFFFFFFFF 0 [main] archipelago 1909 cygwin_exception::open_stackdumpfile: Dumping stack trace to archipelago.exe.stackdump
and yields a beautiful file with 15 lines of stack frames.
I appreciate the time you took to read this post. Have a great day!
User contributions licensed under CC BY-SA 3.0 | https://windows-hexerror.linestarve.com/q/so60360990-Windows-10-Segmentation-fault-using-the-gtkmm-30-library-and-g-reproduction-included | CC-MAIN-2020-16 | en | refinedweb |
drag on userarea [SOLVED]
On 11/06/2015 at 12:51, xxxxxxxx wrote:
Hello,
I use the Message of a Dialog to revive a drag - in the dialog are several userareas and i like to call a different function when the drag finished over a userarea than on the rest of the dialog - can i check if it over ?
Thanks
On 12/06/2015 at 07:44, xxxxxxxx wrote:
Hi,
I'm not quite sure, what you mean with "revive a drag". Basically all you need to do, is to process the BFM_DRAGRECEIVE message in your GeUserArea. For example see this thread.
On 12/06/2015 at 11:08, xxxxxxxx wrote:
Hi Andreas,
I use the BFM_DRAGRECEIVE in the Message of a gui.GeDialog - to receive the drag on the dialog - but i need to know if the drag was on the free space in the dialog or on a special GeUserArea within the dialog
i try to add the BFM_DRAGRECEIVE in the GeUserArea but the dialog take the message
On 16/06/2015 at 03:29, xxxxxxxx wrote:
Hi conner,
indeed, this was a bit tricky. I guess, that's why Drag&Drop for Python is not documented, yet.
Here's what I did to make it work:
Basically I implemented Message() only for the GeDialog and have a custom MyDragMessage() function on the GeUserArea.
Note: I used the MemoryViewer example from the Python SDK examples. In there I duplicated the mem_info user area and added a static text in between in order to have some "dialog" area.
In my GeUserArea:
def MyDragMessage(self, msg, draginfo, result) : # Don't discard here, if lost drag or if it has been escaped. Already done in parent dialog. # Check drop area and discard if not on the user area if not self.CheckDropArea(msg, True, True) : self.SetDragDestination(c4d.MOUSE_FORBIDDEN) return False # Here: Mouse hovers over user area # Check if drag is finished (=drop) if msg.GetInt32(c4d.BFM_DRAG_FINISHED)==1: print "Dropped on UserArea %s" % self.uaIdx return True # Return current mouse cursor for valid drag operation self.SetDragDestination(c4d.MOUSE_MOVE) return True
In my GeDialog:
def Message(self, msg, result) : if msg.GetId()==c4d.BFM_DRAGRECEIVE: # Get drag object type and data draginfo = self.GetDragObject(msg) # Discard if lost drag or if it has been escaped if msg.GetInt32(c4d.BFM_DRAG_LOST) or msg.GetInt32(c4d.BFM_DRAG_ESC) : # Not sure, why we need to test the UserArea here as well if self.mem_info.MyDragMessage(msg, draginfo, result)==True: return True if self.mem_info_2.MyDragMessage(msg, draginfo, result)==True: return True return self.SetDragDestination(c4d.MOUSE_FORBIDDEN) # if we are here, mouse hovers over dialog if self.mem_info.MyDragMessage(msg, draginfo, result)==True: return True if self.mem_info_2.MyDragMessage(msg, draginfo, result)==True: return True # Check if drag is finished (=drop) if msg.GetInt32(c4d.BFM_DRAG_FINISHED)==1: print "Dropped on Dialog" return True # Return current mouse cursor for valid drag operation return self.SetDragDestination(c4d.MOUSE_MOVE) # Call GeDialog.Message() implementation so that it can handle all the other messages return gui.GeDialog.Message(self, msg, result)
On 19/06/2015 at 11:35, xxxxxxxx wrote:
Hey Andreas,
thanks for you help - In my case it`s a bit more tricky- I have a undefined count of GeUserArea of two different types - is there a way to check that and return the id ?
Many Thanks
On 19/06/2015 at 21:03, xxxxxxxx wrote:
Hi conner,
you need to add a "MyDragMessage" to both types. Then have a index member in your GeUserAreas. In my case it's uaIdx (my code already prints, on which area the user dropped) and I initialize it, when instancing the GeUserAreas. I simply used indeces beginning with 0, but of course you can use anything you like (e.g. DescIds). Sorry, I should have mentioned that. And you are free to change the "MyDragMessage" functions to return the index (and for example -1 in false cases), if you need that information in the parent dialog.
On 20/06/2015 at 04:40, xxxxxxxx wrote:
You probably have the user areas in a list somewhere. If you have mixed types of user areas
in the list, then you could use isinstance() to check the type or (even more portable) call the
method only on objects that support it via hasattr().
for ua in self.user_areas: if hasattr(ua, 'MyDragMessage') and ua.MyDragMessage(msg, draginfo, result) : return True
On 21/06/2015 at 06:02, xxxxxxxx wrote:
great - that work
good help - thanks | https://plugincafe.maxon.net/topic/8813/11646_drag-on-userarea-solved | CC-MAIN-2020-16 | en | refinedweb |
This post gets C# developers started building applications with TypeScript, which allows static typing combined with the enormous ecosystem of JavaScript.
If you ever worked on a JavaScript application in the old days and have never had a chance to do the same with modern tooling and ecosystem, you would be surprised by how both the development and the production experiences are different and immensely improved. This is especially the case if you are coming from a .NET and C# background. C# is a very productive, general purpose programming language. Today, you can build games, web, desktop and mobile applications with C# running on .NET Core, Mono and various other available runtimes. JavaScript mostly offers the same, but the biggest gap is missing the comfort and safety of static typing.
Static typing allows us to catch many typical programming errors and mistakes during compilation time instead of noticing these in runtime. When you combine this fact with great tooling and code editor support, you achieve a much faster feedback loop which then gives you a higher chance to build more robust applications in a much quicker way.
TypeScript gives you just that, and it does it without taking the best of JavaScript from you: the enormous ecosystem! All the packages available on npm, for instance, are available for you to tap into even if they are not implemented in TypeScript. This is huge, and it's possible due to one of the core principles of TypeScript: it starts and ends with JavaScript. We will shortly see an example of how this is actually made possible in practice.
The main aim of this post is to get you started on building applications with TypeScript and also to give you a sense of how this process looks from the development perspective. It's highly encouraged that you first have a look at “TypeScript in 5 Minutes” if you don't have prior experience with TypeScript and how it functions at a high level.
My experience proves Uncle Bob's quote on the ratio of code reading versus writing being over 10-to-1. So, it's only fair to evaluate and see the power of TypeScript in terms of code reading aspect.
Let's clone one of the biggest open-source TypeScript projects: Visual Studio Code. What I am after here is to learn how we should be implementing a command in order to execute it through the Visual Studio Code command palette. A quick search on GitHub got me the blockCommentCommand.ts file, and it revealed a class which implemented an interface: editorCommon.ICommand. Let's get the tooling help from Visual Studio Code (meta!) and find out about the interface definition:
editorCommon.ICommand
We are directly inside the interface, and the most important part here is the explicitness, how apparent the extensibility point is. I want to understand this interface further though. So, let's look into the getEditOperations() function of the interface and find out all of its implementations:
getEditOperations()
This is really nice, and what's really good here is that it's not a string find. It's actually finding all the interface function implementations. sortLinesCommand seems interesting to me and I want to dive into it. Let's go into it and start changing it. The first thing I am noticing is the help I get from the code editor.
sortLinesCommand
You can see that all of this tooling help, which comes from the power of static analysis abilities of the language itself, makes it super easy for me to explore an unfamiliar and fairly large codebase. This is the core value proposition of TypeScript.
Now that we understand the main value of the language, let's look at some of its aspects by going through a sample.
Let's look at a sample application to understand how we can structure the codebase with TypeScript. In this sample, we will see:
The core logic of the application is around voting for particular topics that have some options. I have structured the code logic of this application inside the domain.ts file, which has the below content.
domain.ts
import { v4 as uuid } from 'uuid';
export class TopicOption {
readonly name: string;
readonly id: string;
constructor(name: string) {
this.name = name
this.id = uuid()
}
}
interface Winner {
readonly option: TopicOption;
readonly votes: number;
}
export class Topic {
readonly name: string;
readonly options: TopicOption[];
private readonly _votes: Map<string, number>;
constructor(name: string, options: TopicOption[]) {
this.name = name
this.options = options;
this._votes = new Map<string, number>();
options.forEach(option => {
this._votes.set(option.id, 0);
})
}
vote(optionId: string) {
const votesCount = this._votes.get(optionId);
if (votesCount != undefined) {
this._votes.set(optionId, votesCount + 1);
}
}
getWinner(): Winner {
const winner = [...this._votes.entries()].sort((a, b) => a[1] > b[1] ? 1 : -1)[0];
const option = this.options.find(x => x.id == winner[0]);
if (option == undefined) {
throw "option has no chance to be undefined here";
}
return {
option: option,
votes: winner[1]
};
}
}
If you have a C# background, this code should be mostly self-explanatory. However, it's worth specifically touching on some of its aspects to understand how TypeScript lets us structure our code.
The Class construct sits at the heart of the C# programming language and it's mostly how we model our applications and domain concepts. TypeScript has the same modeling concept, which is close to classes in ECMAScript 6 (ES6) but with some key additional “safety nets.”
Assume the following:
export class Topic {
readonly name: string;
readonly options: TopicOption[];
private readonly _votes: Map<string, number>;
constructor(name: string, options: TopicOption[]) {
this.name = name
this.options = options;
this._votes = new Map<string, number>();
options.forEach(option => {
this._votes.set(option.id, 0);
})
}
// ...
}
A few things to note here:
private
readonly
The TypeScript type system is based on structural typing. This essentially means that it's possible to match the types based on their signatures. As we have seen with the getWinner() method implementation above, the signature requires us to return the Winner interface. However, we didn't need to create a class which explicitly implemented the Winner interface. Instead, it was enough for us to new up an object which matched the interface definition:
getWinner()
Winner
getWinner(): Winner {
// ...
return {
option: option,
votes: winner[1]
};
}
This is really efficient when implementing logic, as you don't need to explicitly implement an interface. This aspect of the language also protects us in case of potential changes to the signature of the Winner interface. Let's take the following change to the signature of the Winner interface by adding a new property called margin.
margin
interface Winner {
readonly option: TopicOption;
readonly votes: number;
readonly margin: number;
}
As soon as making this change, Visual Studio Code highlights a problem:
Don't you love NullReferenceException when working with C#? I can hear you! Not so much, right? Well, you are not alone. In fact, null references have been identified as a billion dollar mistake. TypeScript helps us on this problem as well. By setting the compiler flag --strictNullChecks, TypeScript doesn't allow you to use null or undefined as values unless stated explicitly.
NullReferenceException
null
--strictNullChecks
undefined
In our example of returning a winner from the getWinner() method, we would see an error if we were to assign null to the option property.
option
TypeScript provides the same level of compiler safety across the board for all relevant members. For example, we can see that we are unable to pass null or undefined as a parameter to a method which accepts a string:
string
However, there are sometimes legitimate cases for us to represent a value as null or undefined. In these circumstances, we have several ways to explicitly signal this. For example, one way is to mark the type as nullable by adding ? at the end of the member name:
nullable
?
interface Winner {
readonly option?: TopicOption;
readonly votes: number;
}
This allows us to pass undefined as a legitimate value to this or entirely omit the assignment. For example, both the below representations are correct to match the object to Winner interface:
return {
option: undefined,
votes: winner[1]
};
// ...
return {
votes: winner[1]
};
TypeScript doesn't stop here. It also helps you when you are consuming a member which can be null or undefined. For example, we would see an error like below if we were to try to reach out to the members of the TopicOption value returned from the getWinner() method:
TopicOption
This behaviour of TypeScript forces us to guard against null and undefined. Once we perform the check, the compiler will then be happy for us to access the members of TopicOption:
As you can see in the below code, we are able to import a package called uuid which we installed through npm install uuid --save.
uuid
npm install uuid --save
import { v4 as uuid } from 'uuid';
This package is implemented in JavaScript but we are still able to consume it and get the tooling and type safety support as we get for pure TypeScript code. This is possible thanks to the notion of TypeScript declaration files. It's a very deep topic to get into here, but the best piece of advice I can give here is to check TypeScript type search, where you can find and install the declaration files.
TypeScript compiles down to JavaScript, and this means that we can run our applications anywhere we are able to execute JavaScript, as long as we compile our source code in a way that the executing platform knows how to interpret. This mostly means targeting the correct ECMAScript version through the –target compiler flag and specifying the relevant module code generation through the --module compiler flag.
--module
Let's look at how we can compile this small voting logic to be executed on Node.js. The below is the content of index.ts:
index.ts
import { Topic, TopicOption } from './domain';
const topics: Topic[] =[
new Topic("Dogs or cats?", [
new TopicOption("Dogs"),
new TopicOption("Cats")
]),
new Topic("C# or TypeScript?", [
new TopicOption("C#"),
new TopicOption("TypeScript"),
new TopicOption("Both are awesomse!")
])
];
for (let i = 0; i < 100; i++) {
const randomTopic = topics[Math.floor(Math.random() * topics.length)];
const randomOption = randomTopic.options[Math.floor(Math.random() * randomTopic.options.length)];
randomTopic.vote(randomOption.id);
}
topics.forEach(topic => {
const winner = topic.getWinner();
console.log(`Winner for '${topic.name}' is: '${winner.option.name}'!!!`);
})
You can think of this file scope as the Main() method of your .NET Console application. This is where you will be bringing all of your application together and execute appropriately.
Main()
I have shown you the tip of the iceberg here. The truth is that TypeScript actually gives you way more than this with handful of advanced type features such as Discriminated Unions and String Literal Types, which makes it possible to model complex scenarios in much more expressive and safe ways.
Want to read more about getting started with TypeScript? Check out these related posts:
Tug | https://www.telerik.com/blogs/uncovering-typescript-for-c-developers | CC-MAIN-2020-16 | en | refinedweb |
This tutorial explains how to use the predefined collector returned by
Collectors.collectingAndThen() method with examples. It first explains the method definition and then shows
collectingAndThen() method’s usage using two Java 8 code examples, along with detailed explanation of the code.
downstream which is an instance of a
Collector<T,A,R> i.e. the standard definitionClick to Read Tutorial explaining Basics of Java 8 Collectors of a collector. In other words, any collector can be used here.
– 2nd input parameter is
finisher which needs to be an instance of a FunctionClick to Read Tutorial on Function functional interfaces
<R,RR> functional interface. This function instance takes as input an object of type
R which is the output from downstream collector, and it returns an output of type
RR which is the final return type of
collectingAndThen collector as well.
– output is a Collector with finisherClick to Read tutorial on 4 components of Collectors incl. ‘finisher’(return type) of type
RR.
How the Collectors.collectingAndThen() method works
CollectingAndThen() method first collects the elements of type
T of
Stream<T> using the
Collector<T,A,R> passed to it as the first parameter. As a result of applying the collector, stream elements are collected into an object of type
R. Using the
Function<R,RR> instance passed as the second parameter, the collected object of type
R is then transformed to an object of type
RR. This object of type
RR is the final object/value returned by the collectingAndThen collector.
Thus, when there is a scenario where the stream elements need to be collected and then the collected object needs to be transformed using a given rule\function, then using the collectingAndThen collector both these tasks of collection and transformation can be specified and executed together.
Example#1 of Collectors.collectingAndThen
Problem Description: Given a stream of employees, we want to –
- Find the employee with the maximum salary for which we want to use the maxBy collector.
- The output of the maxBy collector being an
Optionalvalue, we want to check whether a value is present and then print the max salaried employee’s name.
Solution code for the above problem is –
package com.javabrahman.java8.collector; import java.util.Arrays; import java.util.Comparator; import java.util.List; import java.util.Optional; import java.util.stream.Collectors; import com.javabrahman.java8.Employee; public class CollectingAndThenExample {) { String maxSalaryEmp = employeeList.stream().collect( Collectors.collectingAndThen( Collectors.maxBy(Comparator.comparing(Employee::getSalary)), (Optional<Employee> emp)-> emp.isPresent() ? emp.get().getName() : "none") ); System.out.println("Max salaried employee's name: "+ maxSalaryEmp); } } //Employee.java POJO class package com.javabrahman.java8; import java.text.DecimalFormat; public class Employee { private String name; private Integer age; private Double salary; public Employee(String name, Integer age, Double salary) { this.name = name; this.age = age; this.salary = salary; } //Standard setters and getters for name,age and salary go here public String toString(){ DecimalFormat dformat = new DecimalFormat(".##"); return "Employee Name:"+this.name; } //Standard hashcode() & equals() implementations go here }
CollectingAndThenExampleclass contains a static list of
Employeeobjects –
employeeList.
- In the
main()method a stream of
Employeeobjects is created using
List.stream()method.
- Stream of employees is pipelinedClick to Read Tutorial explaining Concept of Pipelines in Computing to the
collect()terminal operationClick to Read Tutorial explaining intermediate & terminal Stream operations.
- To the
collect()method,
Collectorreturned by
Collectors.collectingAndThen()method is passed as a parameter.
- collectingAndThen collector takes 2 parameters –
- Collectors.maxBy collector with the
Employee’s salary passed as the sort key using method reference
Click to Read Tutorial on Java 8’s Method References – “
Employee::getSalary”.
- Ternary operator which checks if the
Optional<Employee>value returned by maxBy collector is present. If yes, it extracts and returns the employee name from the
Employeeobject returned by maxBy collector. If no, it returns the string “
none”.
- Output printed is as expected – the name of the employee with maximum salary – “
Tom Jones” is printed.
Example#2 of Collectors.collectingAndThen
Problem Description: Given a stream of employees, we want to –
- Find the average salary of all employees using averagingDouble collector.
- Print the average salary after formatting it using
DecimalFormat.
Solution code for the above problem is –
(Note – The
Employee class and
employeeList objects with their values remain the same as the previous code usage example and hence are not shown below for brevity.)
public static void main(String[] args) { System.out.println("Max salaried employee's name: " + maxSalaryEmp); String avgSalary = employeeList.stream().collect( Collectors.collectingAndThen( Collectors.averagingDouble(Employee::getSalary), averageSalary -> new DecimalFormat("'$'0.00").format(averageSalary))); System.out.println("Average salary in $: " + avgSalary); }
- collectingAndThen collector takes 2 parameters –
Collectors.averagingDoublecollector with the
Employee’s salary passed as the attribute to be averaged using method reference – “
Employee::getSalary”.
Function<Double,String>instance specified using a lambda expression -“
averageSalary -> new DecimalFormat("'$'0.00").format(averageSalary))” which specifies that the average salary returned by the averagingDouble collector is to be formatted using
DecimalFormatwith specified format – “
'$'0.00”.
- Output printed is as expected – properly formatted average salary of all employees – “
$9800.00” is printed.<< | https://www.javabrahman.com/java-8/java-8-how-to-use-collectors-collectingandthen-method-with-examples/ | CC-MAIN-2020-16 | en | refinedweb |
The passed around a program just like any other type of value.
Now, when a function accepts another function as its argument, or it yields another function as its return value - or both - it is said to be a higher-order function. We actually already saw an example in the previous article, if you recall the Sieve of Eratosthenes exercise, which had this function in it:
private Predicate<Integer> notInNonPrimesUpTo(int limit) { var sieve = new HashSet<Integer>(); for (var n = 2; n <= (limit / 2); n++) for (var nonPrime = (n * 2); nonPrime <= limit; nonPrime += n) sieve.add(nonPrime); return candidate -> !sieve.contains(candidate); }
That function is returning a
Predicate. A predicate is a function that yields a boolean value. This means that
notInNonPrimesUpTo is a higher-order function: it builds the sieve and yields a function that tests whether a number is within the sieve or not.
We’ve seen other examples too. Do you remember
map from part three? It takes a function and applies it to all the elements in an array, yielding another array.
map is a higher-order function. So is
filter because it takes a predicate, tests it on every element of an array, and uses the result of the predicate to decide whether to keep the element or discard it.
qsort is a higher-order function too, because it takes a comparator function and uses it to determine the order of any two elements in the array, without knowing the types of the elements. So the previous article was full of higher-order functions, and you shouldn't be intimidated by the term. It does not mean anything rarified or exalted. You are almost certainly using some kind of higher-order functions regularly in your work. In fact, first-class functions are useless without higher-order functions to pass them into or return them from.
Function composition.
You'll hear about this a lot in the functional programming world. To compose two functions means to arrange them so that the result of one function is applied directly as the input of the other function. Your code is probably full of examples of this, but if the code is not structured so as to highlight this fact then you may not always notice. Functional programmers are always alert to notice when functions are arranged this way, because it allows the possibility of certain programming structures, which we will come to shortly. Programmers steeped in the functional style often find it useful to consider two composed functions as a third function in its own right. Let me explain what I mean by that.
Say you have a function f that takes a value x as its argument and returns a value y :
f ( x ) = y
and you have another function g that takes y as its argument and returns z :
g ( y ) = z
clearly, then, you can apply g to the output of f like this :
g ( f ( x )) = z
This implies, therefore, that there is a third function h that maps x directly to z :
h ( x ) = z
Functional programmers would say that h is the composition of functions f and g. In Haskell this would be defined like:
h = g . f
In Haskell minimalism is prized as a virtue. In Clojure, rather more verbose, it would be defined like this:
(def h (comp f g))
Functional programming devotees tend to view function composition this way. Personally, I don't find the practice of explicitly naming composed functions like that especially useful. In particular I don't see any difference between the Clojure above and this:
(defn h [arg] (g (f arg)))
other than that the first example is slightly more concise. FP devotees like to wax lyrical about the power of function composition, while my own outlook is rather more prosaic.
Function composition as plumbing.
The idea of composing functions together is not novel. In 1964, Doug McIlroy wrote this in a memo:
We should have some ways of coupling programs like garden hose – screw in another segment when it becomes necessary to massage data in another way.
The idea Doug was getting at was later realised in Unix as pipes, probably the single feature that makes Unix shell scripting so powerful. Unix pipes are a system of inter-process communication; they can be created and used directly by processes via system calls, but they can also be created in the shell by using the | symbol, like this:
program1 | program2
The effect is to create a pipe that reads everything written to standard output by
program1 and feeds it verbatim to
program2 via its standard input. This means that you can chain programs together like building blocks to accomplish tasks that none of the programs can do by themselves. For example, if I wanted to find the top 3 largest Java programs in a directory by lines of code, I could do this:
wc -l *.java | grep \.java | sort -nr | head -n 3 82 Book.java 43 Isbn.java 38 Genre.java
McIlroy put it this way:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together.
Replace “programs” with “functions” and you have the principle of composability.
Connascence of execution.
So, I think the value of writing functions that “do one thing and do it well” pretty much self-evident, but it might not be clear yet why it is a good idea to write functions to be composable, i.e. to work together. You may have heard of connascence. Connascence is a way of describing things that are coupled to each other in various kinds of ways. There are many different types of connascence, including:
- Connascence of name - if the name of a thing is changed, other things must be renamed to match or the program will break. Usually function calls work by connascence of name. Modern refactoring IDEs can help you when renaming things by automatically updating all the other names that need to be changed to match.
- Connascence of type - two or more things must have the same type. In statically-typed languages this can usually be enforced by the compiler, but if you’re working in a dynamically typed language then you must take care to match up types by yourself.
- Connascence of meaning - also often referred to as “magic values”, this refers to things that must be set to specific values which have certain meanings and, if altered, will break the program.
- Connascence of execution - things must happen in a certain order, in other words, temporal coupling.
It is the last one which is important to us here. It is frequently critical in programming that things are done in a certain order:
In this code, an email object is created, then the sender, recipient and subject are set, then the email is sent. After the email has been sent, it then sets the email body. Almost certainly this is wrong, and the likely outcome is the email will be sent with an empty body. Slightly less likely, but an outcome that cannot be ruled out, is that setting the body on the email after it has been sent might cause an error. Either way it is bad.
But we can design things so that it becomes impossible to do things out of order:
mailer.send(emailBuilder() .sender("[email protected]") .addRecipient("[email protected]") .subject("Proposal") .body("Let's go bowling") .build())
Since we need an email object to pass to
mailer.send, we make it so that the only way to create one and set it up is to use the builder. We remove all setter methods on the email class so that impossible to modify anything to the email after it has been built. Therefore the object that is passed to
mailer.send is guaranteed not to be tampered with afterwards. The builder pattern seen above is a very common way to turn imperative operations into composable functions. You can use it to wrap things that aren’t in the functional style and make them seem like they are.
The dread Monad.
When I first envisaged this series of articles, I thought I was not going to mention monads at all, but as it developed I realised that any discussion of the functional style would be incomplete without them. Moreover, Monads turn up sometimes without announcing themselves. I struggled for a long time to understand the Monad, and the explanations I found were quite unhelpful, and I believe this is why they have got their reputation for being hard to understand. I will try to explain it here in terms of code, which I hope will convey the concept clearly enough. As always, I have an example to illustrate the point with; it is a little Java project that I use to try out ideas on, which implements a simple webservice API comprising a set of endpoints that pretend to serve a library. You can search for books with it, view their details, borrow and return them, etc. There is an endpoint to retrieve a book by its ISBN number, and its implementation looks like this:
public LibraryResponse findBookByIsbn(Request request) { try { Isbn isbn = isbnFromPath(request); Book book = findBookByIsbn(isbn); SingleBookResult result = new SingleBookResult(book); String responseBody = new Gson().toJson(result); return new LibraryResponse(200, "application/json", responseBody); } catch (IllegalArgumentException e) { return new LibraryResponse(400, "text/plain", "ISBN is not valid"); } catch (Exception e) { LOG.error(e.getMessage(), e); return new LibraryResponse(500, "text/plain", "Problem at our end, sorry!"); } }
I deliberately messed up this code a little for our purposes here - though it's still better than much code I have seen in the wild - so let’s critique it. I really don’t like the exception handlers here. They represent special cases, and one of the things I have learned through experience is that special cases are the enemy of clean code. They disrupt the flow of the program and they make ideal hiding places for bugs.
Exceptions bring their own evil with them, being essentially gotos in disguise, but worse still, only one of the exception handlers here is handling genuinely exceptional behaviour. The other is handling part of the API's specified behaviour. We'll come back to that in a moment.
Now, we don’t need to go into the details of the web framework being used here (it’s spark-java); suffice to say that all web frameworks can be configured to trap unhandled exceptions and return a preconfigured HTTP response when they happen. Different responses can be mapped to different exception classes: it would be appropriate to return the HTTP 500 response when a top-level
Exception is thrown, so we can remove that
catch block from the
findBookByIsbn method.
On the other hand, the 400 response “ISBN is not valid” is due to invalid input from the client and is very much part of the specified API behaviour. The
isbnFromPath method is throwing an
IllegalArgumentException when the parameter value from the client does not match the right format for an ISBN number. This is what I meant by a disguised GOTO; it obscures the logic because it is not immediately obvious where the exception is coming from.
There is something more that seems to be missing entirely there. What happens when
findBookByIsbn does not find the book? That should result in an HTTP 404 response and, in use, so it does, so where did that happen? Examining
findBookByIsbn we see the answer:
Book findBookByIsbn(Isbn isbn) { return bookRepository.retrieve(isbn).orElseThrow(() -> Spark.halt(NOT_FOUND_404, BOOK_NOT_FOUND)); }
This makes things even worse! Here we're making use of a framework feature by which an exception encodes an HTTP 404 response within it. This is important control flow that is completely obscured in the endpoint implementation.
So what can we do about it? We could improve things by creating specific exception types for the different outcomes, but we would still be using exceptions as a means of control flow. Alternatively, we could rewrite the code not to depend on exceptions at all:
public LibraryResponse findBookByIsbn(Request request) { Isbn isbn = isbnFromPath(request); if (isbn.valid()) { Optional<Book> book = findBookByIsbn(isbn); if (book.isPresent()) { SingleBookResult result = new SingleBookResult(book.get()); String responseBody = new Gson().toJson(result); return new LibraryResponse(200, "application/json", responseBody); } else { return new LibraryResponse(404, "text/plain", "Book not found"); } } else { return new LibraryResponse(400, "text/plain", "ISBN is not valid"); } }
At least all the different execution paths are now present in the method. This code is hardly great either, although a better solution is hinted at in there by the
findBookByIsbn method which has been modified now to return an
Optional<Book>. That
Optional type speaks something to us: it says that it may or may not return a book and that we must handle both eventualities, although Optional can be used far more neatly than it is there. What we need is a way to make it similarly explicit that
findBookByIsbn will return either a valid ISBN number or some kind of invalid request error.
Maybe it's valid, maybe it isn't.
In Haskell there is the
Either type that lets you do exactly that, and it is frequently used for error handling.
Either values may be either
Left or
Right and the programmer must deal with both. Conventionally, the
Left constructor is used for indicating an error and the
Right constructor for wrapping a non-erroneous value. Personally I’m not a fan of the use of “left” and “right” in this way: those words only have meaning to me in terms of spatial orientation. Anyway, Java has its own stereotypical construction for this kind of thing, which has been established by the
Stream and
Optional classes. We could create a
MaybeValid type to wrap values that may be valid or not, and by designing it to resemble the built-in types we could cause the least astonishment:
interface MaybeValid<T> { <U> MaybeValid<U> map(Function<T, U> mapping); <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> mapping); T ifInvalid(Function<RequestError, T> defaultValueProvider); }
The
ifInvalid method is the terminating operation. It is meant to return the wrapped value in the case that it is valid, and the
defaultValueProvider function will supply the value when it is not valid. We can conveniently provide separate implementations for valid values and invalid values, respectively:
public class Valid<T> implements MaybeValid<T> { private final T value; public Valid(T value) { this.value = value; } @Override public <U> MaybeValid<U> map(Function<T, U> mapping) { return new Valid<>(mapping.apply(value)); } @Override public <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> mapping) { return mapping.apply(value); } @Override public T ifInvalid(Function<RequestError, T> unused) { return value; } }
The key parts here are:
ifInvalidreturns the wrapped value rather than executing the supplied function.
mapapplies the wrapped value to the mapping function and returns a new
MaybeValidinstance wrapping the mapped value.
flatMapapplies the mapping function and simply returns its result, which is already wrapped in a
MaybeValidinstance.
public class Invalid<T> implements MaybeValid<T> { private final RequestError error; public Invalid(RequestError error) { this.error = error; } @Override public <U> MaybeValid<U> map(Function<T, U> unused) { return new Invalid<>(error); } @Override public <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> unused) { return new Invalid<>(error); } @Override public T ifInvalid(Function<RequestError, T> defaultValueProvider) { return defaultValueProvider.apply(error); } }
The crucial differences are:
- The
mapand
flatMapmethods do not execute the mapping functions; they simply return another
InvalidRequestinstance. The reason they have to create a new instance is because the wrapped type might change (from
Tto
U).
- The terminating
ifInvalidmethod uses the
defaultValueProviderfunction to supply the return value.
- The default value provider is provided with the request error as its argument in case it needs it in order to return the appropriate result.
All of this means that we need to wrap the
isbnFromPath method in order to return a
MaybeValid instance:
MaybeValid<Isbn> maybeValidIsbn(Request request) { Isbn isbn = isbnFromPath(request); return isbn.valid() ? new Valid<>(isbn) : new Invalid<>(new RequestError(400, "ISBN is not valid")); }
And we must give a similar treatment to
findBookByIsbn:
MaybeValid<Book> maybeValidBook(Isbn isbn) { return findBookByIsbn(isbn) .map(book -> new Valid<>(book)) .orElseGet(() -> new Invalid<>(new RequestError(404, "Book not found"))); }
Please note that
RequestError is not an exception; it does, however, contain an HTTP status code, therefore this code must live in the application component that is dealing with HTTP requests and responses. It would be inappropriate for it to live anywhere else: in a service class, for example.
Now we can rewrite the endpoint like this:
public LibraryResponse findBookByIsbn(Request request) { return maybeValidIsbn(request) .flatMap(isbn -> maybeValidBook(isbn)) .map(book -> new SingleBookResult(book)) .map(result -> new Gson().toJson(result)) .map(json -> new LibraryResponse(200, "application/json", json)) .ifInvalid(error -> new LibraryResponse(error.httpStatus(), "text/plain", error.body())); }
Some of the lambdas could be replaced with method references but I left them as they are to bear the closest resemblance to the original code. There are other possibilities for further refactoring too. But notice how it reads clearly now as a sequence of chained operations. This is possible because the original was a indeed chain of composable functions: the return value from each function was passed as the sole argument to the next. The use of higher-order functions has allowed us to encapsulate the logic pertaining to validation errors inside the
MaybeValid subtypes. In the library service there are several endpoints with requirements similar to this and the
MaybeValid class could be used to simplify all of them.
So what about the monad...?
I mentioned the dread word “monad” earlier, and you've probably guessed that
MaybeValid is one, otherwise I wouldn’t have brought it up. So what is a monad exactly? First we need to clear one thing up, because you may have heard the word in the context of a “monadic function” - this is a completely different usage. It means a function with one argument (a function with two arguments is dyadic, and one with three arguments is triadic, etc.); this usage originated in APL and it has nothing to do with what we're talking about here. The monad we are talking about is a design pattern.
Doubtless you are already familiar with design patterns. The ones you already know, like Strategy, Command, Visitor etc. are all object-oriented design patterns. Monad is a functional design pattern. The Monad pattern defines what it means to chain operations together, enabling the programmer to build pipelines that process data in a series of steps, just like we have above:
- Retrieve the ISBN number from the request (may be invalid, i.e. wrong format).
- Look up the book by its ISBN number (may be invalid, i.e. not found).
- Create a
SingleBookResultDTO from the retrieved book.
- Map the DTO to a JSON string.
- Create a
LibraryResponsewith status 200 containing the JSON.
Each step may be ‘decorated’ with the additional processing rules provided by the monad. In our case, the additional rules are:
- The step actions are only to be performed when the value is valid.
- When the value is invalid then the error is passed along instead.
The terminating operation
ifInvalid makes the final decision about what to return: it returns the wrapped value if it is valid, otherwise it uses the supplied default value provider to build a suitable response from the client request error.
A formal definition.
More formally, the monad pattern is usually defined as an assemblage of the following three components, which together are known as a kleisi triple:
- A type constructor that maps every possible type to its corresponding monadic type. This wording does not make much sense in Java. To understand it, think of generic types, e.g:
Isbn→
MaybeValid<Isbn>.
- A unit function that wraps a value in an underlying type with an instance of the corresponding monadic type, e.g:
new Valid<Isbn>(isbn).
- A binding operation that takes a function and applies it to the underlying type. The function returns a new monadic type, which becomes the return value of the binding operation, e.g:
map(book -> new SingleBookResult(book))which yields a
MaybeValid<SingleBookResult>.
If you have these three components, you have a monad.
I heard Monads are all about encapsulating side-effects.
If you first came across the Monad pattern while learning Haskell, then most likely you would have learnt about it in the shape of the I/O Monad. The Haskell tutorial on I/O literally advises you not to worry about the Monad part for now, that you don't need to understand it in order to do I/O. Personally, that would just have the effect of making me worry more. Probably because of this, people who learn Haskell think that the purpose of a Monad is to encapsulate side-effects such as I/O. I'm not going to disagree, I cannot comment on that, but I have not come to understand the Monad pattern that way.
In my view, a Monad wraps a typed value (of any type) and maintains some additional state separately from the wrapped value. We have seen two examples here. In the case of the
Optional monad, the additional state is whether or not the value is present. In the case of the
MaybeValid monad, it is whether or not the value is valid, plus a validation error in the case that it is not. Notice that there are two types here: the monadic type (e.g.
Optional) and the wrapped type.
You can supply the Monad with a function that operates on the wrapped value. Whatever the type is of the wrapped value, the function's argument must match it. The Monad will pass its wrapped value to the function and will yield a new Monad, of the same monadic type, encapsulating the value returned by function. This is called a “binding operation”. The wrapped type of the new Monad may be different and that is fine. For example, if you have an
Optional wrapping a
Date, you may bind a function that maps a
Date to a
String and the result will be an
Optional wrapping a
String. If there is some functionality associated with the Monad's additional state, the Monad handles it as part of the binding operation. For example, when you pass a function to an empty
Optional, the function will not executed; the result is another empty
Optional. In this way, you can call a chain of composed functions in sequence, morphing from type to type, all within the context of the Monad.
Finally, the Monad provides a means for you to handle the value, taking account of the additional monadic state, in whatever the appropriate manner is given the context of your program. The appropriate behaviour is, naturally, handled using first-class functions. The other functions used in the binding operations are thus decoupled from the additional state maintained in the Monad and freed from all responsibility for dealing with it.
In other words, the Monad provides another tool in your box for creating abstractions, helping you to reduce the global complexity of your programs.
Next time.
In the next article we will continue our investigation of higher-order functions. We will take a look at currying, and how, despite seeming on the face of it very arcane, in fact it is very useful. To do this we will solve an exercise in Clojure, which will be a rather more involved exercise than the others we have seen in this series so far. We will go through it step by step and get a glimpse of the power of REPL-driven development.
Part 3 - First-Class Functions I: Lambda Functions & Map
Part 4 - First-Class Functions II: Filter, Reduce & More
Part 5 - Higher-Order Functions I: Function Composition and Monads
Part 6 - Higher-Order Functions II: Currying
Part 8 - Persistent data structures
Follow the author on Twitter | https://functional.works-hub.com/learn/the-functional-style-part-5-higher-order-functions-i-function-composition-and-the-monad-pattern-bc74a?utm_source=rss&utm_medium=automation&utm_content=bc74a | CC-MAIN-2020-16 | en | refinedweb |
It has taken a bit longer than originally planned, but I'm proud to release the first public version
of the RELAX-NG () support plugin for IntelliJ IDEA 7.0.x. The plugin can be
installed through IDEA's plugin manager and is also available here:
Features
The plugin provides IDEA with the capability to edit RELAX-NG schemas, with on-the-fly error
checking, quick fixes & completion - both for schemas using the XML syntax as well as the compact
syntax.
It also provides validation and code completion for XML instance documents. This works just like
IDEA's built-in support for XML Schema that uses the schema file that is associated with a certain
namespace URI for the validation & completion of elements in this namespace.
Using and setting up a RELAX-NG schema is just as easy as using an XML Schema, either by
automatically downloading it with the "Fetch external resource" intention or by setting up a mapping
via Settings -> Resources.
While the plugin provides IDEA with some basic knowledge of which elements/attributes are possibly
allowed at a certain point, it delegates the full-blown validation against the schema to the popular
RELAX-NG validator "Jing" () and maps its output into
IDEA's editor. This assures high-quality validation results.
A more complete feature overview will be posted soon.
Some background: RELAX-NG vs. XML Schema
Well, there's been a lot of criticism about XML Schema recently, namely it's said to be overly
complex, hard to learn, read & write and it's even inconsistent in certain aspects, e.g. regarding
the treatment of elements and attributes.
RELAX-NG is significantly easier to learn, read & write even for a casual user while at the same
time it is more powerful than XML Schema in certain aspects.
Many more compelling reasons to use RELAX-NG can be found here:
Known Issues
- Using Tools -> Validate from the Main Menu does not work currently because it is hard-coded to do
an XML Schema validation and obviously Xerces does not understand any RELAX-NG and will produce a
bunch of errors ().
- Especially the compact syntax editors may occasionally show semantically duplicate error messages.
This can happen due to IDEA showing both the plugin's parse errors as well as the ones from the
external parser.
- There are some caching issues regarding the highlighting updates when a schema changes which would
make an instance document valid/invalid.
Feedback
Please let me know about any bugs you encounter or if there's anything you think should be improved
or added. And of course messages like "It's been about time IDEA gets RELAX-NG support" are
appreciated as well ;)
Happy RELAXing,
Sascha | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206759425--ANN-RELAX-NG-Support-0-9 | CC-MAIN-2020-16 | en | refinedweb |
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Netbeans' hints generate the following equals() implementation:
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
According to Effective Java 2nd Edition Item 8 this is wrong because it violates the "Liskov substitution principle
[that] says that any important property of a type should also hold for its subtypes, so that any method written for the
type should work equally well on its subtypes". Joshua goes on to write:
"While there is no satisfactory way to extend an instantiable class and add a value component, there is a fine
workaround. Follow the advice of Item 16, 'Favor composition over inheritance.' Instead of having ColorPoint extend
Point, give ColorPoint a private Point field and a public view method (Item 5) that returns the point at the same
position as this color point"
(I unfortunately do not have Effective Java at hand, so this comment is based mostly on information from:
)
Well, in Java, composition is unusable in some cases (using the example: an instance of ColorPoint cannot be assigned to
variable of type Point). Also, I am not sure how much is this recommendation actually followed in the practice.
The advantage of getClass() != obj.getClass() is that if it fails, it is likely to fail much more predictably. If the
symmetry contract of equals is broken because of use of "instanceof", the behavior of Collections (for example) is
likely to be pretty unpredictable.
Anyway, your patch will be very welcome. But, please introduce an option for the user to choose the behavior.
I have no clue how to implement this. Is it possible to leave it open as an enhancement request and have someone in the
Netbeans team to add it?
Here is an excerpt from the first edition of the book on exactly this topic:
search for "There is simply no way to extend an instantiable class and add an aspect while preserving the equals contract"
In the second edition he goes to say:
"You may hear it said that you can extend an instantiable class and add a value component while preserving the equals
contract by using a getClass test in place of the instanceof test in the equals method:
// Broken - violates Liskov substitution principle (page 40)
@Override public boolean equals(Object o) {
if (o == null || o.getClass() != getClass())
return false;
Point p = (Point) o;
return p.x == x && p.y == y;
}
This has the effect of equating objects only if they have the same implementation class. While this may not seem so bad,
the consequences are unacceptable."
My understanding is that the "liskov substitution principle" simply states that object inheritance involves a "is-a"
relationship. When you use getClass() you violate the basic principal of object-oriented programming (inheritance).
Sure, the equals() contract doesn't explicitly discuss this, but it shouldn't have to: Java is a OOP language. Just my 2
cents.
Reassigning to the subcomponent owner since (as mentioned before) I have no experience working with the Netbeans code.
OK, we will have a look. This functionality should be optional.
Product Version: NetBeans IDE 6.7.1 (Build 200907230233)
In addition, Bloch J. also says that:
"For each "significant" field in the class, check to see if that field of the argument matches the corresponding field
of this.) "
Auto inserted equals() method does not do that:
public class Clazz {
float f;
double d;
public Clazz(float f, double d) {
this.f = f;
this.d = d;
}
}
Auto inserted equals() method: (Alt+Insert -> equals() and hashCode()... -> Select float -> Generate)
@Override
public boolean equals(Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
final Clazz other = (Clazz) obj;
if (this.f != other.f) {
return false;
}
if (this.d != other.d) {
return false;
}
return true;
}
To add a test case, using the Clazz class above:
Clazz x = new Clazz(Float.NaN);
System.out.print(x.equals(x)); // returns false
However, the Object equals() documentation [1] states that:
"The equals method implements an equivalence relation:
- It is reflexive: For any reference value x, x.equals(x) must return true"
[1]
Integrated into 'main-golden', will be available in build *201002120200* on (upload may still be in progress)
Changeset:
User: Jan Lahoda <[email protected]>
Log: #156994: use Float.floatToIntBits and Double.doubleToLongBits to compare doubles and floats.
*** Bug 218064 has been marked as a duplicate of this bug. ***
Jan,
This issue was closed as FIXED back in 6.7 but it did not actually fix the problem I reported. It's nice that you fixed handling for float and double, but equals() is still wrong (violates Liskov's substitution principle).
Should I leave this issue as reopened or create a separate issue?
(In reply to comment #9)
> Jan,
>
> This issue was closed as FIXED back in 6.7 but it did not actually fix the
I don't see this issue to be marked as fixed or resolved, and history does not say it ever was. True, the Target Milestone is set to 6.7, which I am going to reset.
> problem I reported. It's nice that you fixed handling for float and double,
I have fixed float and double as the handling was wrong without any discussion.
but
> equals() is still wrong (violates Liskov's substitution principle).
First, I assume these are the equals methods you are proposing to generate:
--------------
public class Test {
public static void main(String... args) {
A a = new A();
B b = new B();
assert a.equals(b) == b.equals(a);
}
static class A {
private int i = 0;
@Override
public int hashCode() {
int hash = 7;
hash = 53 * hash + this.i;
return hash;
}
@Override
public boolean equals(Object obj) {
if (!(obj instanceof A)) {
return false;
}
final A other = (A) obj;
if (this.i != other.i) {
return false;
}
return true;
}
}
static class B extends A {
private int j = 0;
@Override
public int hashCode() {
int hash = 3;
hash = 61 * hash + this.j;
return hash;
}
@Override
public boolean equals(Object obj) {
if (!super.equals(obj)) return false;
if (!(obj instanceof B)) {
return false;
}
final B other = (B) obj;
if (this.j != other.j) {
return false;
}
return true;
}
}
}
--------------
But, these equals methods fail to fulfil the symmetry contract, and the behaviour of such objects is quite unpredictable in Collections - how are we going to respond to bugs claiming (validly, IMO), that the equals implementation is wrong?
Having said that, I am not excluding possibility to eventually introduce a checkbox "Generate equals violating its contract" - you can make that happen faster by providing a patch, though.
Jan,
Can you please clarify how instanceof comparison violates equals() for
Collections?
Also, it's worth noting that the getClass() implementation of equals() violates
symmetry for sub-types that add convenience methods without modifying state.
(In reply to comment #11)
> Jan,
>
> Can you please clarify how instanceof comparison violates equals() for
> Collections?
It can cause surprising or hardly predictable behaviour. Consider this example (an extension of the above):
------
public static void main(String... args) {
A a = new A();
B b = new B();
List<Object> l1 = new ArrayList<Object>();
List<Object> l2 = new ArrayList<Object>();
addUnique(l1, a);
addUnique(l1, b);
addUnique(l2, b);
addUnique(l2, a);
System.err.println("l1.size=" + l1.size());
System.err.println("l2.size=" + l2.size());
}
private static <T> void addUnique(Collection<T> addTo, T toAdd) {
if (!addTo.contains(toAdd)) addTo.add(toAdd);
}
------
It prints this for me:
l1.size=2
l2.size=1
Which I would say is surprising at least. The example is artificial, because I tried to avoid dealing with hashcodes (which begs a question: should "a" and "b" from the example have the same hashcode or not?) - not following the hashcode&equals contracts for objects that are put into HashSet (or used as keys in HashMap) typically leads to hard-to-track bugs.
I realize there are usecases where the subclass is only modifying behavior, not data, where using instanceof is more or less OK. But in other usecases, it is likely to lead to "random" problems, and I prefer generating code that works in most cases, and fails more predictably (allowing to track and fix the failure) over code that works in most cases and fails less predictably.
Good example. I still think you're wrong and I think I've got the proof to back it up...
All object-oriented languages are bound by the rules of Liskov Substitution Principal (LSP) as defined here:. The principal is similar to Design by Contract.
Among other things, it states the following:
1. A method's specification (contract) consists of pre-conditions, post-conditions and invariants.
2. When a subclass inherits from a superclass, it cannot strengthen its pre-conditions, weaken its post-conditions or violate its invariants.
The equals() example you quoted is similar to the often-quoted "A square is not a rectangle" problem. Quoting
"According to geometry, a square is a rectangle, but when it comes to software design... it is not!
The basic problem is that a square has an additional invariant with respect to the rectangle, which is that all the sides are equal and because of that you cannot independently change the height and the width like you would do with a rectangle"
LSP states that square may not inherit from rectangle because replacing a Rectangle with a Square would violate its contract.
In your example, B.equals() violates Object.equals()'s contract because:
1. Object.equals()'s contract of symmetry requires that "A.equals(B) should return true if and only if B.equals(A) returns true".
2. LSP states that if B inherits from A, then you must be able to substitute A with B without any noticeable difference.
3. Taking these two into consideration, A.equals(C) should return the same result as B.equals(C), but it does not.
I'll give you a real-life example of why violating LSP is *very* bad. A few years ago I tried using the RXTX library for serial-port communication. Everything was fine until I wrapped their InputStream implementation in a BufferedInputStream. All of a sudden the code began to blow up at random locations (sometimes inside the JDK codebase!). The specification for InputStream.read() clearly states that -1 denotes the end of stream, but they decided that -1 should mean "there is no data for you to read right now, try again later". Now, because they violated LSP you could not substitute their implementation in place of InputStream. When BufferedInputStream saw -1 it naturally assumed that the end-of-stream had been reached. It would cache this result and never invoke -1 again. Because class inheritance cannot work without LSP.
Here is a final link to prove my point (taking an example from Collections):
.)"
Notice:
1. They remind you that Lists must be equal to other Lists and Sets must be equal to other Sets. They imply that when you implement equals(), you must ensure that this remains true (i.e. don't break symmetry!)
2. You can't break symmetry with respect to interfaces. For example, class "Collection" is not relevant to this discussion because it has no concrete implementation of equals().
3. All concrete classes (e.g. ArrayList, HashMap, TreeMap) extend base classes such as AbstractList or AbstractSet.
4. If you examine their implementation, you will discover that the base classes implement equals() using "instanceof" and none of the subclasses override equals(), precisely to avoid violating LSP.
Phew, that was a long post.
So in summary: code that violates LSP will cause unexpected behavior. equals() is just the tip of the iceberg. using getClass() in equals() is wrong because:
1. It violates LSP itself.
2. Programming errors (such as code that violates LSP) should fail-fast (the earlier the better). We shouldn't be accommodating them any more than we should be accommodating
int count = 5++;
throwing NullPointerException. They *should* cause the application to explode.
I strongly agree with gtzabari. The current equals() implementation generated by NetBeans is dangerous and simply broken, and teaches a wrong pattern. An equals() implementation should by default work like this:
if (obj == null)
return false;
if (obj instanceof ThisType) {
ThisType other = (ThisType) obj;
return this.field == null ? other.field == null : this.field.equals(other.field);
// extend the above to all relevant fields
}
return false;
(In reply to comment #14)
> I strongly agree with gtzabari. The current equals() implementation generated
> by NetBeans is dangerous and simply broken, and teaches a wrong pattern. An
> equals() implementation should by default work like this:
>
> if (obj == null)
> return false;
Correction; these two lines should be:
if (obj == this)
return true;
> if (obj instanceof ThisType) {
> ThisType other = (ThisType) obj;
> return this.field == null ? other.field == null :
> this.field.equals(other.field);
> // extend the above to all relevant fields
> }
> return false;
It would be nice to have this RFE implemented by having the checkbox (not checked by default) in the NetBeans 8.
feel why the fix should be disabled by default.
(In reply to comment #17)
> .
(In reply to comment #18)
>.
Jan,
You won't find Liskov substitution principle mentioned in any Javadoc. It is an architectural-level invariant of all object-oriented languages and affects much more than just Object.equals(). I encourage you to re-read comment #13 for a detailed explanation.
Here is an additional point (on top of comment #13 which I hope you've read by now):
"getClass() != obj.getClass()" prevents you from subclassing. Certainly you can extend a class and make equals() return false, but then you're not really subclassing. What you are doing is implementation inheritance, not behavior inheritance. What you *should* be using is composition:. This is true for C++ as much as it is true for Java.
(In reply to comment #18)
> instanceof of is in general breaking Object.equals contract.
It only does if the wrong type is used on the right-hand side of instanceof. You always have some interface type or common base class that specifies the concrete equals() contract (e.g. "two lists are equal if and only if..."). It's clear that this type is the one that must be used on the RHS of instanceof.
Generally, there are two types of situations:
1) You have a simple value class with just that one implementation. In general such a class should be final. In that case, instanceof is perfectly symmetric and cannot go wrong.
2) You have a base type that specifies an equals() contract with multiple concrete implementations. In this case, the "this.getClass() == obj.getClass()" is likely to be wrong, unless the subclasses happen to represent, without exception, a partioning of the value space of the base type, which in my experience is not too common. It's more likely to have special-case subclasses similar to how for List you have empty-list and singleton-list implementations without every empty or singleton list necessarily being an instanceof the respective class, or to have added-guarantee or added-functionality subclasses like for example how LinkedHashSet is to HashSet.
The problem I have with the "this.getClass() == obj.getClass()" implementation is that it prevents creating subclasses with a compatible and symmetric equals() implementation. Suppose for example that Set was a concrete class instead of an interface (with an implementation like HashSet) and whose equals() implementation would use this test. Then it wouldn't be possible to create a LinkedSet or EnumSet subclass that would be compatible with Set in terms of equals(). All these type-2 situations require use of instanceof.
To me, the rule when implementing equals() is quite simple: Determine which is the type that specifies the particular equals() contract. Then test for instanceof that type. This also has the added benefit that the implementation is self-documenting in which type is the anchor of the equals() contract. When you see "this.getClass() == obj.getClass()", you always have to wonder whether the author did really think things through.
Ideally, the IDE would ask the user for that base type, for the purpose of creating an equals() implementation. Furthermore, the IDE would warn the user if a super class other than Object already provides an equals() implementation.
> Using instanceof is (in general) breaking symmetry and symmetry *is* part of the
Object.equals contract.
I think this example will help get the point across:
1. I create class Rectangle which honors the Object.equals() contract.
2. I create class Square which extends class Rectangle. At this point, Square either violates Object.equals() or LSP.
3. If Rectangle.equals() uses instanceof, then Square violates symmetry.
4. If Rectangle.equals() uses getClass(), then Square violates LSP.
Why does violating LSP matter so much? Consider the following code:
class Test
{
public static void main(String[] args)
{
Square square = new Square(10);
// violates Square invariant which states that width == height
rectangle.setWidth(20);
}
}
Square is exposing implementation details, methods setWidth(), setHeight(), which violate its own invariants. What you should be doing instead is using composition:
class Square
{
private final Rectangle rectangle = ...;
public void setSize(int size)
{
rectangle.setWidth(size);
rectangle.setHeight(size);
}
}
(In reply to comment #21)
> 3. If Rectangle.equals() uses instanceof, then Square violates symmetry.
Huh, why should that be the case?
What would violate symmetry is if Square would override Rectangle's equals() implementation and test for "instanceof Square", because then "new Rectangle(5, 5).equals(new Square(5))" would yield true while "new Square(5).equals(new Rectangle(5, 5))" would yield false. However, there is no necessity for Square to override equals(); and if it does want to override equals(), it should test "instanceof Rectangle" and implement equals() in terms of shape equality to the other Rectangle.
>.)
Regarding LSP: let us imagine a property "an object is only 'equals' to itself". This is definitely true for j.l.Object. But is not true for many subclasses of j.l.Object, which implement their own equals - why is this *not* a violation of LSP?
Regarding Collection.equals and like: yes, (sometimes), there is a way to implement correct equals in that case using instanceof over a super interface. But that is generally not what this feature is doing: this feature is generating equals based on the fields, but doing Collection.equals-like comparison generally requires usage of methods (rather than fields) and very deep understanding of the desired behaviour (List's and Set's equals apparently must be different). This feature is simply intended to relieve the tedious task of creating equals for simple value-holding classes.
Regarding composition: I don't know how it relates to this discussion. Sure, creating Square like above is wrong. Does the IDE suggest to create such Square? How does it do so? What the IDE is trying to do (and should do that by default in the future, IMO) is creating a code that will be behave more predictably if misused. Or is this meant that the IDE should only generate equals for final classes, to encourage composition over inheritance (which seems to be the only feasible explanation for bringing this theme up over and over)? I doubt that would work very well.
Having said all that - I understand there are technologies that basically require instanceof based equals. But that is the only argument so far that seems valid to me to add an option like this.
?
Object provides a _default_ equals() implentation for all types that do not define a different equivalence relation. The contract of Object is NOT "an object is only equal to itself". Rather, that is just the default implementation provided by the Object class; it is not part of Object's interface contract. The contract of Object#equals() is specified by the five bullet points in the Javadoc of the method. Subtypes that, in their contract, define a more concrete equivalence relationship still conform to Object's contract, hence no LSP violation here.
[...]
> This feature is simply intended to relieve the tedious task
> of creating equals for simple value-holding classes.
In my opinion instanceof is perfectly correct for simple value-holding classes.
What I can understand to some degree is if you want to cater to inexperienced developers who don't really understand equals(), and provide a default implementation of equals() that is at least formally correct in the sense that it attempts to guarantee symmetry. However, this is to the detriment to developers like myself who use instanceof according to the principles I outlined in comment #20, and, more importantly, it teaches a pattern which is incorrect in the general case. It is almost never a good idea to have the getClass()-based equality test in a class designed for subclassing; at the same time the use of getClass() in the generated code suggests that it is in particular intended for the case of subclassing.
(In reply to comment #22)
> (In reply to comment #21)
> > 3. If Rectangle.equals() uses instanceof, then Square violates symmetry.
>
> Huh, why should that be the case?
My mistake. instanceof does not violate symmetry but this case would still violate LSP: "Invariants of the supertype must be preserved in a subtype."
> >?
You're confusing implementation details with the method contract. LSP only states that the class and its sub-types must implement the same contract. It does not state that the underlying implementation must remain the same. More specifically, can you provide an example where A is an Object and B is a sub-type, A.equals(B) but !B.equals(A)?
> Having said all that - I understand there are technologies that basically
> require instanceof based equals. But that is the only argument so far that
> seems valid to me to add an option like this.
No. You are missing a fundamental point here: implementations *must* honor LSP, otherwise you are breaking the basic premise behind OOP. The *only* time it is acceptable to use a getClass()-based equals is for final classes. That makes them the exception to the rule, not the opposite.
(In reply to comment #25)
> (In reply to comment #22)
> > .
For an immutable solution you simply wouldn't provide setters. The client code would pass the dimensions of any desired new instance to the constructor, and that's it.
(In reply to comment #26)
> For an immutable solution you simply wouldn't provide setters. The client code
> would pass the dimensions of any desired new instance to the constructor, and
> that's it.
Fair enough, but to reiterate my point: you should never use class inheritance for implementation reuse. In the aforementioned case, the design would be better off using composition and only exposing Square-specific methods. Using immutability doesn't protect you from having to expose Rectangle-specific methods to all subclasses. Once you add a couple of levels of implementation inheritance you end up with a method soup, as we have in Swing.
Getting back to the point. This issue is assigned to Jan and found the sound of it, he's yet to agree that LSP is a fundamental requirement of the Java language. Jan, if this is true, would it be possible to get two other high-ranking committers to weigh in (we need an odd number of people to break a tie)?
Let's not forget about it into the next NetBeans version.
*** Bug 163561 has been marked as a duplicate of this bug. ***
There is just no way to get equals() and hashCode() done right IF you do let JPA set the Id.
The only way it works is by either use a UUID or a manually managed sequence counter and set the Id yourself _before_ you do anything with that entity.
Thus I filed an issue to completely remove equals() and hashCode() at all.
@struberg,
I disagree for two reasons:
1. Non-JPA code is not affected (and it makes up the majority of Java code).
2. I consider JPA, as a technology, to be completely broken. See. As such, I don't use it at all. Take it from someone who used Hibernate for over 4 years, it will cause more pain than it'll solve.
Please update Target Milestone (it is now out of date).
This thread is tl;dr so here goes nothing:
I'd never consider two concrete classes of different `getclass()` types to be equal regardless of `instanceof` relationships.
Take for example EJ Item 8: I consider `Point` and `CounterPoint` instances as always different (`equals()` returns false) just because `Point` has two properties and `CounterPoint` in total has 3 properties.
Another example: I consider `JButton` to be different from `MyJButton extends JButton` even if I only add functionality and no extra properties. If I wanted to know if the properties of my `MyJButton` are the same as that of another `JButton`, I'd compare properties instead of using `equals()`. This also has the advantage of deciding which properties contribute to your definition of 'equality' instead of having the library designers decide that for you.
Zom,
I understand what it is you want, but this is not what equals() is designed for. What you are asking for violates equals() requirement of symmetry (JButton equals MyJButton but the latter is not true).
If you want to add sameClass() to all your implementations so be it, but you can't implement equals() this way without violating the specification. If you do so, you *will* run into problems.
This issue is about Netbeans respecting the Java Specification for Object.equals(). Users who want something different are welcome to declare a different method.
This was reported back in 09. Is there any way that we can get an option to generate the alternate version?
Still there's no conclusion to be implemented, most of the thread tries to present one of the options as the "optimal", but none of them is.
There probably is no good way to implement "correct equals". IMHO, arguments as "this is not what equals was desinged for" are misleading, because in the first place, the Object.equals contract was wrongly designed from the start, as Effective Java book and discussion in this thread shows: equals on subclasses will either break subtyping (use getClass()) or will break symmetry (equals contract - use instanceof).
Given that I do not consider this issue a defect, rather an enhancement because we lack features, the code as it is designed is working well.
In a very few selected cases, it is possible to instanceof compare to base class, but those cases are IMHO not detectable by the IDE.
So the here's the solution:
* add a fix that Object.equals generates instanceof. Add a checkbox to the 'generate hashCode and equals' feature. Add option to hint to choose from instanceof or getClass()
* issue a warning when generating an overriding equals(), while adding some new felds (it would be best to show the warning only if the superclass uses instanceof)
I would go even so far to suggest that the instanceof should be preferred, because for subclasses with not-important fields, instanceof works while getClass() not. But what is a key or unimportant key is a matter of design and THAT's why we choose fields to be included in equals().
It is not effective to add a hint/warning which would warn the user against equals() in a non-final class, since this is how we all write usually.
It COULD be possible to warn if an instance of class A extends B:
1/ is used in a collection, whose type parameter is a class B or superclass of B
2/ type parameter class defines equals
3/ A also defines equals
(4/ or some other subclass of B defines equals - requires index access)
In this case, it is just a matter of optimization of collection operations, whether A.equals(B) or B.equals(A) is called. Sure this does catch only a very few situations and is rather complex.
Svata,
+1. I like the overall direction you are taking this in.
Leave it up the user to decide which form of equals is correct / the one they want.
- A checkbox to use instanceof
- A checkbox to test against super
If instanceof is chosen, we can have a little warning message saying that it breaks the contract for equals / may have undesired side-effects. Also, retain the state of the checkboxes across the running NetBeans session -- in case the user is adding equals/hashCode for a batch of similar classes.
'test against super' checkbox would also apply to hashCode. It would be nice to have 'super' as an option when generating toString as well.
I can try writing a patch for this if someone can point me in the right direction.
I would suggest to not warn against instanceof, at least not unconditionally. Instead I would make instanceof the default as proposed by Svata. Warnings, if any, rather belong in the subclass, like for example FindBugs' EQ_OVERRIDING_EQUALS_NOT_SYMMETRIC and EQ_DOESNT_OVERRIDE_EQUALS.
One way I see it is this:
– Using getClass() prohibits equals()-compatible subclasses
– Using instanceof prohibits equals()-incompatible subclasses (unless the class is abstract)
– For final classes, it doesn't make a difference.
(By equals()-compatible, I mean the situation we have with subclasses of List or Set, for example. By equals()-incompatible, I mean the partitioning of the equality relation that results from using getClass().)
So which option anyone prefers depends on whether one prefers equals()-compatibility or equals()-incompatibility across subclasses. As I believe that classes should only be extended for implementing a subset of the value space of the super class (= LSP-compatible class specialization), and that implementation-only inheritance should be replaced by delegation, I am firmly in the equals()-compatibility camp.
Now, I don't expect to convince everyone to join that camp. But please recognize that it is a sound design philosophy, and that following it _requires_ using instanceof. Hence a general warning against it would be counterproductive.
I second matthies and svata's suggestion [1] but I suggest against using loaded terminology such as "equals-compatible" because users who favor getClass() dispute that their approach cannot be compatible (it is... but for a smaller subset of designs than instanceof).
The key difference between instanceof and getClass() is that the requirement that implementations be "symmetric" (if A equals B, then B must equal A). instanceof can be symmetric in some cases, getClass() can never be.
[1] I am in favor of:
1. Making "instanceof" the default.
2. Warning if a class uses "instanceof" and subclasses override equals/hashCode.
3. Allowing users to configure whether they prefer instanceof or getClass()-based implementations of equals/hashcode on a project-level.
4. Warning if a class uses "getClass()" and has any subclasses.
5. Warnings should appear on the subclasses, not the parent class.
What's being proposed, in terms of FB-style analysis and what not, sounds like a lot of work.
Just give the user the relevant options, with the established standard being the default. Correct me if I'm wrong, but generating equals() with getClass() seems to be the established standard across major Java IDEs -- it's the only option in NetBeans and the default option in Eclipse.
offbynull,
It's not that simple. The entirety of the JDK source-code and many libraries in the wild (e.g. Guava) contradict what you consider to be "standard". I encourage you to read the entire discussion thread to gain a better understanding of why that is.
I think the most important thing at this point is to just get an implementation going. We can worry about finer details later. | https://bz.apache.org/netbeans/show_bug.cgi?id=156994 | CC-MAIN-2020-16 | en | refinedweb |
Objections to burdening hosts with more RA responsibilities
Short version: Even if it were easy to change host stack and
application software (and I think it is very
difficult to change every host, except over decades)
here is an argument directly against what I think
some RRG people prefer.
There is an ideal of "dumb network - smart hosts"
which has served the Internet well, at least in
comparison to the telephone network.
However, extending this principle to *require* that
sophisticated scalable, routing and addressing
functions be performed in hosts is a bad idea.
Its fine to make such functions optionally
implemented in hosts, but burdening *all* end-user
hosts with Routing and Addressing responsibilities,
beyond the current DNS lookup and IP address stuff,
is undesirable since it causes major problems for
hosts which either must be simple and minimal
and/or which connect via slow, unreliable and/or
expensive links.
Even where the hosts are well-connected and
have plenty of CPU resources, these new schemes
typically involve complex interchanges of packets
so two hosts can identify and authenticate each
other's application level identity or address or
whatever.
At present, a packet can be sent directly and incur
only the delay inherent in the physical network path.
With the proposed protocols in which the end-user
host must perform new routing and addressing
functions, there typically needs to be multiple
packets, including to and from each host, before
user-level communications can occur. So the inherent
physical delays are multiplied by 2 or more.
Such proposals, while conceptually elegant, would
slow down the establishment of user communications
in general, and be unworkably expensive, slow and
less reliable if one or both hosts relies on a slow,
unreliable and/or expensive WiFi, 3G or GEO/LEO
satellite link.
Further to Christian Vogt's message:
Host changes vs. network changes
it is possible to imagine a global network in which every physical
address for end-user networks and their publicly accessible hosts is
PA space. Then only ISPs get PI space and the BGP system keeps
working, with a scaling problem limited to the number of PI prefixes
the ISPs advertise.
In order to do this, either the host networks or the hosts themselves
need to do extra work to create logical addresses to which
applications are bound, where these addresses are not at all tied to
the particular one or more PA addresses the host uses for physical
connection to the Net.
The core-edge separation schemes (LISP, APT, Ivip and TRRP) provide
scalable routing without changing host responsibilities at all.
LISP-CONS/ALT and TRRP frequently involve significant delays to
initial packets, sometimes equivalent to dropping the packets for a
some time like a second or two, while the ITR waits for a map reply
from their global query server networks. APT and Ivip use
full-database local query servers so mapping replies come quickly and
reliably - in a few tens of milliseconds which is typically an
insignificant delay to the commencement of a new communication session.
Other than these delays, the core-edge separation schemes involve no
new delays - and no new responsibilities or management packets - for
hosts.
The idea of moving all new RA functionality to hosts is to do away
with the need for ITRs, ETRs etc. - though there still may need to be
a mapping database with either local query servers or a global system
to handle map requests.
No-one seems to be advocating such a "change hosts so they do all the
new work" approach for IPv4, but there are various proposals to adapt
IPv6 to this approach, or to develop new addressing regimes. I think
most of these involve new stack <-> application interfaces - and
therefore completely rewritten application software.
Conceptually, this is simple and elegant. I can't imagine how such
as system could be widely adopted on a voluntary basis, but this
message is about the in-principle objections to the outcome, not
about how difficult it would be to have such a scheme widely adopted.
As long as the extra work must be done by the hosts themselves, then
there are some fundamental problems:
1 - This significantly raises the minimum complexity of any
host, in terms of CPU capacity, software requirements and
storage. This is bad for mobile devices and embedded
devices such as electricity meters and the famed IPv6 light
switch.
2 - AFAIK, in every potentially practical scheme, a lot of the
burden is in complex cryptographic exchanges which are needed
in order for hosts to be able to reliably identify and
authenticate each other.
3 - There is extra management traffic between the hosts and
probably between each host and some network-centric
support system, such as PKI or a mapping system.
4 - Since no user communications can take place before these
extra tasks are performed, and since these tasks involve
exchange of packets, these proposals mean that it will
generally take longer to begin a communication session
than it does now.
5 - All delays in these packets required for session
establishment - and worse-still, loss of these packets -
will further delay the establishment of user-level
communication.
I propose that in any scalable routing system or clean-slate network
redesign, that it better not to require host functionality beyond the
what is currently expected of all hosts:
1 - Except where configuration, software or a previous packet
provides an IP address, use a DNS lookup to get an IP
address of the other host.
2 - Send and receive packets using that IP address and one of the
potentially multiple IP addresses of the current host.
The core-edge separation schemes (LISP, APT, Ivip and TRRP - except
draft-meyer-lisp-mn-00) preserve existing host responsibilities.
Some, such as Ivip, make it possible for sending hosts to perform
their own ITR function, but this is not required of any host. This
is a low-cost and generally efficient approach, except for when the
host is on a slow, expensive or unreliable last-mile link. It would
work ten in principle, but it would tend to slow the establishment of
new sessions due to the delays and packet losses in the local link.
LISP mobile draft-meyer-lisp-mn-00 involves the MN being its own ETR.
The MN does not need to be an ITR, and the ETR function is purely
for decapsulating packets - AFAIK it is not an authoritative query
server for mapping querys sent over the ALT network. Although
draft-meyer-lisp-mn-00 involves this extra ETR responsibility, AFAIK,
this doesn't involve much extra management traffic or delay - so in
terms of this critique, it is fine. The problem is that the MN's CoA
(care of - physical - address) can't be behind NAT. Maybe that's not
such a limitation in IPv6 if IPv6 turns out to be NAT-free. A
critique is at:
The TTR approach to mobility:
retains conventional DNS and IP address host responsibilities, but
has an extra piece of software to provide tunnels to one or more
Translating Tunnel Routers which perform ITR and ETR functions. The
TTRs and potentially some other servers also communicate with this
extra piece of software to find out where the MN is and to coordinate
how it establishes two-way encrypted tunnels to (typically) nearby TTRs.
These are extra host responsibilities, but they do not alter the main
stack or the application software at all. Furthermore, while there
is some overhead of management traffic, such as that required to
ensure full delivery of packets in both directions between the TTR
and the MN, these extra packets do not delay the establishment of new
communication sessions. (Note: this proposal could be adapted to
allow non-retry of some classes of packets, such as streaming media
and VoIP packets.)
What I am arguing against is proposals of the "dumb network, smart
host" variety which require *all* hosts to do some additional complex
things in order to make the whole network more elegant, immune to
scaling problems or whatever.
With hosts today, or with the hosts using a core-edge separation
scheme, each host can send or receive a packet with no extra delay or
traffic on its link (other than the initial packet delays of
LISP-CONS/ALT and TRRP).
We could design a conceptually elegant Internet, with perfectly good
scaling properties, simple routers and PA space for all end-user
hosts, by making every host behave like a MN with its own portable
application layer address (or multiple such addresses) which it
maintains no matter what one or more physical address or addresses it
is physically using. The physical address and logical (application
layer) address could be separate namespaces, or separate sections of
the one addressing range in a single namespace.
HIP is an adaptation of IPv6 which does this. However, it requires
packets flow back and forth between two hosts, and I think packets to
and from other network-based systems, before the two hosts can
establish a communication session upon which actual application-level
packets can travel.
Even if if every host had plenty of CPU power etc. and had fast,
reliable, low-cost links, it would still be unacceptable since it
slows the establishment of every communication, including the
equivalent of a send-and-forget UDP packet, due to the need for
complex cryptographic exchanges. Whatever the delay time due to the
physical separation of the hosts, the speed of light in fibre, or of
radio waves in air, the delay in establishing communications will be
some multiple of the physical delay time, whereas today, there is no
such multiplication of delays beyond the DNS lookup and the TCP
handshake.
Even if we weren't forced to rely on voluntary adoption to solve the
routing scaling problem, I would still probably favour, for the
"Ideal Internet" design, a system like that of the core-edge
separation schemes, with distinct ITR and ETR functions.
The system would not require any host to perform the ITR function.
However, the ITR function should be optionally possible to implement
in the sending host, since this will often be possible and desirable,
and can be done for no hardware cost whatsoever. (Ivip has this,
though not if the sending host is behind NAT. It would be possible
to extend this to sending hosts behind one or more layers of NAT.)
Maybe a host could be its own ETR, but I the system should never
require this (LISP-MN does require this).
To require every host on the Net to perform all its functions on the
shifting sands of one or more essentially transient PA addresses,
like the CoA addresses of mobile hosts, with no special
transformation of packets in the network itself, is at odds with the
need for mobile devices to be inexpensive, simple and robust when
packets are lost or the link is slow or expensive.
Let the network support the hosts by centrally (such as within each
ISP or end-user network) transforming packets, including tunneling
them between ITRs and ETRs, so all end-user hosts can just get on
with being hosts - so they do not need to be involved in the complex
business of portability, multihoming and inbound traffic engineering.
- Robin | http://www.firstpr.com.au/ip/ivip/RRG-2009/host-responsibilities/ | CC-MAIN-2020-16 | en | refinedweb |
Message-ID: <273500990.34600.1579316740642.JavaMail.tomcat@2a0fd85a362c> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_34599_660621729.1579316740639" ------=_Part_34599_660621729.1579316740639 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Is described here. These steps = are necessary to do to change your code in accordance with changes in MPS= p>=20
User can copy Java code as text and paste into MPS editor as base langua=
ge.
There are 3 modes of such a paste: paste as c= lass, paste as methods and paste as statements.
If a user selects an appropriate paste mode and all names in the copied p= iece of code are known then the pasted base language code is fully legal.
If a user selects some statements and pastes as= methods or selects several methods and pastes as statements MPS will not b= reak but the result of paste will be strange.
Unresolved names from c= opied code are pasted as unresolved references with an appropriate resolve = info.
=20
QueryMethodIdEditorProviderExpression concept was removed from editor la= nguage (it was visible in completion menu with query method cell pr= ovider alias). Looks like nobody uses this expression anymore, so = no migration script was created. If you by any chance open a model created = in previouse version of MPS and using this expression:=20
=20
now you'll see just red error editor cell instead of it.=20 =20
=20
jetbrains.mps.analyzers=20
This language allows to create user-defined data flow analyzers.=20
To create a custom dataflow analyzer you need to create your own instruc= tions, rules and analyzer.=20
There are two types of dataflow constructors (rules): concept rules &= ; pattern rules.=20
Pattern rules are applied to every node which matches containing pattern= . Custom instructions can be inserted before or after nodes.=20
Pattern rule example:=20
For every node matching this pattern, nullabl= e analyzer inserts instruction "nullable(p)" after the node, matc= hing ifStatement and instruction "notNull(p)" before the node, ma= tching ifTrue.
Concept rules are applied to all instances of concrete= concept.
Concept rule example:=20
This rule inserts notNull instruction after each= dot expression.
See Nullable Analyzer in baseLanguage dataflow aspect for more examples.==20
The first usage of analyzers language. Provides nullable state data flow= analysis in baseLanguage.=20
Analyzer checks that operand of dot expression can't be null. Othe= rwise it reports error as in examples below:=20
Analyzer supports @Nullable and @NotNull ann= otations:
Also, Nullable analyzer reports errors if checking something for n= ull/not null is superfluos:=20
=20
=20
=20
Three levels of export are available now:=20
@export(public)
@export(module)
@export(namespace=3Dxxx.yyy)
Any root node can be marked with the
@export() annotation b=
y using intentions.
Unannotated nodes in MPS are=
@export(public) by default.
References or instances of concepts that are not allowed (when correspon= dent target or concept was not exported) are marked with the "usage of= nonpublic API" error.=20
For stub libriaries default is
@export(module). It is possi=
ble to annotate all roots in stub library with
@export() using=
export line in library descriptor or save annotations fro=
m library creator untouched (when export: do not set).
=20=20
Example of a custom viewer for
List:
<=
/p>=20
Editor state is saved into workspace file, including folding state, sele= cted cell and text selection inside the selected cell.=20
Referent set handlers acted like listeners: a referent was always set an= d only then they were invoked. Now you are able to specify whether original= referent is to be set or not. Use "keeps original reference" cla= use.=20
=20
JUnit run configuration was able to execute tests written on unitTest la= nguage. Now it also supports tests written on baseLanguage using both JUnit= 3 and JUnit 4 technology.=20
Added "Mute Breakpoints" button which disables all breakpoints= for current debugger session.=20
=20
Annotation is a form of a root editor presentation that shows detailed i= nformation for each line of code. In particular, for each line you can see = the version from which this line originated, user ID of the person who has = committed that line, and the commit date. Shortly, annotated view of a root= node helps you find out who did what.=20
To open it, use context menu of left editor margin.=20
=20
Once you opened annotation, you can see the following information for ea= ch line:=20
=20
If you hover mouse cursor over annotation line, you can see tooltip with= commit description. If you click on it, you will see list of changes for t= he current commit. Using context menu, you can also view full difference fo= r this commit and copy revision number to clipboard.=20
=20
The traditional workflow cycle in MPS has always been like the following= : edit =E2=80=93 generate =E2=80=93 reload classes. The generation phase he= re understood as "generate Java files and compile", if you're wor= king with the Java as the target language that is. For the next version of = MPS we're planning to make this process more transparent and extensible. Fo= r that we're introducing the Make Scripting Framework.=20
The key concept in the make framework is facet. A facet contrib= utes a set of targets, which represent states in the (abstract) ma= ke script to be executed. Targets are interlinked with the dependencies suc= h as "before" or "after", organizing themselves into a = DAG (direct acyclic graph). The script is constructed at runtime from the f= acets declared in the languages used in the model being made, and the targe= ts are executed in the natural order, starting with the "earliest"= ; target and proceeding forward to the requested final target, which is usu= ally "make". Facets can be extended by introducing new targets an= d/or overriding existing targets.=20
The set of targets in the current implementation (which BTW is only a pr= ototype and is expected to change greatly) is roughly the following:= =20
Targets: [ configure -> generate -]-[-> textGen -]-[-> compil= e -> reloadClasses -]-[-> make ] Facets: Generate TextGen JavaComp= ile Make=20
The set of facets (Generate, TextGen, ...) is defined at the time the ma= ke script is constructed and should be controlled by the languages imported= into each of the models. For now this set is statically defined to reflect= the exact same generation workflow has been used before.=20
The above mentioned facets and corresponding actions can be found in the=
language jetbrains.mps.make.facet. The two actions provided =E2=
=80=93 Make and Build =E2=80=93 can be used instead of
=20=20
The main "Build" menu has been re-worked completely. Here's wh= at changed:=20
=20
=20
A new action "Preview Generated Text" opens preview in an mult= i-tab editor. Each tab corresponds to a single file.=20
The action is available on a model and is also bound to a shortcut ctrl+= alt+shift+F9 (MacOS cmd+alt+shift+F9).
=20
=20
Watches API and low-level watches for java debugger are implemented. &qu= ot;Low-level" means that user can write expressions using variables, a= vailable on the stack. To edit a watch, a so-called "context"(use= d variables, static context type and this type) must be specified. If the s= tack frame is available at the moment, context is filled automatically.= =20
=20
Watches can be viewed in "Watches" tree in "Debug" t= ool window. Watches could be created, edited and removed via context menu o= r toolbar buttons.=20
=20
=20=20
Trace information generation is done now via textGen language. Concepts =
that require trace information generation should implement one of the three=
interface concepts:
TraceableConcept,
ScopeConcept and
UnitConcept. Use script "Upgrade Trace Info Genera=
tion" from plugin language to upgrade.
See more infomation in Debugger documentation page=20
=20
=20
Specified TableModel instance used to create and edit table grid. Each t= able child cell contains common MPS editor specified for given child node. = TableModel interface as well as a number of it's reusable implementations a= re defined in jetbrains.mps.lang.editor.table.runtime solu= tion.=20
=20
On this screenshot rows is a name of child role keeping= Row concept instances and cells is a nam= e of child role defined within Row concept keeping DataCel= l concept which will be displayed within table grid. In addition optional <= strong>headerRowlinkDeclaration - multiple child reference keeping= nodes representing table header row - can be specified there.=20
=20
The tabbed editors part of plugin language was slightly= improved. Now new aspects can be added for a concept from any language, wh= ich will allow to "extend" tabbed editors, in contrast with old T= abbedEditor allowing to define the set of aspects only once.=20
=20
The editors themselves changed their appearance - now there is no 3rd le= vel of tabs. Instead, an editor has a toolbar showing all aspects available= for the main node and allowing to create a new ones.m=20
=20
Now it's possible to create a non-language plugin for MPS. This will all=
ow to create "standalone" plugins, which will not require the pre=
sence of plugin's module for the plugin to work (e.g. a VCS plugin).
= Note: Icons in actions don't work properly in non-language= plugins for now.
In addition, plugin components are no more created via reflection. This = will improve reloading performance of large plugins.=20.=20
=20
=20
First version of MPS-specific contextual help was published on:. You can press F1 at any time in= MPS to navigate to associated help topic. We are going to improve this hel= p and finally provide our users with cotext-specific assistance in all MPS = dialogs in upcoming release.=20
Merge dialog was rewritten almost from scratch. Now it has new, more tra=
nsparent and easy behavior. New merge dialog is so far experimental and hav=
e several issue (for instance, no model imports merging), so if you want to=
temporary switch it off, add
-Dmps.newmerge=3Dfalse line in <=
code>bin/mps.vmoptions.
When MPS model files become corrupted because of merge conflicts, you ge= t dialog with list of files (including models) which need merging. This dia= log was not changed at all, everything is left as it was.=20
=20
On clicking "Merge Files" button on
.mps model fi=
le light and easy dialog, "Merge Model" appears. There you can se=
e model tree with roots which are modified in at least one of model version=
s. Roots are colored if they have changes which are not applied yet:
=20
Merge roots dialog is invoked on double clicking root in tree. There you= can see three editors. By default, all changes are not applied. To apply a= ll non-conflicting changes in root, you can click corresponding button on t= oolbar. Conflicting changes can be applied only manually. When you apply ch= ange, all other changes which are conflicting, automatically become exclude= d. When you exclude change, all conflicting become automatically applied. O= n clicking "Apply" dialog button result of merging root becomes s= aved, on clicking "Cancel" =E2=80=94 descarded.=20
=20
=20
TODO=20
The language jetbrains.mps.platform.conf can be used to define = configurations (such as plugin.xml), as well as present existing configurat= ion files as stub models.=20
MPS has switched to stable Idea platform version 103.72.=20
Type system trace is a new tool which provides information about type ch= ecking.=20
It can be used to better understand why a type error occurred.=20
The tool window is divided into two parts. In the left part the trace is= shown.=20
Type system engine applies type system rules to the program consequently= and the effect of this is shown in the trace.=20
The trace is a tree of elementary operations such as=20
Also there are some operations which don't have effect on nodes' types. = They are shown for better understanding of type checking process.=20
The other part of this tool shows type system state - the result of appl= ying all operations up to the selected one.=20
So the trace and state together can be used as "type system debugge= r".=20
=20
This example shows typical usage of typesystem trace. Here we can see a = simple type error - an integer constant put into a list of strings.=20
You can select "Trace for selected node" if you want to see on= ly information about specific node (it should be selected before opening th= e trace).=20
=20
Trace supports navigation to typesystem rules and nodes they were applie= d to.=20
Now MPS could be started from MPS. To do so, create an instance of run c= onfiguration MPS.=20
=20
Configuration "MPS" starts a new instance of MPS with differen= t configuration and caches directory (by default located in $HOME$/.MPSDebu= g1x/). It could be started under debugger.=20
=20
We are constantly working on MPS performance but still sometimes MPS edi= tor works too slow especially with some large model elements. By using Powe= r Save Mode editor option you can manually switch off/on automatic backgrou= nd error highlighting in editor. In Power Save Mode F5 key can be pressed t= o update error highlighting in active editor.=20
=20
Bundled ant library was updated to version 1.8.2.=20
Separate confluence space was created for MPS 2.0 Documentation=20
Now you don't need to spend much time migrating your code to MPS 2.0. Ju=
st execute "Main Menu -> Tools -> Start Migration to MPS 2.0&quo=
t;, and it will execute all activities needed to migrate the code to 2.0 fo=
r you.
Read more about migration= to 2.0
When you tried to use ctrl-n to lookup a node in your project, you alway= s got some nodes not from project models. Now it's fixed.=20
Editor tabs (it's what can be seen on the bottom of concept's editor, fo=
r example) were slightly extended.
Now you can choose between 3 modes= :
See Main Menu -> File ->Settings -> MPS Editor for available op= tions=20
Structure tool is now available in MPS. Moreover, if you wrote a TabbedE=
ditor for some concept before, the structure tool will work with that conce=
pt, because it's based on the same relations.
Along with the structure tool, a structure navigation became available:<=
br />
J= ust press ctrl-F10 (cmd-F10 for Mac users). Context search on typing is als= o available.
Press ctrl-alt-insert (ctrl-alt-n for Macs) and create a node aspect rig=
ht from the editor:
This is also brought by TabbedEditors, so you can add your= own aspects there.
By default MPS displays an error highlighting the fact that newly create=
d non-abstract concept has no editor definition associated with it:
<= img class=3D"confluence-embedded-image" src=3D"/download/attachments/397838= 43/NonAbstractConceptWithoutEditor.png?version=3D1&modificationDate=3D1= 308928820000&api=3Dv2" data-image-src=3D"/download/attachments/39783843= /NonAbstractConceptWithoutEditor.png?version=3D1&modificationDate=3D130= 8928820000&api=3Dv2" />
"Generate Default Editor" quick= -fix is available from intentions menu (Alt+Enter) to create new default ed= itor definition for it now:
Whole table Row/Column can be selected by placing a cursor inside partic= ular editor cell and pressing Shift+Left/Right/Up/Down keys:=20
=20 | https://confluence.jetbrains.com/exportword?pageId=36016481 | CC-MAIN-2020-05 | en | refinedweb |
Handles data related with the board to be visualized. More...
#include "../3d_rendering/ccamera.h"
#include "cinfo3d_visu.h"
#include <3d_rendering/3d_render_raytracing/shapes2D/cpolygon2d.h>
#include <class_board.h>
#include <3d_math.h>
#include "3d_fastmath.h"
#include <geometry/geometry_utils.h>
Go to the source code of this file.
Handles data related with the board to be visualized.
Definition in file cinfo3d_visu.cpp.
Definition at line 238 of file cinfo3d_visu.cpp.
Definition at line 239 of file cinfo3d_visu.cpp.
This is a dummy visualization configuration.
Definition at line 47 of file cinfo3d_visu.cpp. | https://docs.kicad-pcb.org/doxygen/cinfo3d__visu_8cpp.html | CC-MAIN-2020-05 | en | refinedweb |
Generating Pink Noise (Flicker, 1/f) in an FPGA
Intro
The Harmon Instruments signal generator provides simulated phase noise modulation. Typical signal sources include a flicker noise component at low offset frequencies. The 3 dB/octave slope is more complex to create than 6 dB/octave which can be produced with a simple integrator.
Stochastic Voss-McCartney
The Voss-Mcartney method of generating pink noise is a multirate algorithm. The output is the sum of many random variables updated at sample rates that are power of two multiples of each other. There are a few other useful references. In the stochastic version, rather than updating each random variable in the sum at fixed intervals, they are updated at randomized intervals averaging the update rates in the non stochastic version.
In this implementation, a sum of 32 values is used, numbered 0 to 31. Value 0 has a probability of update of 0.5 each clock cycle, value 1 0.25, value n 2^-(n+1), etc. It might be desirable to add a value that updates every cycle for improved high frequency conformance, but it's not required in this application.
Here's a simple Python model:
import numpy as np import random ram = np.zeros(32) def get_pink(): for i in range(len(ram)): if random.getrandbits(1): ram[i] = random.gauss(0, 1) break return np.sum(ram)
The low frequency deviation in the plot below is due to the number of samples used, not the generator.
The plot below is the output of the phase noise source set to pure pink as measured with a spectrum analyzer. This shows good performance over at least 6 decades of frequency range.
nMigen Implementation
I'm unaware of a closed form solution to the output spectral density. Numerical evaluation gives aproximately 48100 / sqrt(Hz) at 1 Hz assuming the gaussian input has a standard deviation of 10102.5.
The value in RAM at index 31 is updated twice as often as it should be in this code. That may be fixed at some point in the future. At 250 MSPS, that results in noise that should be below 0.058 Hz being at 0.058 Hz.
On Artix-7, usage is 58 LUTs, 96 registers. An external gaussian white noise source is required as well as 31 pseudo random bits per clock.
class PinkNoise(Elaboratable): def __init__(self, i, urnd): self.i = i # 20 bit signed white gaussian noise self.urnd = urnd # 31 pseudo random bits self.o = Signal(signed(25)) def elaborate(self, platform): m = Module() # count trailing zeros bits_1 = self.urnd cond_2 = bits_1[:16] == 0 bits_2 = Signal(15) result_2 = Signal() cond_3 = bits_2[:8] == 0 bits_3 = Signal(7) result_3 = Signal(2) cond_4 = bits_3[:4] == 0 bits_4 = Signal(3) result_4 = Signal(3) ptr = Signal(5) m.d.sync += [ result_2.eq(cond_2), bits_2.eq(Mux(cond_2, bits_1[16:31], bits_1[:15])), result_3.eq(Cat(cond_3, result_2)), bits_3.eq(Mux(cond_3, bits_2[8:15], bits_2[:7])), result_4.eq(Cat(cond_4, result_3)), bits_4.eq(Mux(cond_4, bits_3[4:7], bits_3[:3])), ptr.eq( Mux(bits_4[0], Cat(C(0,2),result_4), Mux(bits_4[1], Cat(C(1,2),result_4), Mux(bits_4[2], Cat(C(2,2),result_4), Cat(C(3,2),result_4) ) ) ) ), ] ram = Memory(width=len(self.o) - len(ptr), depth=2**len(ptr)) wrport = m.submodules.wrport = ram.write_port(domain='sync') rdport = m.submodules.rdport = ram.read_port(domain='comb') m.d.comb += [ wrport.en.eq(1), wrport.addr.eq(ptr), wrport.data.eq(self.i), rdport.addr.eq(ptr), ] i_pipe = Signal(signed(len(self.i))) ram_pipe = Signal(signed(len(rdport.data))) m.d.sync += [ i_pipe.eq(self.i), ram_pipe.eq(rdport.data), self.o.eq(self.o + i_pipe - ram_pipe), ] return m | https://harmoninstruments.com/blog/ | CC-MAIN-2020-05 | en | refinedweb |
I'll be at MIX this year. If you want to meet up for a chat, drop me a note.
I
Type names look like a simple concept. Every type has a unique name within the assembly that defines it.
It turns out that there is a slight complication. Even though the CLI specification and the reflection API suggest that a type name is simply a string, in reality it is a pair of strings: { namespace, name }
Here's some code that uses IKVM.Reflection to generate an interesting assembly:
using IKVM.Reflection;using IKVM.Reflection.Emit;class Program { static void Main() { var universe = new Universe(); var ab = universe.DefineDynamicAssembly(new AssemblyName("Test"), AssemblyBuilderAccess.Save); var modb = ab.DefineDynamicModule("Test", "Test.dll"); modb.__DefineType("A.B", "C").CreateType(); modb.__DefineType("A", "B.C").CreateType(); ab.Save("Test.dll"); }}
This creates a valid (and verifiable) assembly containing two different types, both named A.B.C.
If you disassemble this assembly with ildasm and the resulting IL is reassembled with ilasm you won't end up with the same assembly. I don't know if there are any obfuscators that use this trick, but maybe they should.
Reflection APIs assume that the last dot in the type name separates the namespace from the name, so doing Type.GetType("A.B.C") will return the first type {"A.B", "C"}. You can get the second type by enumerating all types in the assembly.
Note that static binding just works, because in that case the {namespace, name} pair is specified explicitly.
This is the final part of three part series on exception performance that started in 2008. Previous parts are:Exception Performance Part 1Exception Performance Part 2
Let's introduce a slight variation of ExceptionPerf1 where we throw 10000 exceptions instead of 100000 and throw the exception from a method, instead of directly in the loop.(); }}
When this is compiled with the standard Debug configuration in Visual Studio 2010 it yields the following performance numbers (times in milliseconds) when run either with Ctrl-F5 (i.e. no debugger attached) or F5 (debugger attached):
For comparison, the table also includes the time it takes to run the equivalent code in HotSpot 1.6 Client VM on x86.
As we saw in the previous two articles on exception performance, .NET is significantly slower in handling exceptions, so that is not surprising. However, what is surprising is how much the overhead is of simply having the debugger attached.
Another depressing thing to note is that things have gotten much worse with .NET 4.0.
Unfortunately, many developers have the habit of always running their code in the debugger, so when they first try code IKVM.NET compiled code from within Visual Studio they often get a very bad impression of the performance, simply because the debugger sucks.
I considered filing a connect bug for this, but you know they'll just close it as By Design. I guess the CLR is only a Common Language Runtime, if you language doesn't use exceptions for control flow.
Mono 2.10 was released this week. It includes a version of the Mono C# compiler that uses IKVM.Reflection as its back end.
Last year in the two days before FOSDEM I hacked mcs to use IKVM.Reflection and while at FOSDEM I showed this hack to Miguel and was met with his usual enthusiasm and he told me that he was already planning on talking to me about using IKVM.Reflection for mcs.
In May, Kornél Pál did a, much more complete than my hack, prototype port of mcs to IKVM.Reflection and that resulted in a number of IKVM.Reflection bug fixes and enhancements.
Last November, Marek Safar starting work on integrating IKVM.Reflection support for real and made sure that everything was production ready. This resulted in some mcs restructuring and yet more IKVM.Reflection fixes, features and also performance improvements.
The result of all this is that now IKVM.Reflection is a great library to use if you have a codebase that has both a dynamic and a static compilation mode (as both IKVM.NET and mcs have), because you can easily share the bulk of your code between System.Reflection and IKVM.Reflection without having to suffer from the (significant) limitations of System.Reflection for the static scenario. | http://weblog.ikvm.net/default.aspx?date=2011-04-01 | CC-MAIN-2020-05 | en | refinedweb |
- Type:
Bug
- Status: Closed (View Workflow)
- Priority:
Major
- Resolution: Done
- Affects Version/s: 9.2
-
- Component/s: Dynamic VDBs
- Labels:
import a dynamic vdb in Teid designer, the vdb and related models are generated, however, the descriptions are not taken into account properly. We expect to see the description in the Teiid designer.
Same goes when we export a Teiid vdb to a vdb xml: the descriptions are not generated as xml elements. | https://issues.redhat.com/browse/TEIIDDES-2810?attachmentOrder=asc | CC-MAIN-2020-05 | en | refinedweb |
Pretty is the New Prozac.
import auto parts
Online holiday packages ) Video
Online holiday packages Online holiday packages Sorry, the fields marked in red need your attention, please fix them to continue. International & India tour packages Travel ...
The post Online holiday packages ) Video appeared first on Answer .
Santa-ana Finance
BONUS FREE DOFOLLOW Links SITE | http://www.beautifultoo.com/blog/pretty-is-the-new-prozac/?unapproved=7288&moderation-hash=a5ebf6e68b7e44606044c45d57802f9d | CC-MAIN-2020-05 | en | refinedweb |
Domain Component (DC) interfaces are abstractions above eXpress Persistent Objects (XPO) classes. Notwithstanding its benefits, this abstraction imposes certain technical limitations. DC is only suitable for certain usage scenarios. Mobile and SPA User Interfaces are not supported for DC. DC is in maintenance mode and we do not recommend its use in new software projects.
With Domain Components, you can define interfaces instead of regular business objects inherited from XPO classes. These interfaces will declare the required properties or data fields. The way this data is to be processed (the Domain Logic) is then defined by the creation of special classes that determine how interface members behave when an object is constructed, when properties are changed, etc. Actual business classes are automatically implemented by XAF at runtime, based on the logic and interfaces provided. You can package interfaces and domain logic in an assembly, and use it as a domain library. If you then create a new XAF application, you can reference the domain library and reuse the domain components. Since interfaces support multiple inheritances, the required business objects can be combined into new domain components. With interfaces, you can make your domain model independent of implementation.
The following snippet illustrates a typical Domain Component definition.
[DomainComponent]
public interface IPerson {
string LastName { get; set; }
string FirstName { get; set; }
string FullName { get; }
void Copy(IPerson target);
}
<DomainComponent> _
Public Interface IPerson
Property LastName() As String
Property FirstName() As String
ReadOnly Property FullName() As String
Sub Copy(ByVal target As IPerson)
End Interface
As you can see in the code above, the interface decorated with the DomainComponentAttribute is considered to be a Domain Component. This interface must expose properties of the business class to automatically be generated when the application runs. You can use attributes applicable to regular business classes and their properties. For instance, you can decorate the LastName property with the RuleRequiredFieldAttribute, and the interface itself with the NavigationItemAttribute. The FullName property is declared as read-only. Thus, it is required that the logic of its calculation is defined. Additionally, the Copy method implementation is required. The required logic should be implemented in a Domain Logic class associated with the Domain Component via the DomainLogicAttribute attribute.
To specify what classes should be generated, register the required domain components with the application. Edit the Module.cs file and invoke the ITypesInfo.RegisterEntity method in the ModuleBase.Setup method override.
using DevExpress.Persistent.BaseImpl;
// ...
public override void Setup(XafApplication application) {
base.Setup(application);
XafTypesInfo.Instance.RegisterEntity("Person", typeof(IPerson));
}
Imports DevExpress.Persistent.BaseImpl
' ...
Public Overrides Sub Setup(ByVal application As XafApplication)
MyBase.Setup(application)
XafTypesInfo.Instance.RegisterEntity("Person", GetType(IPerson))
End Sub
Based on the code above, the Person class derived from the DCBaseObject class will be generated. The generated class will expose properties and utilize the Domain Logic of the IPerson Domain Component.
To use custom base class, pass the baseClass parameter to the RegisterEntity method. | https://docs.devexpress.com/eXpressAppFramework/113262/concepts/business-model-design/business-model-design-with-xpo/domain-components/domain-components-basics?v=18.2 | CC-MAIN-2020-05 | en | refinedweb |
import "go.chromium.org/luci/config/server/cfgclient/access"
Package access implements a config service access check against a project config client.
Note that this is a soft check, as the true access authority is the config service, and this check is not hitting that service.
If access is granted, this function will return nil. If access is explicitly denied, this will return ErrNoAccess.
This is a port of the ACL implementation from the config service:
ErrNoAccess is an error returned by CheckAccess if the supplied Authority does not have access to the supplied config set.
Check tests if a given Authority can access the named config set.
Package access imports 10 packages (graph) and is imported by 4 packages. Updated 2020-01-18. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/config/server/cfgclient/access | CC-MAIN-2020-05 | en | refinedweb |
Opened 10 years ago
Closed 8 years ago
#1683 closed Feature Request (Rejected)
Displaying thumbnails of files in a GUI
Description
It would be great to have a function who can get and display thumbnails of files in a GUI.
Not only JPG, BMP but PDF, DOC...
Thanks
Eric
Attachments (0)
Change History (7)
comment:1 Changed 10 years ago by anonymous
comment:2 Changed 10 years ago by Eric
GUICtrlCreateIcon works fine for icons but not for thumbnails.
comment:3 Changed 10 years ago by Jpm
Which API can be used to display thumbnails?
comment:4 Changed 10 years ago by Eric
I don't know but i have found a method who works for Windows XP but not for Windows 7.
comment:5 Changed 10 years ago by Jpm
- Owner set to Jon
- Status changed from new to assigned
comment:6 Changed 8 years ago by guinness
Your request can be achieved by using the following example.
#include <APIConstants.au3> ; Download from #include <Constants.au3> #include <GUIConstantsEx.au3> #include <WinAPIEx.au3> ; Download from Local $iIcon GUICreate("", 128, 128) $iIcon = GUICtrlCreateIcon("", 0, 48, 48, 32, 32) _Icon_Set($iIcon, @ScriptFullPath) GUISetState(@SW_SHOW) Do Until GUIGetMsg() = $GUI_EVENT_CLOSE Func _Icon_Clear($iControlID) Local Const $STM_SETIMAGE = 0x0172 If $iControlID = -1 Then $iControlID = _WinAPI_GetDlgCtrlID(GUICtrlGetHandle($iControlID)) EndIf Return GUICtrlSendMsg($iControlID, $STM_SETIMAGE, $IMAGE_ICON, 0) EndFunc ;==>_Icon_Clear Func _Icon_Set($iControlID, $sFilePath) ; Idea initially from Yashied. Local Const $STM_SETIMAGE = 0x0172 Local $hIcon, $tInfo If $iControlID = -1 Then $iControlID = _WinAPI_GetDlgCtrlID(GUICtrlGetHandle($iControlID)) EndIf $tInfo = DllStructCreate($tagSHFILEINFO) _WinAPI_ShellGetFileInfo($sFilePath, BitOR($SHGFI_ICON, $SHGFI_LARGEICON), 0, $tInfo) $hIcon = DllStructGetData($tInfo, 'hIcon') Return _WinAPI_DestroyIcon(GUICtrlSendMsg($iControlID, $STM_SETIMAGE, $IMAGE_ICON, $hIcon)) EndFunc ;==>_Icon_Set
comment:7 Changed 8 years ago by trancexx
- Resolution set to Rejected
- Status changed from assigned to closed
The accent is on using UDFs.
Guidelines for posting comments:
- You cannot re-open a ticket but you may still leave a comment if you have additional information to add.
- In-depth discussions should take place on the forum.
For more information see the full version of the ticket guidelines here.
GUICtrlCreateIcon() works just fine.
Search the forum for possible UDF's in this area. | https://www.autoitscript.com/trac/autoit/ticket/1683 | CC-MAIN-2020-05 | en | refinedweb |
FPS based on Unity’s FPS Controller Part 8 - Mob prototype, enemy FOV, player health, game over screen
The time has come to create some basic enemies for the FPS prototype. The implementation includes basic AI behaviour, field of view and mobs being able to kill the player.
First of all, let me mention the excellent tutorials from Unity in Action and Sebastian Lague. Both helped me a lot in making the mob and FOV scripts work.
Mob prefabs are simple capsules with a HealthManager and a BasicAi script attached to them. BasicAi makes the enemies move and also casts a sphere in every update to check if an obstacle is blocking the way of the mob. If an obstacle is hit by the ray and is within the distance set in obstacleRange then a random angle is chosen within [-110, 110] which becomes the new orientation for the mob. This simple logic stops enemies from bumping into or going through obstacles.
Lesson learned: always check the number of arguments when using a layermask with raycast. I spent more than an hour debugging the code because the layermask didn’t seem to work at all only to find out that the layermask was implicitly cast to a float and was considered as a distance value by the raycast method :D.
BasicAi also has a ShootTargets() coroutine. This iterates over all the visible targets registered by the FOV script, instantiates a bullet prefab and rotates it towards the target. My decision to move the center of the player’s character controller to its feet now proved to be problematic because using LookAt() on all axes made the bullet fly towards the feet and colliding with the floor. At the moment, the bullet’s y position remains intact so the bullet does not change its vertical angle. The coroutine yields for attackCooldown seconds before firing again.
using System.Collections; using System.Collections.Generic; using System.Collections.ObjectModel; using UnityEngine; public class FieldOfView : MonoBehaviour { [SerializeField] private float viewRadius; [SerializeField] [Range(.0f, 360f)] private float viewAngle; [SerializeField] private LayerMask targetMask; [SerializeField] private LayerMask obstacleMask; private List<Transform> visibleTargets = new List<Transform>(); public float ViewRadius { get { return viewRadius; } } public float ViewAngle { get { return viewAngle; } } private void Start() { StartCoroutine(FindTargetsWithDelay(0.2f)); } private IEnumerator FindTargetsWithDelay(float delay) { while (true) { yield return new WaitForSeconds(delay); FindVisibleTargets(); } } private void FindVisibleTargets() { visibleTargets.Clear(); Collider[] targetsInViewRadius = Physics.OverlapSphere(transform.position, viewRadius, targetMask); foreach (Collider targetCollider in targetsInViewRadius) { Transform target = targetCollider.transform; Vector3 directionToTarget = (target.position - transform.position).normalized; if (Vector3.Angle(transform.forward, directionToTarget) < viewAngle / 2f) { float distanceToTarget = Vector3.Distance(transform.position, target.position); if (!Physics.Raycast(transform.position, directionToTarget, distanceToTarget, obstacleMask)) { visibleTargets.Add(target); } } } } public Vector3 DirectionFromAngle(float angleInDegrees, bool isAngleGlobal) { if (!isAngleGlobal) { angleInDegrees += transform.eulerAngles.y; } return new Vector3(Mathf.Sin(angleInDegrees * Mathf.Deg2Rad), .0f, Mathf.Cos(angleInDegrees * Mathf.Deg2Rad)); } public ReadOnlyCollection<Transform> GetVisibleTargets() { return new ReadOnlyCollection<Transform>(visibleTargets); } }
Bullet also has its own script. In Update() it moves the bullet with the given speed, if something else enters its trigger collider the bullet is destroyed (except when the other object is also a projectile). A reference is acquired to the target’s HealthManager (if it has one) and its Damage() method is invoked with the bullet’s damage.
using UnityEngine; public class Bullet : MonoBehaviour { [SerializeField] private float speed = 10f; [SerializeField] private int damage = 1; private void Update() { transform.Translate(0, 0, speed * Time.deltaTime); } private void OnTriggerEnter(Collider other) { if (!other.GetComponent<Tags>().Projectile) { Destroy(gameObject); } HealthManager targetHealthManager = other.gameObject.GetComponent<HealthManager>(); if (targetHealthManager != null) { targetHealthManager.Damage(damage); } } }
I won’t include the FOV script here, it is mostly based on Sebastian Lague’s tutorial and also has an editor extension making field of view circles visible when an enemy is selected. Of course, the code can be found on my BitBucket as always.
To make enemies be able to hurt the player I made separate implementations of NonPlayerHealthManager and PlayerHealthManager. These two scripts both inherit from the abstract class HealthManager. Upon reaching a health amount of 0 or less, PlayerHealthManager disables all control scripts, unlocks the mouse cursor and calls the method OnGameOver() which has been implemented in a new script called MainLogic attached to a GameManager object.
This method enables a special game over GUI panel and a restart button which reloads the current scene. This coupling between the manager class and PlayerHealthManager is quite ugly so I’m going to add some messaging system to the code next time. | http://www.mattsnippets.com/fps-based-on-unity-controller-part-8/ | CC-MAIN-2020-05 | en | refinedweb |
Section 1.3 in Structure and Interpretation of Computer Programs is about Formulating Abstractions with Higher-Order Procedures. As an example, the authors use three simple sums:
- a sum of an integer range
- a sum of the cubes of an integer range
- a sum of a series that converges to π/8
The purpose is to highlight what is common between them and what differs. The differences boil down to:
- calculating the current term
- calculating the next value
The authors show that all three functions can be expressed as calls to a higher-order function. Here is the
sum function in Clojure:
(defn sum [term a next b] (if (> a b) 0 (+ (term a) (sum term (next a) next b))))
The function takes four parameters:
[term a next b].
term is the function for calculating the current term,
a is the start of the range to perform the sum on,
next is the function for calculating the next value in the sequence, and
b is the end of the range. The
sum function first calculates the current term by applying
term to
a, then adds that to the result of calling itself again, but this time not with
a, but the next value in the sequence:
(next a). It keeps doing that until
a is greater than
b, then the upper bound is reached and the function is done. It doesn’t call itself any more, but instead returns zero.
sum-integers
Using
sum to calculate a sum of integers is easy. Each number in the sequence will itself be the actual term. We use the
identity function as
term, since it returns its input unchanged. The
inc function increments its argument by one, so that is ideal for
next. Here is the implementation:
(defn sum-integers [a b] (sum identity a inc b))
I’ll test the function in my REPL (
user> is the prompt):
user> (sum-integers 1 10) 55
sum-cubes
The second example is summing up the cubes of an integer range. We define the
cube function:
(defn cube [x] (* x x x))
The implementation of
sum-cubes is very similar to
sum-integers. We simply replace
identity with
cube, so the current term will be the cube of the integer, rather than just the integer:
(defn sum-cubes [a b] (sum cube a inc b))
user> (sum-cubes 1 10) 3025
pi-sum
The third example is a series that happens to converge to π/8:
In order to be able to apply the generic
sum tool to this problem, we need to write two helper functions: one function for calculating the current term:
pi-term, and one function for calculating the next value in the sequence:
pi-next. We can see, by looking at the series above, that each term is
1/(x*(x+2)), and that the next term has an
x that is 4 greater than the previous term:
(defn- pi-term [x] (/ 1 (* x (+ x 2)))) (defn- pi-next [x] (+ x 4))
Note that I didn’t use
defn to define the functions, I used
defn-. That is a Clojure feature for making the helper functions private, so we don’t clutter up the namespace with internal stuff. Now we have what we need to call
sum.
(defn pi-sum [a b] (sum pi-term a pi-next b))
We’ll test this function with the range 1 to 10, multiplying with 8 so we’ll get π (remember that the series converges to π/8):
user> (* 8 (pi-sum 1 10)) 10312/3465
Here we can see that the output is Clojure’s built-in type Ratio. Division of integers that can’t be reduced to an integer yields a ratio, rather than a floating point or truncated value. This can be really useful, but when the numbers get large, it can be hard to see what the ratio represents:
user> (* 8 (pi-sum 1 100)) 3400605476464206445954873476681150352328/1089380862964257455695840764614254743075
Any numeric operation on a Ratio involving Doubles, will yield a Double. That means we can skip the Ratio and go directly to Double by using 1.0 as the starting point:
user> (* 8 (pi-sum 1. 1000)) 3.139592655589783
The Substitution Model
If it’s not immediately obvious to you how
sum works, you can always use the Substitution Model. Evaluate the body of the procedure with each formal parameter replaced by the corresponding argument. Let’s evaluate the call
(sum-integers 1 3):
Retrieve the body of
sum-integers:
(sum identity a inc b)
Replace the formal parameters
a and
b by the arguments
1 and
3:
(sum identity 1 inc 3)
The problem is reduced to evaluating a call to
sum with four arguments. So let’s retrieve the body of
sum:
(if (> a b) 0 (+ (term a) (sum term (next a) next b)))
Now we replace
sum‘s formal parameters
[term a next b] with our arguments
identity,
1,
inc, and
3:
(if (> 1 3) 0 (+ (identity 1) (sum identity (inc 1) inc 3)))
1 is not greater than 3, so it reduces to:
(+ (identity 1) (sum identity (inc 1) inc 3))
(identity 1) evaluates to
1, and
(inc 1) evaluates to
2, so it reduces to:
(+ 1 (sum identity 2 inc 3))
We’ll evaluate the call to
sum again, directly replacing this time:
(+ 1 (if (> 2 3) 0 (+ (identity 2) (sum identity (inc 2) inc 3))))
2 is not greater than 3, so this reduces to:
(+ 1 (+ 2 (sum identity 3 inc 3)))
We retrieve the body of
sum again, replacing parameters with arguments:
(+ 1 (+ 2 (if (> 3 3) 0 (+ (identity 3) (sum identity (inc 3) inc 3))))
3 is not greater than 3, so this reduces to:
(+ 1 (+ 2 (+ 3 (sum identity 4 inc 3))))
We retrieve the body of
sum for the last time:
(+ 1 (+ 2 (+ 3 (if (> 4 3) 0 (+ (identity 4) (sum identity (inc 4) inc 3))))
4 is greater than 3, so this reduces to:
(+ 1 (+ 2 (+ 3 0)))
which evaluates to 6.
The Substitution Model is a useful technique when you’re new to functional programming, or when you just need to know in detail what is really going on in a function.
This Post Has One Comment
Pingback: Numerical Integration (With Precision) | Jayway Team Blog - Sharing Experience | https://blog.jayway.com/2011/03/20/the-substitution-model-a-tool-for-understanding-recursion/ | CC-MAIN-2020-05 | en | refinedweb |
This article is about tinyAVR (ATtiny13, ATtiny25, ATtiny45, ATtiny85) library for 7-segment display modules based on TM1637 chip. These TM1637 modules provide two signal connections (CLK and DIO) and two power connections (VCC and GND). Signal pins can be connected to any pair of digital pins of the AVR chip. Signal pins configuration is defined at the top of library header file, where it can be modifed. Complete TM1637 library code is on GitHub, here.
Key Features
This lightweight library has the following features:
- display digits
- display raw segments
- display colon
- brightness control
- display on/off
- software I2C
Example Code
This sample code demonstrates basic usage of the library
#include <stdint.h> #include <avr/io.h> #include <util/delay.h> #include "tm1637.h" int main(void) { uint8_t n, k = 0; /* setup */ TM1637_init(1/*enable*/, 5/*brightness*/); /* loop */ while (1) { for (n = 0; n < TM1637_POSITION_MAX; ++n) { TM1637_display_digit(n, (k + n) % 0x10); } TM1637_display_colon(1); _delay_ms(200); TM1637_display_colon(0); _delay_ms(200); k++; } }
6 thoughts on “ATtiny13 – TM1637 Library”
Hello Podkalicki,
I’m beginner to this world and I truly don’t understand what does the code “0b01110110” means, Bing says it is equivalent to 10.
Can you please tell me to which code system the code belongs to.
THANK YOU
It’s a binary (BIN) form of expressing numbers. Normally, we use the decimal (DEC) form. Computers usually use the binary or hexadecimal (HEX) form.
Thank You for the libary. It works but I have Problems with Digit2 and its colon. Sometimes colon is on but should be off. Or Digt2 shows wrong segments. Are there known issues?
You’re first reporting it. Could you create an issue on Github? I’ll try to reproduce it.
Hello Podkalicki,
Thank you for this example. How I can use this library to connect tm1637 to a PIC16F88?
Thank you for your help
You’re welcome! I have no idea how to port it to PIC. This library has been designed for tinyAVR family. | https://blog.podkalicki.com/attiny13-tm1637-library/ | CC-MAIN-2020-05 | en | refinedweb |
Introduction
There are number of ways provided by Microsoft to create a setup project for windows application.
But when I started to create one, I got nothing but queries and confusions of how to start and where to start. There are numerous articles I found over network, explaining to create a setup project, but some does not work as they say, and some do not have a live example to follow.
The driving force for me to write this article is my QC team, who except the main application to test also verified my setup installer with their 100% effort L. And guess what, they were successful to find bugs in that too.
In this article I would like to explain a step by step process to create a windows application and a setup installer for the same in a very simple manner, that is easy to understand and follow, alternatively there are n no. of ways to do so.
Start the Show
Firstly let’s create a simple one form windows application, having a text box and a button only.
Creating will.
Just wanted to write few lines of code, so I binded button’s click event to show text box’s text,
Primary Objective
So far so good. Now let’s create a installer for the same windows application. Right click on solution and add a new project to your solution like in following figure,
And add a setup project by Other project Types->Setup and Deployment->Visual Studio Installer
The project will be added to the solution , now open file system editor by clicking on the project and selection the option to open file system editor, like shown Add output project window, select it as a primary output as shown below and click ok.
The Primary output will be added as shown below, having type defined as Output.
In the meanwhile lets add some more functionality to our windows application, lets read a file and show its output in a message box on button click. Therefore just add a text file, I called it Sample.txt to the bin\debug\Input folder, Input is the custom folder I created to place my txt file.
Write few lines of code just to read the txt file from Startup path, in my case bin\debug, it could also be bin\release as per project build, and specify the file folder name and file name to tead the content.I chose to keep my txt file at startup path so that I could explain how we can create files and folders at the time of installation. Now we also need this Input folder and a Sample.txt file at the time of installation to be located at the location of installed application.
For file operations I added the namespace System.IO, needless to specify this though.
Therefor running the application will show two message boxes one after another showing text box text and text from Sample.txt file.
Now this folder creation logic has to be implemented in out Always Create property to True.That means folder will always be created whenever we run the installer, after a fresh build release.
Create Shortcuts
Time to add at desktop when the application will launch. Below figures explain how to add an icon.
Cut the shortcut created at Application Folder and Paste it under User’s Desktop Folder.Job done to create a shortcut to user’s desktop.
For shortcuts to be created at User’s Program Menu, add a new folder to User’s Program Menu, that will be created at program’s menu location, in that folder create a new shortcut pointing the primary output as done for creating desktop shortcut. The three images as following describes,
Uninstall
We always have an option to uninstall the application from control panel’s Programs and Features list as simple as that but how about creating our own uninstaller, that too under programs menu so that we do not have to disturb control panel.
Step1:
Right click on File System on target Machine and Add Special Folder->System Folder as shown in below figure.
Step2:
Right click on newly created system folder and browse for msiexec.exe file in local System.Windows32 folder. This file takes responsibility to install and uninstall the application based on certain parameters specified.
Set the properties of the file exactly as shown in the figure,
Step4:
Now create a new shortcut under User’s program Menu and point its source to msiexec as shown below. You can add icons and name to your shortcut. I have given it the name “Uninstall”.
Step5:
Press F4 key by selecting the setup project, we see a list of properties, we can customize these properties as per out installation needs, like set Product name, Author, Installation location, I’ll not go into a deep discussion about all this, as they are quite easy to understand and set.
Just take a note of product code shown below in the list of properties. We would need product code as a parameter to msiexec for uninstallation.
Step6:
Right click the Uninstall shortcut and set the arguments property as shown in below figure,
/x {product code} /qr
/x is for uninstalltion.
You can get the whole detaild list of parameters and their use at , chose the one you like to.
Step7:
Save all and Rebuild the setup project.
Job Done !
Now our setup is ready to install our windows application.
Just browse the debug folder location of Setup project, we find a msi and a setup.exe, one can run either of two to initiate setup.
When started we see a setup wizard, having screens which welcome’s user, which asks for location to install(already showing default location set.)
After completing the wizard,Click the close button.
Now Job is done, we can see our shortcuts to the application created at desktop and User’s Program Menu like in below given figure.
Now if we navigate to out installation location we can also see the Input folder created and Sample.txt file resting inside it.
Run the application and see the output, works perfectly as was when executed from visual studio.
Click on uninstall to remove the application, the wizard launches as shown below,
Custom Actions
Just wanted to give a glimpse of Custom Actions we can define, while making setup.
Custom actions are the actions which contains customized functionality apart from default one at the time of installation and uninstallation. For e.g. My QC team reported a bug that when the run the application and in background uninstall the application, the application still keeps on running. As per them it should show a message or close while uninstallation. Hard to explain them the reason, I opted for implementing their desire in setup project.
1.Just add an installer class to the windows application we created earlier. When we open the installer class we can see the events specified for each custom action i.e. for Installation, Uninstallation, Rollback,Commit.
My need was to write code for uninstallation, so I wrote few lines to fulfill the need,
The code contains the logic to find the running exe name at the time of uninstallation, if it matches my application exe name, just kill the process. Not going into more details to it.Just want to explain the use of custom actions.
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;
using System.Linq;
namespace CreatingInstaller
{
[RunInstaller(true)]
public partial class Installer1 : System.Configuration.Install.Installer
{
public override void Install(IDictionary savedState)
{
base.Install(savedState);
//Add custom code here
}
public override void Rollback(IDictionary savedState)
{
base.Rollback(savedState);
/);
}
}
}
}
2. Click on Custom Actions Editor after selecting CreatingInstallerSetup project.
3. We see custom action editor pane on left window, right click it to add custom action and select primary output in Application Folder.
4. We see primary output added as custom actions now. Now at the time of uninstallation my custom action will be fired, and application will be closed while uninstalling it.
.Net Framework
What if installation machine do not have .net framework. We can specify our own package supplied with installation, so that our application do not depend on the .net framework of client machine, but points to the package we supplied to it to run.
Right click on Setup project, to open properties window.
Here we can specify pre-requisites for the application to install. Just click on Prerequisites button and, in the opened prerequisites window, select checkbox for the .net framework application needs to follow, and select radio button at no. 2 i.e. Download prerequisites from the same location as my application. Press OK save the project and re-build it.
Now when we browse to Debug folder of Setup project we see two more folders as a result of actions we did just now.
Now this whole package has to be supplied to the client machine for installation of the application.
Now re-install the application from setup.exe, and launch it using shortcuts.
Conclusion
The tutorial covers the basic steps for creating the installation project. I did not go very deep explaining registry, license agreements though. There are many things to be explored yet to understand and master this topic. However, this article was just a start for a developer to play around with setup and deployments. Happy Coding J
Read more:
- C# and ASP.NET Questions (All in one)
- MVC Interview Questions
- C# and ASP.NET Interview Questions and Answers
- Web Services and Windows Services Interview Questions
Other Series
My other series of articles: | http://csharppulse.blogspot.com/2013/03/creating-msi-package-for-c-windows.html | CC-MAIN-2017-47 | en | refinedweb |
This article will be the first one from a series of articales that will cover System I\O operations you can covered using C# classes.
Classes for I/O operations
C# contain multiple classes that we can used to handle I/O operations (Files /Folders) for this article you need to add the following 'using' in your code:
using System.Text;
using System.Threading.Tasks;
using System.IO;
By adding this namespace we now have access to different classes that can help us perform multiple operations on files and folders.
Main classes in System.Io namespace:
Class Directory
This class contain static methods for Directories manipulation, using this class you can do multiple operations for folders without using/Creating instances from class.
Class Summary:
This table contain the main methods you can use while working with this class.
Code Examples:
Create Folder
This example create root folder under root drive with inner folder.
Directory.CreateDirectory("C:\\New Folder\\SubFolder");
Add Folder for existing Tree
This code will add folder into the root folder we created on previous step:
Directory.CreateDirectory("C:\\New Folder\\SubFolder1");
Delete Folder
Example 1:
This code will delete an ‘Empty’ folder (No files or folder under given path)
Directory.Delete("C:\\New Folder\\SubFolder1");
Note!
Id objects resides under this path the program will crush with “IOException” error.
Example 2:
This code will delete root folder + sub objects (Folder /Files)
Directory.Delete("C:\\New Folder",true);
Note!
If folder opened or used by another process the software failed with “IOException error”
Check for folder existence under specific folder
This example send query to validate if specific folders located under a Directory path
Directory.Exists("C:\\New Folder\\SubFolder");
Extract the Root folder for specific path
This code will return the folder that host the folder supplied in method
string RootFolder = Directory.GetDirectoryRoot("c:\\1");
Note!
You can also extract the project current directory when using the “Directory.GetCurrentDirectory();” method.
Extracting dates for specific operations
This part will show how to receive dates for specific operations performed on a given folder
Creation Time:
DateTime FolderCreation2 = Directory.GetCreationTime("c:\\1");
Last modification time:
DateTime LastModificationTime = Directory.GetLastWriteTime("c:\\1");
Extracting the computer logical drives
This code will extract all logical drives located on computer, pay attention that we need to use an array to host the returned values.
string [] LogicalDrives = Directory.GetLogicalDrives();
foreach (var item in LogicalDrives)
{
Console.WriteLine(item);
}
Extract folder list under specific path
This code will demonstrate how to extract the folder tree under specific path
string [] ArrayOfDirectories = Directory.GetDirectories("c:\\");
foreach (var item in ArrayOfDirectories)
{
Console.WriteLine(item);
}
Note!
You can use search patter by using:
string[] SearchByPattern = Directory.GetDirectories("c:\\1", "FolderNmae");
Extract files list under specific path
This code will demonstrate how to extract the files tree under specific path
string [] ArrayOfFiles = Directory.GetFiles("c:\\1");
foreach (var item in ArrayOfFiles)
{
Console.WriteLine(item);
}
Note!
You can use search patter by using:
string[] ArrayOfFiles = Directory.GetFiles("c:\\1", "Filename"); | http://www.machtested.com/2013/08/c-working-with-file-system-objects-part.html | CC-MAIN-2017-47 | en | refinedweb |
Try/Catch/Finally
Part of HandlingErrorsCategory
Description
catch that error (don't let it catch you!)
Example
import std.asserterror; int main() { try { whatever(); } catch(AssertError) { printf("Whoa! Hold on there. An assertion failed.\n\n"); } finally { /* The finally block executes to allow clean-up of items allocated in the try block. */ printf("This would happen after the errors are dealt with (if there are any errors).\n\n"); } printf("If something happened, it wasn't enought of a problem to end the program.\n\n"); return 0; } void whatever() { assert(0); /* comment out this line to see what happens if no error occurs */ }
More Information
Try-Finally is also available in Java and C#. | http://dsource.org/projects/tutorials/wiki/TryCatchFinallyExample | CC-MAIN-2017-47 | en | refinedweb |
Arduino Weather Station Web ServerAli Hamza
Contents
First.
Program
#include <SoftwareSerial.h> //including the software serial UART library which will make the digital pins as TX and RX #include "DHT.h" //including the DHT22 library #define DHTPIN 8 //Declaring pin 8 of arduino to communicate with DHT22 #define DHTTYPE DHT22 //Defining type of DHT sensor we are using (DHT22 or DHT11) #define DEBUG true DHT dht(DHTPIN, DHTTYPE); //Declaring a variable named dht SoftwareSerial esp8266(2,3); //Connect the TX pin of ESP8266 to pin 2 of Arduino and RX pin of ESP8266 to pin 3 of Arduino. void setup() { Serial.begin(9600); esp8266.begin(9600); // Set the baud rate of serial communication dht.begin(); //This will initiate receiving data from DHT22 sendData("AT+RST\r\n",2000,DEBUG); // Reset the module sendData("AT+CWMODE=2\r\n",1000,DEBUG); // Configure ESP8266 as an access point sendData("AT+CIFSR\r\n",1000,DEBUG); // Get the IP address of ESP8266 sendData("AT+CIPMUX=1\r\n",1000,DEBUG); // Configure ESP8266 for multiple connections sendData("AT+CIPSERVER=1,80\r\n",1000,DEBUG); // Start TCP server at port 80 } void loop() { float hum = dht.readHumidity(); //Reading humidity and storing in hum float temp = dht.readTemperature(); //Reading temperature in celsius and storing in temp // Check if any reads failed and exit early (to try again) float f = dht.readTemperature(true); if (isnan(hum) || isnan(temp)) { Serial.println("Failed to read from DHT sensor!"); return; } float hi = dht.computeHeatIndex(f, hum); // Computing heat index in Fahrenheit float hiDegC = dht.convertFtoC(hi); // Converting heat index to degree centigrade if(esp8266.available()) // If any data is available to read (only when we request data through browser) { if(esp8266.find("+IPD,")) // Validating the request by finding "+IPD" in the received data { delay(1000); int connectionId = esp8266.read()-48; // Subtracting 48 from the character to get the connection id. String webpage = "<h1>Arduino Weather Station</h1>"; // Creating a string named webpage and storing the data in it. webpage += "<h3>Temperature: "; webpage += temp; //This will show the temperature on the webpage. webpage += "*C"; webpage += "</h3>"; webpage += "<h3>Temperature: "; webpage += f; webpage += "F"; webpage += "</h3>"; webpage += "<h3>Humidity: "; webpage += hum; webpage += "%"; webpage += "</h3>"; webpage += "<h3>Heat index: "; webpage += hiDegC; webpage += "*C"; webpage += "</h3>"; webpage += "<h3>Heat index: "; webpage += hi; webpage += "F"; webpage += "</h3>"; String cipSend = "AT+CIPSEND="; cipSend += connectionId; cipSend += ","; cipSend +=webpage.length(); cipSend +="\r\n"; sendData(cipSend,1000,DEBUG); sendData(webpage,1000,DEBUG); // Sending temperature and humidity information to browser //The following three commands will close the connection String closeCommand = "AT+CIPCLOSE="; closeCommand+=connectionId; // append connection id closeCommand+="\r\n"; sendData(closeCommand,3000,DEBUG); // Sending the close command to the sendData function to execute } } } //This function will send the data to the webpage String sendData(String command, const int timeout, boolean debug) { String response = ""; esp8266.print(command); // Sending command to ESP8266 module long int time = millis(); // Waiting for sometime while( (time+timeout) > millis()) { while(esp8266.available()) // Checking whether the ESP8266 has received the data or not { char c = esp8266.read(); // Reading response from ESP8266 response+=c; } } if(debug) { Serial.print(response); } return response; }
DHT22 Library
You need to manually add DHT library to Arduino IDE as it is not included by default. You can ignore it if you already added it. Otherwise you can do following steps for that.
- Download DHT Library from here : DHT Sensor Library.
- Open Arduino IDE.
- Go to Sketch >> Include Library >> Add .ZIP Library
- Select the downloaded ZIP file and press Open.
Working
After uploading the code to Arduino, open the serial monitor from the Arduino IDE. It should show you the IP address as shown below.
Then connect your system to the access point created by ESP8266 module (ESP_xxxxxx). Enter the IP Address in your web browser (Google Chrome or Mozilla Firefox). Now you can monitor temperature and humidity details.
Note : You may change the default SSID (ESP_xxxxxx) of ESP8266 WiFi module by using AT+CWSAP command. | https://electrosome.com/arduino-weather-station-iot/ | CC-MAIN-2017-47 | en | refinedweb |
In-app deep link routing
Overview¶
When a Branch link is opened, either your app launches or users are taken to the App/Play store to download it. Deep links improve this process by routing users directly to specific content after your app launches. With Branch, this still works even if users have to stop and download the app first (a.k.a., "deferred deep links").
Deep links are an incredibly important part of delivering a high quality user experience. With deep links, you can take users to the exact thing they clicked on or even offer a customized onboarding experience.
Option 1: Build custom routing inside the routing callback¶
Route immediately on app open¶
Inside the deep link handler callback that you register in initSession, you will want to examine the params dictionary to determine whether the user opened a Branch link. Below is an example assuming that the links correspond to pictures. Below are some examples from iOS and Android where we're using the
pictureId key to route, but you can see more code snippets for the other platforms here.
iOS - Objective C
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { // initialize the session, setup a deep link handler [[Branch getInstance] initSessionWithLaunchOptions:launchOptions andRegisterDeepLinkHandler:^(NSDictionary *params, NSError *error) { // start setting up the view controller hierarchy UINavigationController *navC = (UINavigationController *)self.window.rootViewController; UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Main" bundle:nil]; UIViewController *nextVC; // If the key 'pictureId' is present in the deep link dictionary // then load the picture screen with the appropriate picture NSString *pictureId = [params objectForKey:@"pictureId"]; if (pictureId) { nextVC = [storyboard instantiateViewControllerWithIdentifier:@"PicVC"]; [nextVC setNextPictureId:pictureId]; } else { nextVC = [storyboard instantiateViewControllerWithIdentifier:@"MainVC"]; } // navigate! [navC setViewControllers:@[nextVC] animated:YES]; }]; return YES; }
iOS - Swift
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { let branch: Branch = Branch.getInstance() branch.initSession(launchOptions: launchOptions, andRegisterDeepLinkHandler: {params, error in // If the key 'pictureId' is present in the deep link dictionary if error == nil && params["+clicked_branch_link"] != nil && params["pictureId"] != nil { print("clicked picture link!") // load the view to show the picture } else { // load your normal view } }) return true }
Android
@Override public void onStart() { super.onStart(); Branch branch = Branch.getInstance(); branch.initSession(new BranchReferralInitListener(){ @Override public void onInitFinished(JSONObject referringParams, Branch.BranchError error) { if (error == null) { // params are the deep linked params associated with the link that the user clicked before showing up // params will be empty if no data found String pictureID = referringParams.optString("picture_id", ""); if (pictureID.equals("")) { startActivity(new Intent(this, HomeActivity.class)); } else { Intent i = new Intent(this, ViewerActivity.class); i.putExtra("picture_id", pictureID); startActivity(i); } } else { Log.e("MyApp", error.getMessage()); } } }, this.getIntent().getData(), this); }
Branch-added parameters¶
In addition to any custom key/value pairs specified in the link data dictionary, Branch also returns some other useful parameters every time a session is initialized. These parameters will be returned every time
initSession is called, even if the user has not clicked on a Branch link. Here is a list, and a description of what each represents.
~denotes analytics
+denotes information added by Branch
- (for the curious,
$denotes reserved keywords used for controlling how the Branch service behaves. Read more about control parameters on the Configuring Links page)
Access deep link parameters later on¶
You can retrieve the deep link data at any time from the Branch singleton by calling one of the below methods. This would be the route to use if you wanted to deep link the user after prompting them to log in or something. You can see the code snippets for other platforms here.
Get latest session referring params¶
This returns the latest set of deep link data from the most recent link that was clicked. If you minimize the app and reopen it, the session will be cleared and so will this data.
iOS - Objective C
NSDictionary *params = [[Branch getInstance] getLatestReferringParams];
iOS - Swift
let sessionParams = Branch.getInstance().getLatestReferringParams()
Android
JSONObject sessionParams = Branch.getInstance().getLatestReferringParams();
Get first session referring params¶
This returns the first set of deep link data that ever referred the user. Once it's been set for a given user, it can never be updated. This is useful for referral programs.
iOS - Objective C
NSDictionary *params = [[Branch getInstance] getFirstReferringParams];
iOS - Swift
let firstParams = Branch.getInstance().getFirstReferringParams()
Android
JSONObject installParams = Branch.getInstance().getFirstReferringParams();
Option 2: Let Branch use your existing deep link routing¶.
Incomplete support on iOS
Universal Links and Spotlight do not support deep linking via URI paths. If you use
$deeplink_path or
$ios_deeplink_path, you will need to implement some custom logic. Click here for more information.
How to insert custom deep link routes into a Branch link¶
All of the examples below create links that will cause Branch to display
myapp://content/1234 after launch.
When creating links dynamically
If you're creating a link by appending query parameters, just append the control parameters to the URL. Please make sure to URL encode everything, lest the link will break.
"https://[branchsubdomain]?%24deeplink_path=content%2F1234"
When using a mobile SDK
iOS - Objective C
BranchLinkProperties *linkProperties = [[BranchLinkProperties alloc] init]; linkProperties.feature = @"sharing"; linkProperties.channel = @"facebook"; [linkProperties addControlParam:@"$deeplink_path" withValue:@"content/1234"];
iOS - Swift
let linkProperties: BranchLinkProperties = BranchLinkProperties() linkProperties.feature = "sharing" linkProperties.channel = "facebook" linkProperties.addControlParam("$deeplink_path", withValue: "content/1234")
Android
LinkProperties linkProperties = new LinkProperties() .setChannel("facebook") .setFeature("sharing") .addControlParameter("$deeplink_path", "content/1234");
When creating Quick Links on the Branch dashboard
You can specify the control parameters for individual Quick Links by inserting the keys and values into the Deep Link Data (Advanced) section.
How to handle URI paths with Universal Links or App Links¶
Because Universal Links, Spotlight and Android App Links do not use URI schemes for deep link routing. If you populate
$deeplink_path,
$ios_deeplink_path or
$android_deeplink_path with a URI path, you will need to a bit of additional work to ensure that Branch links route according to your original schema.
- Call initSession as described in the app configuration steps
- In the callback function, add some custom code to read the appropriate
$deeplink_pathparameter in the
params
- Use this value to call your existing routing logic to route users to the correct place in your app
Option 3: Use Branch's easy config deep link routing¶
Auto-routing in iOS¶
Configure View Controller to accept deep links¶
Open the view controller that you want to appear when a user clicks a link. For example, this could be a view to show a product. First, import the Branch framework:
Objective C
#import "Branch.h"
Swift
import Branch
Register your view controller for the delegate
BranchDeepLinkingController:
Objective C
@interface ExampleDeepLinkingController : UIViewController <BranchDeepLinkingController>
Swift
class ExampleDeepLinkingController: UIViewController, BranchDeepLinkingController {
Receive the delegate method that will be called when the view controller is loaded from a link click:
Objective C
@synthesize deepLinkingCompletionDelegate; - (void)configureControlWithData:(NSDictionary *)data { NSString *pictureUrl = data[@"product_picture"]; // show the picture dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:pictureUrl]]; UIImage *image = [UIImage imageWithData:imageData]; dispatch_async(dispatch_get_main_queue(), ^{ self.productImageView.image = image; }); }); }
Swift
func configureControl(withData params: [AnyHashable: Any]!) { let dict = params as Dictionary if dict["product_picture"] != nil { // show the picture } }
What is a link data key?
The example key
product_picture is a parameter from the data dictionary of the link that was clicked, and would have been defined when the link was created.
- Objective C
- (IBAction)closePressed { [self.deepLinkingCompletionDelegate deepLinkingControllerCompleted]; }
- Swift
var deepLinkingCompletionDelegate: BranchDeepLinkingControllerCompletionDelegate? func closePressed() { self.deepLinkingCompletionDelegate!.deepLinkingControllerCompleted() }
Register View Controller for deep link routing¶
Lastly, you need to tell Branch about the view controller you just configured, and which key it is using from the link's data dictionary.
Objective C
[branch initSessionWithLaunchOptions:launchOptions andRegisterDeepLinkHandler:^(NSDictionary *params, NSError *error) { if (!error && params) { // params are the deep linked params associated with the link that the user clicked -> was re-directed to this app // params will be empty if no data found // ... insert custom logic here ... print(@"params: %@", params.description); } }];
Swift:
Objective C
ExampleDeepLinkingController *controller = [[UIStoryboard storyboardWithName:@"Main" bundle:[NSBundle mainBundle]] instantiateViewControllerWithIdentifier:@"DeepLinkingController"]; [branch registerDeepLinkController:controller forKey:@"product_picture" withPresentation:BNCViewControllerOptionShow]; [branch initSessionWithLaunchOptions:launchOptions automaticallyDisplayDeepLinkController:YES];
Swift
var controller = UIStoryboard.init("Main", NSBundle.mainBundle()).instantiateViewControllerWithIdentifier("DeepLinkingController") branch.registerDeepLinkController(controller, forKey: "product_picture", withPresentation: .optionShow) branch.initSession(launchOptions: launchOptions, automaticallyDisplayDeepLinkController: true)
Now whenever your app launches from a Branch link that has the
product_picture key set in its data dictionary, the
ExampleDeepLinkingController view controller will be displayed!
Note
BNCViewControllerOptionShow or BNCViewControllerOptionPush option would only push a view controller if the root view controller of window is of type UINavigationViewController. Or else, the view controller would be presented by default.
Auto-routing in Android¶
Configure Activity to accept deep links¶
Open the Activity that you want to appear when a user clicks a link. For example, this could be an Activity to show a product. Insert the following code snippet to display your content when the Activity is loaded from a link click:
@Override protected void onResume() { super.onResume(); if (Branch.isAutoDeepLinkLaunch(this)) { try { String autoDeeplinkedValue = Branch.getInstance().getLatestReferringParams().getString("product_picture"); launch_mode_txt.setText("Launched by Branch on auto deep linking!" + "\n\n" + autoDeeplinkedValue); } catch (JSONException e) { e.printStackTrace(); } } else { launch_mode_txt.setText("Launched by normal application flow"); } }
What is a link data key?
The example key
product_picture is a parameter from the data dictionary of the link that was clicked, and would have been defined when the link was created.
Register Activity for deep link routing¶
Lastly, you need to tell Branch about the Activity you just configured, and which key it is using from the link's data dictionary. In your Manifest file, locate the definition for the Activity above and add this meta-data tag:
<meta-data android:
Now whenever your app launches from a Branch link that has the
product_picture key set in its data dictionary, this Activity will be displayed! | https://docs.branch.io/pages/deep-linking/routing/ | CC-MAIN-2017-47 | en | refinedweb |
imclient library¶
The imclient library functions are distributed with Cyrus IMAP and IMSP. These functions are used for building IMAP/IMSP client software. These functions handle Kerberos authentication and can set callbacks based on the keyword in untagged replies or based on the command tag at the end of command replies.
Users must link with the -lcyrus switch, and must supply a function called
The following code is a possible skeleton of imclient that relies on Kerberos to do authentication. This code performs an IMAP CAPABILITY request and prints out the result.
#include <cyrus/xmalloc.h> /* example uses xstrdup */ #include <cyrus/sasl.h> #include <cyrus/imclient.h> #include <stdio.h> extern struct sasl_client krb_sasl_client; struct sasl_client *login_sasl_client[] = { &krb_sasl_client, NULL }; struct imclient *imclient; char server[] = "cyrus.andrew.cmu.edu" ; char port[] = "imap"; void fatal(char* message, int rc) { fprintf(stderr, "fatal error: %s\en", message); exit(rc); } static void callback_capability(struct imclient *imclient, void *rock, struct imclient_reply *reply) { if (reply->text != NULL) { *((char**)rock) = xstrdup( reply->text ); } } static void end_command(struct imclient *connection, void* rock, struct imclient_reply *inmsg) { (*(int*)rock)--; } main() { char* capability_string; int nc; if (imclient_connect(&imclient, server, port)) { fprintf(stderr, "error: Couldn't connect to %s %s\en", server, port); exit(1); } if (imclient_authenticate(imclient, login_sasl_client, "imap" /* service */, NULL /* user */, SASL_PROT_ANY)) { exit(1); } imclient_addcallback(imclient, "CAPABILITY", CALLBACK_NOLITERAL, callback_capability, &capability_string, NULL); nc = 1; imclient_send(imclient, end_command, (void*) &nc, "CAPABILITY"); while(nc > 0) { imclient_processoneevent(imclient); } if (strstr("LITERAL+", capability_string)) { imclient_setflags(imclient, IMCLIENT_CONN_NONSYNCLITERAL); } imclient_send(imclient, NULL, NULL, "LOGOUT"); imclient_close(imclient); printf("capability text is: %s\en", capability_string); free(capability_string); }
See Also¶
cyradm(8), imapd(8), RFC 2033 (IMAP LITERAL+ extension), RFC 2060 (IMAP4rev1 specification), and select(2) | https://www.cyrusimap.org/imap/developer/libraries/imclient.html | CC-MAIN-2017-47 | en | refinedweb |
SoTextDetail.3iv man page
SoTextDetail — stores detail information about a text node
Inherits from
SoDetail > SoTextDetail
Synopsis
#include <Inventor/details/SoTextDetail.h>
Methods from class SoTextDetail:
SoTextDetail()
virtual ~SoTextDetail()
int32_t getStringIndex() const
int32_t getCharacterIndex() const
SoText3::Part getPart() const
SbBox3f getBoundingBox() const
SbXfBox3f getXfBoundingBox() const
static SoType getClassTypeId()
Methods from class SoDetail:
SoDetail * copy() const
virtual SoType getTypeId() const
SbBool isOfType(SoType type) const
Description
This class contains detail information about a point on a text shape (SoText2 or SoText3). It contains the part of the text, string, and character that were hit or generated.
Methods
SoTextDetail()
virtual ~SoTextDetail()
Constructor and destructor.
int32_t getStringIndex() const
Returns the index of the relevant string within a multiple-value string field of a text node.
int32_t getCharacterIndex() const
Returns the index of the relevant character within the string. For example, if the character of detail was the "u" within "Splurmph", the character index would be 3.
SoText3::Part getPart() const
For SoText3, this returns which part was picked or generated.
SbBox3f getBoundingBox() const
SbXfBox3f getXfBoundingBox() const
When the detail is returned from picking, these return the object-space bounding box of the character that was intersected. Otherwise, they return an empty box. The second method returns an SbXfBox3f instead of a SbBox3f. These methods are implemented only for 3D text.
static SoType getClassTypeId()
Returns type identifier for this class.
See Also
SoText2, SoText3, SoDetail, SoPickedPoint, SoPrimitiveVertex | https://www.mankier.com/3/SoTextDetail.3iv | CC-MAIN-2017-47 | en | refinedweb |
If you are looking for the setup guide, you can find it here.
If you are looking for the API docs, you can find them here.
Beginners guide
The beginners guide is a linear guide,
it follows a semi logical progression about understanding some of the core concepts while working with luxe. Things like building and running, assets, input and more.
#1 - Getting Started
#2 - Images and sprites
#3 - Sprites and animation
#4 - Text and audio
Feature guide
The feature guide is a piece by piece reference for specific features of the engine, designed to explain and teach what the engine can do.
gameplay
timers
wip -
audio - transforms - app timing - scene - physics - collision
rendering
sprites
sprite animation
color
render control
shaders
drawing
fonts
wip -
render batching - cameras - tilemaps - nineslice - particles - textures
systems
assets
events
components
utils
wip -
maths - states
#1 - Getting started
code for this guide is found in
samples/guides/1_getting_started/
guide outcome
An empty project
For now, as luxe is in development,
copy the
luxe/samples/empty folder as your starting point.
This will be automated in future.
Some editors (like HaxeDevelop) have a new project template already.
Basic Anatomy
A flow file?!
When you build a luxe app, it's built by flow, a build tool that reads a project file.
Your project file is called a flow file, and has the extension
.flow.
This is essentially the entry point to working with your project.
Open your
project.flow file and look inside, you'll find information specific to your project that you can configure.
Here's is an example of what that looks like:
Your game code
The second place to look, is the
src/Main.hx file, which is where your game begins. For reference,
src is short for
source code. Typically you would put your game haxe code in this folder.
A luxe app, in it's very basic form, is a single haxe class that extends from
luxe.Game. It has some functions that you override, like the
ready function, which is where you start coding your game.
If you ran this code, you would see a blank window.
We'll see how to run the game a bit further down.
class Main extends luxe.Game { override function ready() { //your game starts here } }
Aside from the
ready function, there are quite a few that you can override in order to handle luxe system events - like
onkeyup(event:KeyEvent) or
update(delta:Float). The first being when a key is released, and update is called every frame for you, so you can update your game logic.
The
config function
One important function in your game is the
config function. This function gets called for you before ready happens. This means that anything you configure here, will be setup when the game is ready.
It is important to note, the function serves to configure your game app before it launches, meaning that almost all of the engine systems are unavailable in here.
For now, you can see what the config options are and specify them to your liking. We'll use this function again in the next guide.
override function config(config:GameConfig) { config.window.title = 'luxe game'; config.window.width = 960; config.window.height = 640; config.window.fullscreen = false; return config; } //config
The empty Game class
Now that we're a bit more familiar, this is how the full empty game might look.
import luxe.GameConfig; import luxe.Input; class Main extends luxe.Game { override function config(config:GameConfig) { config.window.title = 'luxe game'; config.window.width = 960; config.window.height = 640; config.window.fullscreen = false; return config; } //config override function ready() { } //ready override function onkeyup(event:KeyEvent) { if(event.keycode == Key.escape) { Luxe.shutdown(); } } //onkeyup override function update(delta:Float) { } //update } //Main
Building and running a luxe game
Your project file contains the information necessary to instruct the build tool -
flow - on how to build your game.
flow is a command line tool but is integrated into editors like so that you can work from those instead, if you prefer. You can see how to configure that here in the setup guide.
For now, we will stick with the command line for reference on what is happening.
If you haven't installed the flow shortcut (via snowfall for example) then you need to add
haxelib run in front of each command below.
flow run
To run the game, you type
flow run into the command line and hit enter, which you do from within your project folder. This will run all the steps necessary to convert your code into an application, and then launch it.
If you want to build without running, use
flow build.
If you want to launch without building, use
flow launch.
You may need to run in debug mode to find the cause of runtime errors - to do that you would just append a
--debug flag to the command. The final command would be
flow run --debug.
Note that debug mode is significantly more expensive to run, making the game run less smoothly at times. If you aren't debugging a crash you would likely be ok without the debug flag.
Getting something on screen
Now that we have a blank window, let's draw a sprite and move it around!
To use code classes from luxe, we usually import them first. Imports go at the top of the file. Since we are going to make a sprite with a color and a position, we need the
luxe.Sprite,
luxe.Color and the
luxe.Vector class. You don't have to import them this way, since you can reference them by their full name like above if you prefer.
import luxe.Sprite; import luxe.Color; import luxe.Vector;
luxe options
In luxe, most constructors of objects specify their options in an object.
This solves many annoyances while working and allows you to succicntly specify only the arguments you care about, and let the rest be handled by sane defaults.
An options object is a regular haxe "anonymous object",
it would look like this in a simple example :
{ name:'a sprite' }
To hand it to the constructor, you just put it in between the parenthesis.
You'll see a full example just below this:
new Sprite({ name:'simple example' });
Creating a Sprite
For now, we will create a small orange block in the middle of the screen and give it a color, a size and a position. The name is optional, but it will make your project infinitely easier to work with if you name things from the start.
We will make our sprite and store it in a variable named
block. This is because we want to move the sprite later when the mouse moves, so we need to hold onto the sprite we create to do that.
The variable is declared inside the class, outside of the functions. Usually at the top, right under the "class Main" part.
Inside the ready function, we create the sprite and store it in our variable.
When you run this you should see an orange block in the centre of the screen!
var block : Sprite; override function ready() { block = new Sprite({ name: 'block sprite', pos: Luxe.screen.mid, color: new Color().rgb(0xf94b04), size: new Vector(128, 128) }); } //ready
Moving things around
The
onmousemove function
The Game class has a function you can override called
onmousemove. It also has
onmousedown and
onmouseup. Each of them take the same argument,
(event:MouseEvent). We can use the mouse move event to change the position of the block to match the mouse.
To do that, we can set the
pos property of the block directly, which is a
Vector. The vector also has an
x and a
y component that you can set too, for instance you can do
block.pos.x += 10; to shift it ten units.
Start by overriding the
onmousemove function, just below
ready, and use the event information to move the block:
override function onmousemove(event:MouseEvent) { block.pos = event.pos; // also valid: // block.pos.x = event.x; // block.pos.y = event.y; } //onmousemove
Updating things every frame
The last thing to learn in this guide is the
update event function,
we will rotate the block a small amount each frame.
To do this, the sprite class has a
rotation_z property for convenience. It's a value set in degrees for the sprite's rotation. If we want to animate the block spinning indefinitely, we simply add a small amount to this each frame.
One thing to note though, is that we want the value to update consistently over time. To make that happen, we scale the change by the delta (difference in) time. This scaling of time makes sure it adds the amount as "per second" rather than "per update", so if the game runs faster on a faster device, it won't spin faster.
Run this and it should behave like the animated image above.
override function update(delta:Float) { //if we add 40° each frame, and scale it by the delta, //it becomes 40° per second instead of 40° per update. block.rotation_z += 40 * delta; } //update
That's the basics of getting things on screen, updating things every frame, and using the override functions from the Game class to get key, mouse and other forms of input.
#2 - Images and sprites
code for this guide is found in
samples/guides/2_sprites/
guide outcome
Drawing images instead of blocks
This guide covers image loading, sprite flipping, pixel art scaling and mapping input.
Loading assets is typically an asynchronous concept, on many platforms this is an literal requirement. luxe handles assets asynchronously for you, making it easy to manage. The only important thing to remember when getting started, is that you can't use an asset that you have not loaded yet.
In luxe, the word
Textureis used and it just means “image”. It's just the term that the hardware rendering API's use. You will also notice the
phoenixpackage used,
phoenixis the name of the rendering backend that luxe is currently using.
A parcel is what luxe calls a list of assets, and is used to load and unload groups of resources.
The first method of loading an image, is to load it in the
config function. The config holds a default parcel for you, that you can quickly and conveniently put assets to be loaded before your ready function is called. This means you can use it right away.
As mentioned in the previous guide, the config function is before the engine is ready, meaning that you can't draw things (like a progress bar). The config parcel is for iterating quickly, and for assets that are needed before your game is loaded. For instance, a splash screen image would be loaded here, the displayed in ready. If your game assets are small enough, loading all of the assets this way is not a problem.
We add the image to the list of textures to be loaded in the
preload parcel. It has a
textures list, so we give it an object with an
id by adding it to the list.
override function config(config:luxe.GameConfig) { config.preload.textures.push({ id:'assets/stand.png' }); return config; } //config
When ready happens, the image is loaded, and accessed via the resources API.
//fetch the previously loaded texture! var image = Luxe.resources.texture('assets/stand.png');
Displaying the image
In
ready, we can set the filtering for pixel art, so that drawing it bigger doesn't blur it. We also calculate a good size that fits the current window, and create a sprite to display it.
We also handle some input binding, but we will get to that next.
override function ready() { //fetch the previously loaded texture! var image = Luxe.resources.texture('assets/stand.png'); //keep pixels crisp when scaling them, for pixel art image.filter_min = image.filter_mag = FilterType.nearest; //work out the correct size based on a made up ratio var ratio = 1.75; var height = Luxe.screen.h/ratio; var width = (height/image.height) * image.width; //create the actual visible player, give it the texture player = new Sprite({ name: 'player', texture: image, pos: new Vector(Luxe.screen.mid.x, Luxe.screen.h - (height/ratio)), size: new Vector(width, height) }); //set up keys and values for moving around move_speed = width * 3; connect_input(); } //ready
Basic input handling
To move the sprite around a bit, we will use one of a few methods to handle input. We saw the direct event functions last time, and another alternative type is to check a state value every frame. This usually means putting key bindings spread out around your code, making it a bit trickier to change later.
To make this type of input less hardcoded, luxe supports the notion of "named input bindings". This is what it sounds like, it binds an input event to a name. From there, you can handle the named events, rather than the raw input. This also let's us bind multiple keys to a single name, making our resulting code much simpler.
Here is how the
connect_input method would look. We only need left and right in this guide, so we bind two common mappings to those names:
function connect_input() { //here, we are going to bind A/left and D/right into a single named //input event, so that we can keep our movement code the same Luxe.input.bind_key('left', Key.left); Luxe.input.bind_key('left', Key.key_a); Luxe.input.bind_key('right', Key.right); Luxe.input.bind_key('right', Key.key_d); } //connect_input
Simple movement logic
We are going to keep things simple by shifting the player along the x axis, which looks pretty strange but we'll add animation in the next guide to make it appear less rigid.
To do that, we ask luxe during the update function if the input named "left" is pressed down. This type of input handling in luxe is called "immediate query", since you ask for an immediate result. If it returns true, it means the keys we bound were in a down state, so we move left.
Since our image is facing the other way when moving left, we also flip the sprite so it faces the direction we're moving in. The same applies for moving to the right. Again we see that we scale the movement by time, so that the movement is consistent.
override function update( delta:Float ) { if(Luxe.input.inputdown('left')) { player.pos.x -= move_speed * delta; player.flipx = true; } else if(Luxe.input.inputdown('right')) { player.pos.x += move_speed * delta; player.flipx = false; } } //update
And we're done!
In the next guide we'll add some animation and more.
#3 - Sprites and animation
code for this guide is found in
samples/guides/3_sprite_animation/
guide outcome
Animating a sprite
This guide covers sprite animation and a simple loading screen with a progress bar.
The previous guide mentioned that a parcel is a list of assets, which are loaded and unloaded as a group. We used the built in preload parcel from the config function, this time we'll make our own parcel and we'll also track the loading progress using the built in
luxe.ParcelProgress class. We import that and
luxe.Parcel.
We're going to load a background image, a player sprite sheet for the animation, and a json file which describes the animations for the player. We define a parcel with a list for json items, and a list for textures. When we load this parcel, it will load these items for use.
var parcel = new Parcel({ jsons:[ { id:'assets/anim.json' } ], textures : [ { id: 'assets/apartment.png' }, { id: 'assets/player.png' } ], });
We also want a very simple progress bar, and we use the
ParcelProgress class to get it. What happens is that the parcel emits events, telling anyone who wants to know, about what is happening with the parcel. Things like progress, failed items and completeness are sent out and the
luxe.ParcelProgress class simply listens in and updates the visuals accordingly.
To use it is simpler than that:
new ParcelProgress({ parcel : parcel, background : new Color(1,1,1,0.85), oncomplete : assets_loaded });
The
oncomplete field for the options object is a function to be called when the parcel has been loaded. In this case we make a function in our game class called
assets_loaded and continue in there. This makes the execution of our game go:
ready => [ parcel loading ] =>
assets_loaded
And while that happens, we see the progress on screen. We can now carry on with our game code inside
assets_loaded where our assets are now available to use. Before we do that though, inside
ready after creating the progress bar, we have to tell the parcel to load.
parcel.load();
after loading
The
assets_loaded function continues what
ready would do, so we create the world and it's contents, and connect the input like the previous guide.
function assets_loaded(_) { create_apartment(); create_player(); create_player_animation(); connect_input(); } //assets_loaded
The
_character is used to ignore arguments that you don't really care about on a function. Say you had a function like
add(a:Int, b:Int)but you only cared about
ayou could declare the function as
add(a:Int, _).
This is convenient when you have callbacks that give you information that you're not going to need yet, like above.
creating the player and the apartment
The background is just a sprite, and so is the player!
We're familiar with how to create those, so you can view the code sample to see the details.
Sprite frame animation
In luxe currently sprite animation is handled by a component which gets attached to an entity. A
luxe.Sprite is an entity and the
luxe.components.sprite.SpriteAnimation is a
Sprite specific component. You can only attach this component to a
Sprite.
A component is a design pattern that has been around for a long time, which composes game specific entity behaviours using modular pieces. For instance, a player might be composed of a
Health,
Hungerand
Thirstcomponents. A creature in this same game can also use these components, but only the first two.
This makes it powerful for creating variety and flexibility where entities can be anything at anytime, simply by their composition at the time, and not the code inside their class that's baked in.
Many engines employ this model, and there are quite a few variations of the pattern around. This guide talks about components in more detail.
creating and attaching a sprite animation component
By this point, we've created a
player variable, which is a
Sprite and can accept the animation component. We're going to want to keep hold of the animation component that is created, because we want to change the animation playing when moving around.
//create the animation component and name it anim var anim = new SpriteAnimation({ name:'anim' }); //add the component to the player sprite/entity player.add(anim);
An alternative approach is to leverage the fact that
add will return the component instance as well. They do the same thing, but offer a succinct way which clarifies code when creating multiple components in a row.
anim = player.add( new SpriteAnimation({ name:'anim' }) );
Defining some animation data
How do we tell it what our animations look like?
Sprite animations are usually stored in packed images, which puts the frames of the animation in a single image.
Here is our player sprite sheet, with an idle and walk animation stored as frames. This particular sprite was created quickly for a previous project by the talented andrio.
Note that sprite sheets aren't required, you can use separate textures but this is typically inefficient. See
tests/features/sprite_animation.
To tell the animation component where each animation is and how it works, we will use a json file for convenience.
{ "idle" : { "frame_size":{ "x":32, "y":73 }, "frameset": ["1-3", "hold 2", "4","2-1", "hold 10"], "loop": true, "speed": 8 }, "walk" : { "frame_size":{ "x":32, "y":73 }, "frameset": ["5-10"], "loop": true, "speed": 9 } }
You can see that the json frame sets are quite expressive. It allows timing to be expressed through frame numbers. The speed parameter is frames per second. Note that frame numbers in images always start at 1. There is no frame 0 in an animation.
finalizing the animation
Most of this should be self explanatory and has comments, we use the resource manager to get our parcel items we loaded, we add the component, and we define our animations using the json. Once done, we set the animation by name to the one we want, and we tell it to play!
function create_player_animation() { //create the animation component and add it to the sprite anim = player.add( new SpriteAnimation({ name:'anim' }) ); //create the animation from the previously loaded json var anim_data = Luxe.resources.json('assets/anim.json'); //create the animations from the json resource anim.add_from_json_object( anim_data.asset.json ); //set the idle animation to active anim.animation = 'idle'; anim.play(); } //create_player_animation
Changing animations for walking
A lot of the movement code hasn't changed, so we'll only focus on the animation differences.
We saw the
animation property set to
idle earlier, so we can change it to
walk while they're moving:
//set the correct animation if(moving) { if(anim.animation != 'walk') { anim.animation = 'walk'; } } else { if(anim.animation != 'idle') { anim.animation = 'idle'; } }
And that's how to animate things!
The full code sample is linked at the beginning of the guide and continues on from the previous guide. The next guide doesn't follow the same code but uses a lot of the concepts already introduced.
#4 - Text and audio
code for this guide is found in
samples/guides/4_text_and_tweening/
guide outcome
:todo:
This guide has not been written but the code sample is commented well.
Gameplay guides
Timers and schedules
Luxe.timer
To schedule things ahead of time, you have two options.
- use
Luxe.timer(returns a
Timerinstance)
- use
snow.utils.Timerclass directly
A comprehensive example of this is demonstrated in beginner's guide #4.
A schedule is given a haxe function that will be called at a later time. In luxe, time is always in seconds.
var timer = Luxe.timer.schedule(time, function() { trace("This code happens in the future"); });
stopping or cancelling a timer
To stop the timer, you need the timer instance itself returned from
schedule.
timer.stop();
future goals
- pause
- pausing all or a single timer
- timer grouping
- i.e controlling game specific timers vs menu specific timers
Rendering guides
Sprite features
The beginners guides cover basic sprite usage.
The
Sprite class extends the
luxe.Visual class, which is a geometry container.
Sprite is also a
luxe.Entity, so it can accept
luxe.Component attachments.
The sprite class is a Quad based geometry, and facilitates common actions with a quad based, textured sprite. If you want non quad geometry, use
luxe.Visual instead, as sprite is a quad specialization.
In concept a sprite is 2D, but no restriction on 3D rotation or positioning is applied. All 2D helpers will only affect x/y relative properties of the sprite.
Sprite specific features
centered
By default, the sprite origin will be centered. By setting the centered flag to false, it will be top left instead. The centered flag is used only when a custom origin is not specified, it will not override the explicit origin.
The centered flag sets the transform
origin to
size/2.
flipx/flipy
The
flipx and
flipy flags will flip the geometry along it's own x or y axis respectively. Flipping works by changing the UV coordinates of the texture, based on the existing uv coordinates. Comes from
phoenix.geometry.QuadGeometry.
If the flip flag is already setting, setting it twice has no effect.
size
The size of the geometry in units, which allows setting a baseline size of the geometry. This differs from the scale transform as it is in units, allowing simpler scaling through a preset size. Comes from
Visual (note this shouldn't be there).
uv
A rectangle in texture pixels for the UV coordinates of this sprite. Can be animated through setting the properties of the UV or assigning a new UV.
Comes from
phoenix.geometry.QuadGeometry.
rotation_z
A convenience for setting the 2D rotation (around the z axis) in degrees. Will also change
radians to match. Comes from
Visual.
radians
A convenience for setting the 2D rotation (around the z axis) in radians. Will also change
rotation_z to match. Comes from
Visual.
SpriteAnimation features
The beginners guides cover the basic sprite and sprite animation usage. The
SpriteAnimation class extends the
Component class, allowing it to be attached to a
Sprite. This component can only be attached to a
Sprite or child instance.
Animation type
The
SpriteAnimation component supports
- uv animation from a packed sprite sheet
- image sequence from separate textures
Controlling playback
For each example, anim is an instance of a
SpriteAnimation
get/set animation
//set anim.animation = 'name'; //get var name = anim.animation;
Current animation
get/change speed
//set anim.speed = 25; //get var speed = anim.speed;
set a specific frame
Uses frame index, not image frame.
anim.set_frame(6);
control playback
//reset the animation to the first frame anim.restart(); //play/resume the animation anim.play(); //stop/pause the animation. Does not reset the frame. anim.stop();
add/remove frame events
Frame events allow the animation to tell you when it reached specific key frames during playback. These can be used to spawn particles, play sounds and so on.
//add event at frame 6 anim.add_event('animation', 6, 'event_name'); //remove a specific event from this frame anim.remove_event('animation', 6, 'event_name'); //remove all events from this frame anim.remove_events('animation', 6);
Animation JSON Data
The animation frame JSON data consists of the following properties :
UV & Image sequence common settings
pingpong: Bool
- default: false
- if true, the animation will reverse at the last frame
- 1 2 3 2 1
loop: Bool
- default: false
- if true, the animation will continue to loop
- 1 2 3 1 2 3 etc
- can combine with pingpong
reverse: Bool
- default: false
- if true, animation will play in reverse
- can combine with pingpong and loop
- 3 2 1
speed: Float
- default: 2
- frames per second to play at
- can be 0
events: Array of { frame:Int, ?event:String }
- fires named event on frame into the attached sprite
- IF event is not given:
- {animation}.event.{frame}, i.e "walk.event.5"
- handler given SpriteAnimationEventData
frameset: Array of String
- required
- sequential list of frames or frame actions:
- range : "1-10"
- frame : "1"
hold nhold current for n frames of time : "hold 10"
f hold nhold specific frame f for n frames : "1 hold 10"
frame_size: { x:Int, y:Int }
- the size of a frame in texture pixels
- acts as the default for frame_sources, if any
frame_sources: { frame:Int, pos:{ x:Int, y:Int }, size:{ w:Int, h:Int }, source:{ x:Int, y:Int, w:Int, h:Int } }
- optional
- per frame custom size, position and source uv rect
- uses frame_size in place of missing info
Image sequence type only
image_sequence: String
- name of a sequence of images :
- i.e assets/idle => assets/idle0.png ... assets/idleN.png
- will search for :
name_0,
name-0,
name0patterns
filter_type: String
- "linear" or "nearest", to set when loading the sequence
Manipulating color in luxe
Color is a very important tool in games, and being able to smoothly transition colors is important.
When dealing with RGB color as is the default color type, it tends to break when you try and animate between two colours. The solution to this, is different color models, such as HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness).
luxe supports both HSL and HSV interchangeably with the default of RGB (and each support alpha as well).
What is the difference?
HSV and HSL are cylindrical in nature, that means that they are round and their color value works in degrees(º) rather than components.
Take a look at the image below, this is the color wheel (Hue only), mapped to degrees.
How this helps
Notice how if we want to transition from red (danger!) to orange (warning) on a flashing UI element, it's around 30º of movement?
To animate that using Hue (the above color wheel) it is simple, we animate the hue value.
Color classes have their own convenience tween function
One thing you will notice is that the color classes have their own tween function for convenience.
//We create our red color using 0º Hue (red), //the second and third argument are saturation and value, //which we set to "maximum" right now. var color = new ColorHSV(0, 1, 1); //Now we want to animate to orange, just change the hue //over two seconds, to orange color.tween(2, { h:30 });
By mapping the colors to a round cylinder it affords much smoother transitions between colors, much smoother than RGB can do.
Saturation, Lightness and Value
Color can be quite a complex system, and has mathematical properties outside of the scope of this simple guide. If you want to get in depth details into the mechanics of color , this article on wikipedia is quite thorough.
Now - for simplicity sake - we will define the terms in a less exact way. Take a solid color at hue 30º like above.
- Value - The color approaches black when value is lowest
- Lightness - The color approaches white when the lightness is highest
- Saturation - The amount of color present (like draining the color away)
Have a look at these graphs from Wikipedia for a clearer view.
Creating and using the different types
Now that you hopefully understand the purpose and differences, we can look at how to work with them in luxe.
//defaults for r g b and a are 1 (full white) var color: Color = new Color( r, g, b, a ); //Defaults for h = 0, s = 0, v = 1, a = 1 (full white) var colorhsv: ColorHSV = new ColorHSV( h, s, v, a ); //Defaults for h = 0, s = 1, l = 1, a = 1 (full white) var colorhsl: ColorHSL = new ColorHSL( h, s, l, a );
This is for creating, but how about switching between?
Changing color type
All types are extended from the
Color class, so they automatically work where
Color is expected. For example, a sprite color is typed as
Color but a
ColorHSL or
ColorHSV can be given in place -
sprite.color = colorhsl;
This makes all types interchangeable automatically, but the following functions are exposed as well.
// helpers on Color color.toColorHSL //returns ColorHSL color.toColorHSV //returns ColorHSV color.fromColorHSL //changes color color.fromColorHSV //changes color //helpers on ColorHSL colorhsl.toColor //returns Color colorhsl.toColorHSV //returns ColorHSV colorhsl.fromColor //changes colorhsl colorhsl.fromColorHSV //changes colorhsv //helpers on ColorHSV colorhsv.toColor //returns Color colorhsv.toColorHSL //returns ColorHSV colorhsv.fromColor //changes colorhsv colorhsv.fromColorHSL //changes colorhsv
Render order and sorting
The rendering works by sorting items according to the following high level rules :
- Renderer
- batchers,
- sorted by
layerproperty
- geometry, sorted by an order sort
depth
- shader
- texture
- primitive
- clipping
- age
In general use, you control specific batches of items using a
Batcher, specifying it's layer for overall order. Then, you specify geometry depths, the rest is automatic.
Geometry depth
In luxe, "geometry depth" is not the same as the geometry z position in world space. The depth is a render tree depth, controlling render order explicitly.
If you want to draw spriteA above spriteB, setting spriteA depth to 2, and spriteB depth to 1, it will always rendering second.
depth values
The depth value is a floating point number which is convenient for "last minute" depth control, allowing finely grained details to matter. Some examples can be "all hud elements are between 10 and 11", where 10.1 is hud background, 10.2 is hud buttons, 10.3 is hud text and so on.
Because of this,
10.1 is a valid depth and so is
10.142325. The granularity is subject to floating point errors, so try not to go too small here or you may get different sorting.
uses
This is especially useful when render depth is calculated dynamically. In a 2D front down view, the depth can simply be the Y position in world space, and as avatars move through the space they update their depth to their position. It also allows division to be used to calculate sorting, which is helpful in situations where depth is calculated on the fly.
As the depth value is the first rule, it can also be used when working with depth testing and transparent objects. You could split transparent objects into a separate batcher, and control it as a whole, or you can separate it using the depth values, ensuring the render order is respected.
Render callbacks
In your game class you can override the
onprerender,
onrender and
onpostrender events.
These are for all rendering that happens, not a specific subset of geometry. This is useful if you want to do some explicit rendering into render targets before the rest is processed.
Group render callbacks
In luxe, with phoenix (the rendering engine in luxe) you can listen for events from the batcher that geometry is in, in order to alter render state.
This allows you to specify render state (like blend modes) or control rendering into textures explicitly through the use of the callbacks.
Take a look at this example, we ask the default batcher to tell us when it is being rendered.
override function ready() { Luxe.renderer.batcher.on(prerender, before); Luxe.renderer.batcher.on(postrender, after); } //ready function before(_) { //change how this group is blended Luxe.renderer.blend_mode( BlendMode.dst_color, BlendMode.one_minus_src_alpha ); } //before function after(_) { //reset to default blend mode Luxe.renderer.blend_mode(); } //after
And the results would be similar to :
Blending and blendmodes are a frequent topic in rendering and you can explore the different types here :
Anders Riggelsen blend modes online tool
Rendering a batcher to a texture
You could also use the render callbacks to switch the rendering target.
If you listen for the callbacks as above, you can set and unset the target easily.
function before(_) { Luxe.renderer.target = my_render_target; } //before function after(_) { //reset to default render target Luxe.renderer.target = null; } //after
For a clearer example of rendering to a texture, see
tests/rendering/rendertexture
Shaders in luxe
What are shaders exactly?
I have written a comprehensive primer to shaders here!
Shaders : A primer
The series talks about shaders in various ways,
but for the luxe specifics, we'll look at how to use them.
Using shaders in luxe
luxe makes using shaders easier by wrapping up the details, allowing you to load and apply shaders to your sprites and geometry easily. It also makes it simple to send information into the shaders.
Take a look at the code from the demo, and you should be able to follow along as to what is happening. As with other resources, you can't use it unless it is loaded. You can load shaders via a parcel, or via the resource manager.
//we create a variable to hold the shader var hue_shader : Shader; //and then we fetch the already loaded shader hue_shader = Luxe.resources.shader('hue'); //then we tell the sprite to use this shader when rendering hue_sprite.shader = hue_shader;
In our example, when we move the mouse, we send some information to the shader to change the color.
override function onmousemove( e:MouseEvent ) { var percent = e.pos.x / Luxe.screen.w; var hue = (Math.PI*2) * percent; //hue based on mouse x hue_shader.set_float('in_hue', hue); }
shader code
The shaders in luxe currently use GLSL, the OpenGL Shading Language across all targets. There are many resources around for shaders and GLSL specifically so this guide won't need to cover that. The test code has some simple example shaders you can learn from.
built in shaders
The default shaders the engine uses are in
phoenix/defaults/shaders.
Drawing shapes as geometry
Luxe supports drawing a few shapes by default, each with some very similar options available. This is essentially a convenient way to draw debugging information quickly, and allows immediate (draw once) rendering.
These are accessed via the
Luxe.draw api, and return a
phoenix.geometry.Geometry instance. Since geometry is lower level than say, a
Sprite, it is often convenient to use the draw API as more of a factory for the geometry, that you then use in a higher level container (like a
Visual).
- line
- rectangle, box
- ring, arc, circle, pie
- text
There is a pattern here - a
rectangle is an outline, and a
box is filled in. A ring is an
outline, a circle is
solid. An arc is an
outline, a pie is
solid.
using the geometry in a visual
It's convenient to be able to treat the geometry as a higher level object, like an
Entity which is what
Visual is for. You can use the draw API to create visuals by handing it to the
geometry property. We use 0,0 as the position because it will be centered on the visual's transform.
It's important to note though, the the visual is what you're creating as the result of this, so there are some properties of the geometry created by the draw API that get overridden. For example, the color property is set on the visual level. If it was set on the geometry draw create step, it will have no effect.
var visual = new luxe.Visual({ color : new luxe.Color(0.8, 0.3, 0.2, 1), geometry: Luxe.draw.ring({ x : 0, y : 0, r : 50 }) }); visual.pos = Luxe.screen.mid;
examples
Drawing a line:
var mid_y = Luxe.screen.h/2; Luxe.draw.line({ p0 : new Vector( 0, mid_y ), p1 : new Vector( Luxe.screen.w, mid_y ), color : new Color( 0.5, 0.2, 0.2, 1 ) });
And a rectangle :
Luxe.draw.rectangle({ x : 10, y : 10, w : Luxe.screen.w - 20, h : Luxe.screen.h - 20, color : new Color( 0.4, 0.4, 0.4 ) });
And a circle :
Luxe.draw.circle({ x : Luxe.screen.w/2, y : Luxe.screen.h/2, r : 50, color : new Color( 0.8, 0.3, 0.2, 1 ) });
Fonts and text
Creating custom bitmap fonts
The fonts currently supported by luxe are in a "AngelCode BMFont" format. This format has become widespread and many tools now exist to create fonts easily for it.
Some tools to generate a font, but Littera is the best free choice :
- Littera, online, free
- bmGlyph, mac only, commercial
- glyphdesigner, mac only, commercial
- BMFont, windows only, free
If you want to color the text in luxe using geometry colors, you must create a white fill solid text, alpha.
Export the fonts as .fnt text based format.
Importing custom fonts
To use a custom font, you can use a parcel (as shown in the beginner guide), or, you can manually load the font yourself.
To do so, you use
Luxe.resources.load_font. You can manually create a
phoenix.BitmapFont and use
BitmapFont.load, or even
new BitmapFont() and
from_string functions. The BitmapFont API docs have all the details.
To use
Luxe.resources.font and
Luxe.resources.load_font you should also read the assets guide.
Take note that the folder is separated from the file name, because there can and often are multiple texture sheets for a font set. The name is always without a path.
A more thorough example can be found in
tests/rendering/fonts/ and
tests/features/text/,
tests/features/text2/
override function ready() { var get = Luxe.resources.load_font('assets/fonts/font.fnt'); get.then(function(font:BitmapFont) { Luxe.draw.text({ font: font, text : "LUXE\nLUXE", bounds : new Rectangle( 0, 0, Luxe.screen.w * 0.99, Luxe.screen.h * 0.98 ), color : new Color().rgb(0xff4b03), align : TextAlign.right, align_vertical : TextAlign.bottom, point_size : 32 }); }); //onload }
system guides
Assets system
The following asset types are supported directly by the API for convenience :
- textures ( images, png, jpg, tga, psd, gif, bmp )
- text assets ( any format, xml, txt etc )
- json assets ( parses and returns a usable json object )
- sound assets ( audio, ogg/wav/pcm )
- bitmap fonts ( a text .fnt description + image files see font guide )
- shader files ( glsl shaders, with custom vertex or fragment shaders )
- binary files ( any binary byte data )
Async
By design, you should always consider asset loading to be asynchronous.
If you load assets in a background thread, you wait. If you load assets on web, you wait. Many game consoles load data async too. For this reason, and for portability, assets are treated as asynchronous.
The separation between loading, and using an asset is important.
To use an asset, it must already be loaded. The resource manager employs the concept of Promises and Parcels to make this really simple,
and the luxe Game class includes an easy way to quickly load dev assets for prototyping/jamming.
Using assets
Once an asset has been loaded (see below) it is stored in the resource manager.
To retrieve a stored asset by it's id, the following functions are available:
Luxe.resources.bytes
Luxe.resources.text
Luxe.resources.json
Luxe.resources.texture
Luxe.resources.font
Luxe.resources.shader
Luxe.resources.sound
If the asset does not exist (i.e is not loaded) then the function returns null.
Since these functions return concrete types, you can make the code using the resource imperative and deterministic. For example, this code can rely on the asset being loaded, if I ensured that it was ahead of time.
var sprite = new luxe.Sprite({ pos: Luxe.screen.mid, texture: Luxe.resources.texture('assets/image.png') });
Assets need to be loaded before they can be used.
Luxe.resources has a bunch of
load_* functions for this purpose.
Luxe.resources.load_bytes
Luxe.resources.load_text
Luxe.resources.load_json
Luxe.resources.load_texture
Luxe.resources.load_font
Luxe.resources.load_shader
Luxe.resources.load_sound
These functions return something called a Promise, which promises a value when it's ready. This just means that it will call a function for you, when the asset has finished loading.
var load = Luxe.resources.load_texture('assets/image.png'); load.then(function(texture:phoenix.Texture) { //now use the texture value trace('Loaded texture ${texture.id} with size ${texture.width}x${texture.height}'); });
To handle multiple returned load promises, you can use
Promise.all from
snow.api.Promise. Take note that the array returned from Promise.all will be typed as
Dynamic if mixing resource types.
var list = [ Luxe.resources.load_texture('assets/image.png'), Luxe.resources.load_texture('assets/image2.png'), Luxe.resources.load_texture('assets/image3.png') ]; var load = Promise.all(list); load.then(function(loaded:Array<phoenix.Texture>) { for(image in loaded) { trace('Loaded texture ${image.id}'); } });
Resource content
Some of these load functions return a Resource instance,
which contains the asset data within it.
For example, to access the loaded text data from a
load_text call, it will give you a
TextResource, which contains a
TextAsset. To access the value, you would use
loaded_text.asset.text.
The resource API docs have further details.
A parcel is simply a related group of assets that would like to load together.
This can include all the assets for your game, or just a subsection like a specific level.
A parcel includes the same default types as listed above,
bytes,
texts,
jsons,
textures,
fonts,
shaders and
sounds. These are simple arrays, that you fill when creating a parcel before calling the load function.
Taken from a tutorial guide:
var parcel = new luxe.Parcel({ jsons:[ { id:'assets/anim.json' } ], textures : [ { id: 'assets/apartment.png' }, { id: 'assets/player.png' } ], });
We could now call
parcel.load() which would load the assets,
but we would probably like to see a progress bar.
There is a simple default one built in, but you can feel free to implement one yourself.
//this will call a function named assets_loaded when done new luxe.ParcelProgress({ parcel : parcel, background : new Color(1,1,1,0.85), oncomplete : assets_loaded }); //load the assets parcel.load();
The default preload parcel
To aid rapid development and for convenience,
the
Game class (your application) offers you a “default preload parcel”.
This parcel is loaded before anything happens, and assets in it will be available before
ready is called.
Take note that the preload parcel serves a simple purpose:
- Handle really early asset/data dependency
- Rapid development to avoid creating a parcel manually (yet)
It is not suited for preloading game content in general,
- It does not include a progress bar (no renderer available yet)
- Failed assets in this early loader are fatal, the application stops
This makes it good for early required data, less so for loading levels and menus and so on.
To use the preload parcel, you fill in the parcel arrays within your config function.
override function config(config:luxe.GameConfig) { config.preload.textures.push({ id:'assets/logo.png' }); config.preload.jsons = [ { id:'assets/1.json' }, { id:'assets/2.json' } ]; return config; } //config
Events, Signals, Messages?
One common method of communicating between game systems, a very powerful method of development, is called event driven design.
Event driven design, often also referred to as signals and slots, or messaging systems and are simple in principle. They allow code to listen or attach to messages (also called events or signals) from elsewhere in the code.
If you think about it like a radio station, it's sent out to people who may choose to listen to a particular channel. The events are sent whether there is any code listening, and there can be multiple listeners on a single "channel".
This model allows systems some encapsulation and decoupling from one another. It also allows more adaptive changes to the code based on the changes that happen at runtime. All possible combinations don't have to be connected in advanced or coded in place.
Let's look at a simple example.
The player is losing health
Imagine a game where your player can take damage from a projectile, an arrow fired by an enemy. Here is some pseudo code to imagine what would happen, when the player has collided with the arrow.
// arrow update code: // check if we are going to hit something? for(entity in range_of_collision) { if(entity.collides_with(this)) { //we have hit an entity! //we will assume this is the player and convert it var player: Player = cast entity; player.take_damage( damage_amount ); //... } }
Now, what if the entity was another enemy? What if we don't want to do maximum damage to other enemies?
if( entity.collides_with( this ) ) { //we have hit an entity! if( Std.is(entity, Player) ) { var player : Player = cast entity; player.take_damage( damage_amount ); } else if( Std.is(entity, Enemy) ) { var enemy : Enemy = cast entity; enemy.take_damage( damage_amount * 0.5 ); } }
Now what happens when the arrow hits a wall entity? What if there are different types of walls? Or different types of enemies? This can quickly spiral into many needless type checks and make this code very specific and hardcoded. It has to have code to handle every single case, which introduces a large amount of complexity and bug potential.
Let's try the evented approach.
The player is losing health event
Events make this example a lot more elegant and flexible. Instead of handling specifics, we'll use the entity specific events instance to send it a message. "Hey, whatever type of entity you are, if you are listening for this event, you are taking damage".
if(entity.collides_with(this)) { entity.events.fire('take_damage', { from:this, amount:damage_amount }); }
Now, no matter what the entity is - it is up to the entity (encapsulated, decoupled from the arrow!) to handle the situation. This includes ignoring the event as well.
//Inside the Player class override function init() { events.listen('takes_damage', on_take_damage); } function on_take_damage( data:DamageEvent ) { //from, and amount are available //we can also handle game specific situations here, like //if there was invincibility, or a damage reduction buff health -= data.amount; check_health(); }
And how about on the walls?
//When taking damage, handle the arrow differently by wall type. //In this fake example, we can reflect or explode the arrow //(assuming the take damage is only from arrows here, see below) function on_take_damage( data:DamageEvent ) { switch(wall_type) { case WallType.reflective: data.from.reflect(); case WallType.normal: data.from.explode(); } }
Important notes
There are two ways to fire and event,
events.fire will immediately call any listeners, and
events.queue will store the event in a queue for the next frame update. The distinction is important for ordering, as well as immediacy of events (like an input event is more important than other events).
You can remove a listener from an event using the unique ID that the listen function returns. This is important to manage your events so that you don't end up accidentally handling events at the wrong time - like the player shooting arrows while the menu is open because the event remained connected.
//connect a listener var listen_id = events.listen('event', function(){ } ); //this will remove just this listener events.disconnect( listen_id );
Being more specific
This example could use some more specifics, for example, the player will handle a 'takes_damage' event from more than just arrow projectiles.
For this, you can use event namespaces, and wildcards in the events. Let's make this more specific, first.
//In the arrow collision check entity.events.fire('take_damage.arrow', { from:this, amount:damage_amount }); ... //Inside the Player class override function init() { events.listen('takes_damage.arrow', on_take_damage_from_arrow); events.listen('takes_damage.explosion', on_take_damage_from_explosion); } function on_take_damage_from_arrow( data:ArrowDamageEvent ) { ... } function on_take_damage_from_explosion( data:ExplosionDamageEvent ) { ... }
Being less specific
If you wanted to listen for all the takes_damage events , you can use wildcards.
events.listen('takes_damage.*', on_take_damage_from_any );
You can also use the wildcard elsewhere, like this :
//game.player.ui, game.menu.ui, game.health.ui events.listen('game.*.ui', on_any_ui_events );
They can even be used for more complex event listeners, like "player enters swamp",
events.listen('(player)*(swamp)', on_entering_swamp );
Global vs Local events
A good example of a global event is when the user presses the pause key.
This single action has ramifications across the entire game, right down to the animation system and in the menu code and the game logic - an easy way to tell every system that want's to know when the game is paused, is by using events.
Luxe.events.queue('game.pause'); Luxe.events.queue('game.unpause'); ... Luxe.events.listen('game.pause', on_game_pause); Luxe.events.listen('game.unpause', on_game_unpause);
As mentioned above, many listeners can listen for a single event, and can react accordingly.
All of the above examples were sending events directly INTO an entity, only that entity would see it. There is also a way to send messages globally, for every class/function to listen for in the entire game. Let's go back to the example of the player taking damage from anything, and tell the entire game that there was damage lost.
//In the player class, //we use the GLOBAL events to tell //the rest of the game that we've taken //any damage from any type of event override function init() { events.listen('takes_damage.*', function(e) { Luxe.events.fire('game.player.damage', e); }); } ... //In the UI class, somewhere else, we can react //to the player getting hurt event sent out Luxe.events.listen('game.player.damage', function(e) { //Flash the screen red, etc //shake the camera 10% of the damage amount Luxe.camera.shake( e.amount * 0.1 ); });
Wrapping up
As you can see, events are powerful and meaningful and can be used for almost anything. You can always create your own instance of
luxe.Events and have many local events systems (though, entities already have one built in!).
In depth details
If you are wonder just exactly what happens with the filtering, here is what it is doing,
public function does_filter_event( _filter:String, _event:String ) { var _replace_stars : EReg = ~/\*/gi; var _final_filter : String = _replace_stars.replace( _filter, '.*?' ); var _final_search : EReg = new EReg(_final_filter, 'gi'); return _final_search.match( _event ); } //does_filter_event
Below are some more examples in a test case to demonstrate more uses of the event system.
Examples
Since event names are string, you can group events by a delimeter,
i.e
Luxe.events.listen('game.player.*'), which can be used to filter events by type.
import luxe.Vector; import luxe.Input; import luxe.Entity; typedef HealthEvent = { amount : Float } typedef DiedEvent = { attacker : String } typedef SpawnEvent = { spawn_node : String } class Main extends luxe.Game { var entity : Entity; public function ready() { //Global events connections Luxe.events.listen( 'global event' , function(e){ trace("Global Event Fired"); }); //Connect global to local event Luxe.events.listen( 'local event' , function(e){ trace("Should not print"); }); //Local to entity event connections entity = Luxe.scene.create(Entity,'temp'); entity.events.listen('local event', function(e){ trace("Local Event Fired"); }); entity.events.listen('player.*', function(e){ trace('player event happened!'); trace('it was `' + e._event_name_ + '` which has ' + e._event_connection_count_ + ' listeners!'); }); entity.events.listen('player.health.loss', function( e:HealthEvent ){ trace(' ouch! I lost ' + e.amount + ' health :('); }); entity.events.listen('player.health.gain', function( e:HealthEvent ){ trace(' woo! I got ' + e.amount + ' hp'); }); entity.events.listen('player.died', function( e:DiedEvent ){ trace(' oh snap! I was killed by ' + e.attacker ); }); entity.events.listen('player.spawn', function( e:Main.SpawnEvent ){ trace(' ok, letsdoodis, now at ' + e.spawn_node ); }); trace('PRESS SPACE TO FIRE EVENTS'); //Events class exposes the filter function //to test and learn how it works trace(does_filter('game.*', 'game.player.test')); trace(does_filter('game:player:*', 'game:player:health')); trace(does_filter('game.*.player', 'game.ui.player')); trace(does_filter('game.*.player', 'game.death.player')); trace(does_filter('game.*.player', 'game.death.test')); trace(does_filter('*.player', 'ui.player')); trace(does_filter('*.player', 'health.player')); trace(does_filter('*.player', 'derp.plea')); trace(does_filter('(player)*(house)', 'player inside house')); } //ready //shortening the lines above inline function does_filter(filter:String, event:String) { return Luxe.events.does_filter_event(filter, event); } public function onkeyup(e) { if(e.value == Input.Keys.escape) { Luxe.shutdown(); } if(e.value == Input.Keys.space) { Luxe.events.fire( 'global event' ); entity.events.fire( 'local event' ); entity.events.fire('player.health.gain', {amount:10}); entity.events.fire('player.health.gain', {amount:23}); entity.events.fire('player.health.loss', {amount:60}); entity.events.fire('player.died', {attacker:'SomeEnemy'}); entity.events.fire('player.spawn', {spawn_node:'spawn12'}); entity.events.fire('player.health.gain', {amount:'100'}); } //space } //onkeyup }
More examples
var event_id = Luxe.events.listen('debug:event1', function(e) { trace('event listener 1 : ' + e); }); Luxe.events.listen( 'debug:event1' , function(e){ trace('event listener 2 : ' + e); }); Luxe.events.listen( 'debug:event1' , function(e){ trace('event listener 3 : ' + e); }); trace( 'registered debug:event1 ' + event_id ); Luxe.events.fire('debug:event1', { name: 'test event', date: Date.now() }); //remove one of them Luxe.events.disconnect( event_id ); //now only two listeners Luxe.events.fire('debug:event1', { name: 'test event', date: Date.now() }); //fire next frame Luxe.events.queue('debug:event1'); //fire two seconds from now Luxe.events.schedule( 2.0 , 'debug:event1');
Understanding Components
code for this guide is found in
samples/guides/5_components/
guide outcome
What are Entities, and what are Components?
You have probably heard about about component/entity systems at some point if you have made games, and with good reason as they are quite useful for the way games are often structured. This was mentioned in the third guide with regards to sprite animation.
The terms are quite straight forward -
- An Entity is a place to attach components to
- A Component adds some behaviour to an Entity, the one that it is attached to
A quick concrete example
- A Sprite on screen is an Entity, an "EnemyTower" sprite
- A "ShootEveryThreeSeconds" is a component
- A "TakeDamageUntilZeroAndThenDie" is a component
This means that generally an entity doesn't do anything on it's own, it's a blank slate. By attaching components to it, it can become more specific at any time. This gives you the flexibility to compose dynamic items at runtime, and an entity is only defined by the components it has, not the code that is inside of it's class. If a tower wanted to fly for some reason, it can.
It's worth knowing that there are a few approaches to "entity component systems" as a concept. In current luxe the component is a class that contains code and is attached to an entity which is a container for components.
Anatomy of a Component class
Component classes have some default functions that are called for you, much like the game class.
Have a look at the comments in the code below to see them.
import luxe.Component; class MyComponent extends Component { override function init() { //called when initialising the component } override function update(dt:Float) { //called every frame for you } override function onreset() { //called when the scene starts or restarts } }
Component spatial transforms
Components are directly tied to the entity they are attached to.
When you change the transform from a component class - it is changing the entity itself.
pos.x = 100 changes the entity position.
It is the same as saying
entity.pos.x = 100.
All of the spatial values,
pos ,
rotation and
scale, affect the entity transform directly. Keep this in mind!
Creating and accessing entities
Creating entities
Entities are created using the same common pattern of
new luxe.Entity(options). You can import
luxe.Entity too. This entity will automatically be added to the default scene unless you ask it not to be. See the EntityOptions for all the flags.
var entity = new Entity({ name:'entity_name' });
The
luxe.Sprite and
luxe.Camera class in luxe extend from the Entity class so that you can add components to them.
Accessing entities from other entities and components
By default, entities are stored in scenes by name so you can fetch them later. This means that when creating your sprite, or entity, you will want to pass the name as well. You fetch the sprite from the scene by accessing the entities property from the scene.
public function init() { var sprite = new Sprite({ name : 'spritename' }); } ... //at a later time var sprite: Sprite = cast Luxe.scene.entities.get('spritename');
Creating and accessing components
All components should typically extend from the
luxe.Component class in order to behave as expected.
Adding components to entities
Components are added to entities using the
add function on the entity, and the
add function returns the instance for convenience. This was also demonstrated in the third beginner's guide.
Remember to name things, since the name of the component is needed later for fetching a reference if you don't have one. Since the same entity can have multiple components of the same type, the name is of the unique instance. i.e If you had two health components on the same entity each one would need to be named to identify them later.
When you create a custom component, the constructor is in your hands, but remember to call super with at least the name of the instance. You can see this in the example later.
Also important to mind the timing of the system events. The constructor will be called when
new is invoked, while
init and other events will probably happen later. This makes the constructor relatively early and can cause confusion when things you expected to exist do not exist yet.
var component : Component; var entity : Entity; override function ready() { // create an entity in the default scene entity = new Entity({ name:'some_entity' }); // add/attach a component to the entity. component = entity.add(new Component({ name:'some_component' }); }
Accessing the entity the component is attached to
When you are inside of a component and want to access the entity that the component is connected to, there is a variable called
entity that is declared in the
Component class as
entity : Entity.
When the entity is attached to a sub class of Entity (like a Sprite, which
extends Entity), you can store a typed reference by using the
cast keyword. The
onadded function is a good place for that. Now you can use the sprite features without casting each time. Like below :
var sprite : Sprite; override function onadded() { sprite = cast entity; sprite.flipx = false; //`Sprite` specific }
Accessing other components attached to the entity
When you want to access other components attached to the entity, you can use the
get function. The
get function is available from inside the component class, or from the
entity.get endpoint.
The parameter passed into the get function is the name of the component instance (which is passed into the constructor of the Component when calling the new function, or from
super in subclasses).
var move : Movement; override function init() { move = cast get('move'); move.speed *= 2; var health : Health = get('health'); health.amount += 10; }
A practical sample
To further demonstrate the component entity stuff, we will do the following :
- Create a sprite (which is an entity)
- Attach a custom component that will rotate the sprite
- Attach a custom component that will make the sprite bounce
You can mix and match components in this way to create a variety of behaviours with little effort, and to change the behaviour on the fly.
Rotate.hx
import luxe.Vector; import luxe.Component; import luxe.Sprite; //This component will rotate the entity //that it is attached to a small amount each frame. //It is assumed that the entity is a Sprite! class Rotate extends Component { public var rotate_speed : Float = 10; public var max_rotate_speed : Float = 60; var sprite : Sprite; override function init() { sprite = cast entity; } override function update( dt:Float ) { //changes to the transform inside //of components affect the entity directly! sprite.rotation_z += rotate_speed * dt; } //update } //Rotate
Bounce.hx
import luxe.Component; class Bounce extends Component { var dir : Int = 1; var speed : Int = 200; override function update( dt:Float ) { pos.y += speed * dir * dt; //hit the bottom? go back up if(pos.y > Luxe.screen.h) { dir = -1; } //hit the middle? go down if(pos.y < Luxe.screen.h/2) { dir = 1; } } //update } //Bounce
The rest of the code can be found in the link at the start of the guide.
Utilities and helpers
Geometry utils
Often when dealing with geometry or geometrical constructions (like procedurally generating spaces) it's helpful to have functions or classes to make building complex things easier.
These functions are accessible through
Luxe.utils.geometry
The geometrical utils object contains a handful of useful functions for exactly that, some examples:
- Determine line segments that make a smooth circle with a radius of r
- Generate a random point within a 1 radius circle area
- Find if a point is inside of this polygon (list of positions, or Geometry)
- Find a point of line intersect with an invisible plane
These functions are easily used by
Luxe.utils.geometry from anywhere.
As the API changes and more additions are added,
you will find the full list of utilities in the
GeometryUtils API docs
General utilities
These are functions that aren't specific to any discipline so there are many different kinds.
Some of the examples include :
These functions are accessible through
Luxe.utils
- generate a uniqueid or UUID
- get a haxe stacktrace as a string
- find assets in a sequence
Math utilities
Haxe already has many maths utilities built in, in the Math class.
On top of that, there are many game or rendering specific maths functions that are convenient to have,
These are all currently static functions, rather than an instance. i.e
Maths.radians( 90 )
These functions are accessible through
luxe.utils.Maths
- is a value within a range (useful for floating point "equality")
- wrap an angle smoothly around a fixed range (like 0~60 or 0~360)
- the nearest power of two value of a number
- smoothstep interpolations
- degrees/radians conversions
- random number helpers
You will find the full list of utilities in the
Maths API docs | https://luxeengine.com/guide/ | CC-MAIN-2017-47 | en | refinedweb |
Library for converting python numpy datastructures to the ROOT output format.
Library for converting python numpy datastructures to the ROOT output format.
- Free software: GPL v2 license
- Documentation:.
The ROOT() data analysis framework is used much in High Energy Physics (HEP) and has its own output format (.root). ROOT can be easily interfaced with software written in C++. For software tools in Python there exists pyROOT(). Unfortunately pyROOT does not work well with python3.4.
broot is a small library that converts data in python numpy ndarrays to ROOT files containing trees with a branch for each array.
The goal of this library is to provide a generic way of writing python numpy datastructures to ROOT files. The library should be portable and supports both python2, python3, ROOT v5 and ROOT v6 (requiring no modifications on the ROOT part, just the default installation). Installation of the library should only require a user to compile to library once or install it as a python package.
Secondly the library can be used to convert other file formats that store information in numpy-like structures, such as HDF5, to ROOT.
Installation
To use broot a user must have installed python, ROOT and be able to compile C++ code. To install:
pip install broot
To use the examples clone the repository and run:
python setup.py develop
Use
To use broot one needs libRootOutput.so and RootWrap.py
RootWrap can be imported in any python file and a new RootOutput instance can be made:
from broot import RootWrap
OUT = RootWrap.RootOutput()
Two example scripts are provided in ‘examples’.
- ‘convert.py’ demonstrates the functions available using some ndarrays.
- ‘hdf_to_root.py’ is a first implementation of a HDF5 to ROOT converter (HDF5 file not provided).
Current support:
- python2
- python3.4
- ROOT v5
- ROOT v6
- compiles on gcc versions with c++11
- compiles on gcc versions without c++11 (see branch no-c++11)
- tested on GNU/Linux
Todo list:
- Proper Makefile instead of ‘compile.sh’
- Python package
- HDF5 converter class
- OS support for Windows and Mac
- Documentation
History
0.1.0 (2015-05-25)
- First release on PyPI.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/broot/ | CC-MAIN-2017-47 | en | refinedweb |
I promised on Twitter to write a blog post explaining why “kata” was the wrong word for the “coding kata” problems presented at CodeMash this past week in Ohio.
First and foremost, I absolutely loved the idea of these coding problems. The problems were very similar to those found in computer science classes (for example, find all the prime numbers between 1 and 100), but the goal was to explore new languages or coding techniques (like TDD). For me, getting to pair programming with a coworker using TDD/xUnit to solve a few coding problems was definitely a highlight of the conference.
However, as a martial artist, to me “kata” is the wrong terminology to use in this context. The correct terminology is either “coding kihons” or probably more accurately “coding kumite”.
What is Kata
There are 3 parts to the study of karate:
You cannot study karate without all 3 components.
In Shotokan karate, the style I practice, there are 26 katas. The movements for each kata never changes. In other words, there is only one way to do the kata, meaning that your stances, kicks, and punches must be exact, and the timing must be correct and sincere, as if you were attaching an invisible opponent. In a karate competition, those who compete in kata are measured based upon who can perform the kata closest to perfection.
Over the course of one’s study of karate, you perform the kata over, and over, and over, and over, just like the 10,000 hours theory in Outliers. (Personally, I am not comfortable doing a kata until I’ve done it at least 100 times.) Not only does the body eventually optimize physically, but something mentally happens. You go into an “auto-pilot” mode. For example, have you ever driven to your house one day, but don’t consciously remember the specifics of the drive, because you’ve done it so many times before? This is what a karateka is trying to achieve in kata (and in kumite, and in all walks of life). The term for entering this “auto-pilot” mode is called mushin, but I digress…
The basic idea of kata is you’re trying to perfect a given series of moves via repetition. There is no deviation. Or from a Zen perspective, you’re trying to reach that state of mushin where you are in total focus and concentration, where the mind and body have become one (which is also illustrated in “kime” where you unlock your ki / chi in a split second, but I digress yet again). My first karate Sensei told me that in kata you imagine that you are fighting the dark side of yourself, all the things you dislike about your character. You visualize these negative aspects and you fight them. Thus, the more you do kata, the more your character improves.
The point I’m trying to make is that there’s a much larger aspect to kata than going through the movements.
What a “Coding Kata” would really look like
Below are a couple of examples of what I think a coding kata could look like:
Kata #1: The Implementation of Hello World in C#
public class Hello1
{
public static void Main()
{
System.Console.WriteLine("Hello World!");
}
}
Kata #2: The Implementation of Bubble Sort in C# (via C# online)
private int[] a = new int[100];
private int x;
public void SortArray()
{
int i;
int j;
int temp;
for( i = (x – 1); i >= 0; i– )
{
for( j = 1; j <= i; j++ )
{
if( a[j-1] > a[j] )
{
temp = a[j-1];
a[j-1] = a[j];
a[j] = temp;
}
}
}
}
And you would practice these katas as many times as possible, until you can code it wearing a blindfold or hold a conversation while coding this method.
In my opinion, coding katas are really just sample code or an algorithm for doing something. Just like a real kata, you know exactly what it is you are supposed to do. You’re just learning to repeat it over and over again, so it becomes second nature.
But, I’m not sure whether repeating these lines of code over and over again would make you a better coder. It would definitely help initially, but I’m not sure the benefits after that point. Maybe a true “coding kata” is mastered much faster than an actual karate kata.
Why Coding Kumite is a better term
Kihon is learning the specific techniques, like punches, kicks, stances, etc. In kihon, you practice these techniques in isolation, and you repeat each individually over and over and over again. To me, coding kihon would be the equivalent of learning the syntax of a language, learning lamda expressions, or learning generics. Kihon is not about solving a problem, but rather learning what tools you have available to solve a problem. Only after one learns kihon, can a karate student learn kata and kumite.
Looking at these coding problems, you could make the argument that your opponent is the problem to solve. And you’re using all your kihon practices to solve the problem, just like you would do in actual sparring (or in kumite.)
Conclusion
Having said all of this, my “Coding Kumite” analogy still falls short. I think only in debugging, where you are trying to find and fix bugs, is actual “coding kumite”. But, writing code to solve a problem still feels much closer to kumite to me than kata or kihon.
For a different perspective, you can check out Steve Andrew’s blog post called Shotokan Development. He watched my Nidan (2nd degree) black belt exam back in November, and wrote a blog post from the perspective of a software engineer on how to apply Shotokan teaching methods to software engineering.
Lastly, I’ve never experienced mushin in coding like i have in karate. Maybe someone out there has and can respond with a counterpoint to this. I’m really curious what others think, and I definitely would love to discuss these concepts further. I really think we could put together a teaching framework based on karate concepts, if anyone is interested in helping me out.
Maybe the next open spaces unconference I can propose a topic on karate terms in coding, but that’s only if Doctor Who is no longer making me need a support group. =D | http://blogs.msdn.com/b/saraford/archive/2010/01/17/coding-is-not-kata.aspx | CC-MAIN-2015-32 | en | refinedweb |
Base class for render handlers. More...
#include <Renderer.h>
Base class for render handlers.
You must define a subclass of Renderer, and pass an instance to the core (RunResources) *before* any SWF parsing begins.
For more info see page Render handler introduction.
================================================================== Machinery for delayed images rendering (e.g. Xv with YV12 or VAAPI) ==================================================================
Masks
Masks are defined by drawing calls enclosed by begin_submit_mask() and end_submit_mask(). Between these two calls, no drawing is to occur. The shapes rendered between the two calls define the visible region of the mask. Graphics that are irrelevant in the context of a mask (lines and fill styles, for example) should be ignored. After use, disable_mask() is called to remove the mask.
Masks may be nested. That is, end_submit_mask() may be followed by a call to begin_submit_mask(). The resulting mask shall be an intersection of the previously created mask. disable_mask() shall result in the disabling or destruction of the last created mask.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::DisplayList::display(), and gnash::DisplayObject::MaskRenderer::MaskRenderer().
Checks if the given bounds are (partially) in the current drawing clipping area.
A render handler implementing invalidated bounds should implement this method to avoid rendering of characters that are not visible anyway. By default this method always returns true, which will ensure correct rendering. If possible, it should be re-implemented by the renderer handler for better performance. 'bounds' contains TWIPS coordinates.
TODO: Take a Range2d<T> rather then a gnash::SWFRect ? Would T==int be good ? TWIPS as integer types ?
See also gnash::renderer::bounds_in_clipping_area
References gnash::SWFRect::getRange().
Referenced by gnash::DisplayObject::boundsInClippingArea().
Given an image, returns a pointer to a bitmap_info class that can later be passed to FillStyleX_bitmap(), to set a bitmap fill style.
================================================================== Caching utitilies for core. ==================================================================
Implemented in gnash::Renderer_cairo.
Return a description of this renderer.
Implemented in gnash::Renderer_cairo.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::DisplayList::display(), and gnash::DisplayObject::MaskRenderer::~MaskRenderer().
Draw a simple, solid filled polygon with a thin (~1 pixel) outline.
This can't be used for Flash shapes but is intended for internal drawings like bounding boxes (editable text fields) and similar. The polygon should not contain self-intersections. If you do not wish a outline or a fill, then simply set the alpha value to zero.
The polygon need NOT be closed (ie: this function will automatically add an additional vertex to close it.
When masked==false, then any potential mask currently active will be ignored, otherwise it is respected.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::TextField::display().
Draws a glyph (font character).
Glyphs are defined just like shape characters with the difference that they do not have any fill or line styles. Instead, the shape must be drawn using the given color (solid fill). Please note that although the glyph paths may indicate subshapes, the renderer is to ignore that information.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::SWF::TextRecord::displayRecords().
Draw a line-strip directly, using a thin, solid line.
Can be used to draw empty boxes and cursors.
an array of 16-bit signed integer coordinates. Even indices (and 0) are x coordinates, while uneven ones are y coordinates.
the number of x-y coordinates (vertices).
the color to be used to draw the line strip.
the SWFMatrix to be used to transform the vertices.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::SWF::TextRecord::displayRecords().
Implemented in gnash::Renderer_cairo.
Referenced by gnash::DynamicShape::display(), gnash::SWF::DefineShapeTag::display(), and gnash::SWF::DefineMorphShapeTag::display().
Draws a video frame.
================================================================== Rendering Interface. ================================================================== The frame has already been decoded and is available in RGB format only.
Implemented in gnash::Renderer_cairo.
Implemented in gnash::Renderer_cairo.
Referenced by gnash::DisplayList::display(), and gnash::DisplayObject::MaskRenderer::MaskRenderer().
Converts pixel coordinates to world coordinates (TWIPS).
Implemented in gnash::Renderer_cairo.
Sets the update region (called prior to begin_display).
================================================================== Prepare drawing area and other utilities ================================================================== The renderer might do clipping and leave the region outside these bounds unchanged, but he is allowed to change them if that makes sense. After rendering a frame the area outside the invalidated region can be undefined and is not used.
It is not required for all renderers. Parameters are world coordinates (TWIPS).
For more info see page Detection of updated regions.
Reimplemented().
Sets the x/y scale for the movie.
================================================================== Interfaces for adjusting renderer output. ==================================================================
Reimplemented in gnash::Renderer_cairo.
Referenced by gnash::GtkAggVaapiGlue::beforeRendering(), gnash::GtkAggXvGlue::render(), and gnash::MovieTester::resizeStage().
Sets the x/y offset for the movie in pixels. This applies to all graphics drawn except the background, which must be drawn for the entire canvas, regardless of the translation.
Reimplemented in gnash::Renderer_cairo.
Converts world coordinates to pixel coordinates.
================================================================== Interface for querying the renderer. ==================================================================
Implemented().
Kept in parallel with movie_root's setting. | http://gnashdev.org/doc/html/classgnash_1_1Renderer.html | CC-MAIN-2015-32 | en | refinedweb |
Introduction
The ASP.NET RadioButtonList is a databound control which displays items as a mutually exclusive set of options in a Web Form. The data which is bound to the label part of each option can be formatted using a format string, but only a single field can be bound to the label.
This article shows an implementation which allows multiple fields to be bound to the item labels. We needed this in order to enrich the display of radio button lists in one of our applications. We’re going to look in detail at the code for the existing ASP.Net controls, which you can browse by disassembling the System.Web assembly in Reflector. The kind of solution presented here should also work equally well for CheckBoxList and DropDownList, because all three controls inherit most of their data binding functionality from the ListControl base class.
GridView Hyperlink Field
Anyone who has used GridView in any detail will know that you can create a type of databound column in the GridView which renders out a Hyperlink. The underlying data which is used to generate the text and URL of the hyperlink, and the format String to produce the actual output, are specified in properties declared in the markup.
Because often multiple fields are required to generate the Url (e.g. if the Url has multiple query string parameters), the DataNavigateUrlFields property accepts a comma delimited list of fields to bind. To display multiple items, the format string simply refers to each item required:
<asp:GridView <Columns> <asp:HyperLinkField </Columns> </asp:GridView>
Inspecting this class in Reflector show that this is automatically converted and stored as a string array:
[TypeConverter(typeof(StringArrayConverter))] [DefaultValue((string)null)] public virtual string[] DataTextFields {get; set;}
During data binding, the object properties corresponding to each field are investigated to yield reflection objects:
// Caches the reflection data for databinding private PropertyDescriptor[] urlFieldDescs; // called when each hyperlink is databound (edited for clarity) private void OnDataBindField(object sender, EventArgs e) { if (this.urlFieldDescs == null) { PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(component); string[] dataNavigateUrlFields = this.DataNavigateUrlFields; int num = dataNavigateUrlFields.Length; this.urlFieldDescs = new PropertyDescriptor[num]; for (int i = 0; i < num; i++) { dataTextField = dataNavigateUrlFields[i]; if (dataTextField.Length != 0) { this.urlFieldDescs[i] = properties.Find(dataTextField, true); if ((this.urlFieldDescs[i] == null) && !base.DesignMode) { throw new HttpException(SR.GetString("Field_Not_Found", new object[] { dataTextField })); } } } } ... }
From these property definitions, reflection is used to get an array of values for the current object:
// get an array of values from the current dataItem’s properties int length = this.urlFieldDescs.Length; object[] dataUrlValues = new object[length]; for (int j = 0; j < length; j++) { if (this.urlFieldDescs[j] != null) { dataUrlValues[j] = this.urlFieldDescs[j].GetValue(component); } } HyperLink link = (HyperLink)sender; string s = this.FormatDataNavigateUrlValue(dataUrlValues); link.NavigateUrl = s;
The actual formatting method just calls String.Format() on the DataFormatString using the array of values:
protected virtual string FormatDataNavigateUrlValue (object value) { string str = string.Empty; if (DataBinder.IsNull(value)) { return str; } string dataTextFormatString = this.DataTextFormatString; if (dataTextFormatString.Length == 0) { return value.ToString(); } return string.Format(CultureInfo.CurrentCulture, dataTextFormatString, new object[] { value }); }
RadioButtonList
So much for HyperLinkField – but why can we not do the same in RadioButtonList? The difference is that RadioButtonList derives indirectly from ListControl, which controls the data binding for values and labels using the DataTextField and DataValueField properties. Binding multiple fields to values or labels is not supported in ListControl, and has not been added to RadioButtonList. To add this functionality, we’re going to derive from RadioButtonList and make a few changes.
Firstly, we need to customise the properties of our new class so that multiple text fields can be specified. The existing DataTextField is inherited and is public in scope, so we can’t really hide that. For simplicity’s sake, the best we can do is just complain if it’s accessed. With DataTextField sabotaged, we add a DataTextFields property as per the example above. By specifying the TypeConverter attribute, ASP.Net can parse the property declaration in the aspx page into an array without any intervention on our part:
public class MultiFieldRBList : RadioButtonList { public override string DataTextField { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } /// <summary> /// A comma separated list of the fields used in the /// databinding of the text for each ListItem /// </summary> [TypeConverter(typeof(StringArrayConverter))] [DefaultValue((string)null)] public virtual string[] DataTextFields { get { object currentValues = base.ViewState["DataTextFields"]; if (currentValues != null) { return (string[])((string[])currentValues).Clone(); } return new string[0]; } set { string[] strArray = base.ViewState["DataTextFields"] as string[]; if (!this.StringArraysEqual(strArray, value)) { if (value != null) { base.ViewState["DataTextFields"] = (string[])value.Clone(); if (base.Initialized) { base.RequiresDataBinding = true; } } else { base.ViewState["DataTextFields"] = null; } } } }
So far, we’re just reading the field list into a property in the class. Now we need to hook into the existing architecture and override PerformDataBinding(). This method is provided by the DataboudControl base class, and is called after data has been selected from the data source:
/// <summary> /// Overrides the default data binding after the select method has been called. /// This allows us to create the ListItems using multiple fields /// </summary> protected override void PerformDataBinding(IEnumerable dataSource) { /// do our stuff to create new list items ///much like the HyperLinkField solution, so not repeated here- ///get the download attached to this article for the full source code... base.PerformDataBinding(null); }
ListControl’s implementation of PerformDataBinding() creates a new set of ListItem objects and adds them to the Items collection. We’re going to replace that with our own implementation, hence the override. There are also some things which ListControl does in this method involving managing the current item selection. This uses private properties which we can’t access from our subclass. This raises the problem of an ugly hack involving copying more code into our subclass, or calling the base class method as well, and then getting the default behaviour adding items as well as the ones we’ve created.
Fortunately, this can be avoided by simply passing in a null instead of the data collection. The reason this works is largely serendipitous, and depends on the fact that if the method in ListControl is called with a null instead of actual data, it does the work we need without clearing down the items. And that’s basically it – include the control in your web application and generate list labels from multiple fields. The following example is in the code download, and takes the names and years of arcade games provided by a data souce:
<GOV:MultiFieldRBList <asp:ObjectDataSource
The resulting output being a formatted RadioButtonList with the values. | http://www.developerfusion.com/code/9723/binding-multiple-fields-to-aspnet-listcontrol-classes/ | CC-MAIN-2015-32 | en | refinedweb |
include a text file into a webpage
Discussion in 'HTML' started by Hervé, Sep 29, 2003.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
Include html text in a webpageGregor Traven, Oct 20, 2004, in forum: HTML
- Replies:
- 11
- Views:
- 11,228
- Jeffrey Silverman
- Oct 21, 2004
#include "file" -vs- #include <file>Victor Bazarov, Mar 5, 2005, in forum: C++
- Replies:
- 4
- Views:
- 657
- Exits Funnel
- Mar 6, 2005
include file in include filePTM, Nov 12, 2007, in forum: HTML
- Replies:
- 1
- Views:
- 394
- Andy Dingley
- Nov 12, 2007
ASP Include file error <!-- #include file="" -->naveeddil, Jan 4, 2008, in forum: .NET
- Replies:
- 0
- Views:
- 704
- naveeddil
- Jan 4, 2008
/* #include <someyhing.h> */ => include it or do not include it?That is the question ....Andreas Bogenberger, Feb 21, 2008, in forum: C Programming
- Replies:
- 3
- Views:
- 1,135
- Andreas Bogenberger
- Feb 22, 2008 | http://www.thecodingforums.com/threads/include-a-text-file-into-a-webpage.155075/ | CC-MAIN-2015-32 | en | refinedweb |
Details
Description
This is basically to cut my teeth for much more ambitious code generation down the line, but I think it could be performance and useful.
the new syntax is:
a = load 'thing' as (x:chararray); define concat InvokerGenerator('java.lang.String','concat','String'); define valueOf InvokerGenerator('java.lang.Integer','valueOf','String'); define valueOfRadix InvokerGenerator('java.lang.Integer','valueOf','String,int'); b = foreach a generate x, valueOf(x) as vOf; c = foreach b generate x, vOf, valueOfRadix(x, 16) as vOfR; d = foreach c generate x, vOf, vOfR, concat(concat(x, (chararray)vOf), (chararray)vOfR); dump d;
There are some differences between this version and Dmitriy's implementation:
- it is no longer necessary to declare whether the method is static or not. This is gleaned via reflection.
- as per the above, it is no longer necessary to make the first argument be the type of the object to invoke the method on. If it is not a static method, then the type will implicitly be the type you need. So in the case of concat, it would need to be passed a tuple of two inputs: one for the method to be called against (as it is not static), and then the 'string' that was specified. In the case of valueOf, because it IS static, then the 'String' is the only value.
- The arguments are type sensitive. Integer means the Object Integer, whereas int (or long, or float, or boolean, etc) refer to the primitive. This is necessary to properly reflect the arguments. Values passed in WILL, however, be properly unboxed as necessary.
- The return type will be reflected.
This uses the ASM API to generate the bytecode, and then a custom classloader to load it in. I will add caching of the generated code based on the input strings, etc, but I wanted to get eyes and opinions on this. I also need to benchmark, but it should be native speed (excluding a little startup time to make the bytecode, but ASM is really fast).
Another nice benefit is that this bypasses the need for the JDK, though it adds a dependency on ASM (which is a super tiny dependency).
Patch incoming.
Issue Links
Activity
Jon, the problem with non-static methods was that what people really wanted was to be able to construct the object to call methods on (as opposed to just invoking the no-arg constructor for every invocation). Any thoughts on how to achieve that?
I'm not quite sure what you mean...could you illustrate it with a quick code snippet?
I am thinking about this and what I think you mean, is that if you say
define concat InvokerGenerator('java.lang.String','concat','String')
you don't want it to do:
return new String().concat((String)input.get(0));
So what it does is it uses the argument list (the 3rd parameter) to find a matching method. If it is not static, then it will expect n+1 arguments, where the 1st one will be an instance of the object on the left (in the case java.lang.String) which will serve as a method receiver. So in the case of concat, it would expect 2 Strings. If it is static, it only expects the argument list.
Is that what you were talking about, or something else?
Not quite – what I mean is that people want to be able to say, effectively:
Foo foo = new Foo(1, 2, 'some_string'); for (Tuple t : tuples) { foo.someMethod(t); }
I think the problem with invokers isn't that they are slow. It's that they are cumbersome to use. Not sure making them faster is really the place to focus.. integrating them deeper would be interesting (why can't we just see that you are saying java.lang.String.concat('foo', $1) and auto-generate or auto-wrap a UDF?).
Ah, ok. Well, first, since we have a more performance, less verbose syntax (mainly not having to say "InvokeForLong" vs "InvokeForString" and so on), I think it's worth including it because it IS faster and cleaner, though I agree that the focus now should be on filling a niche that doesn't currently exist. As I said before, the work so far was in a big part to be a small project to begin to work with ASM with, and benefit pig on the side.
I do like the idea of potentially supporting math function syntax, and then behind the scenes generating the scaffolding. I like that idea a lot. Will mull on how it'd be implemented. Perhaps a first pass would be to support a MATH keyword where if you do MATH.operator(stuff, stuff) it generates the scaffolding, and then it can get more generic? I don't really know how to do this without adding keywords... hmm hmm hmm. Would love thoughts in that vein.
But what you bring up is an interesting use case...sort of the generation of UDF's based on some class that exists. What your proposing sounds like we could generate an accumulator UDF that would apply to any case where you have an object that you instantiate on the mapper, then stream everything through and return. Ideally we'd provide an interface that objects could implement that would serve as the bridge. Perhaps something like
public Object eval(Object... o) throws IOException;
that way they don't even have to depend on pig in the object?
I'm a bit confused about the math keyword stuff. I meant just generically being able to invoke (or generate code that wraps) methods on objects, the way one would initialize an object inside a UDF constructor and then call methods on that method during UDF exec methods.
what about this?
- call of a static method the same way a non-defined UDF can be:
a = load 'thing' as (x:long); b = foreach a generate java.lang.Math.sqrt(x) as sqrt;
- definition of a UDF through an expression (for a strict definition of those expressions).
c = load 'thing' as (x:bag{t:(v:chararray)}); DEFINE join com.google.common.base.Joiner.on('-').skipNulls().join; d = foreach c generate join(x);
An expression being defined as follows:
{class}.{static method|constructor}([{primitive value}[,{primitive value}]*])[.{method}([{primitive value}[,{primitive value}]*])]*.{method}
The exact method to use can be evaluated through reflection using the expression and input schema.
- For methods called on the incoming objects (ex: concat) there is a finite set of those as they are the methods available on the types supported by Pig, they can just all be defined as builtins once and for all. That would simplify the other cases above.
I don't know that I like including all of the methods for types supported by Pig...if anything, I think we should try and move towards a solution that requires less code like that. Why cruft up the code base with a bunch of wrappers?
In that vein, I really like your first proposal. I think it's definitely the direction we should go. As far as a syntax to allow people to call methods that require an object, how about:
a = load 'thing' as (a:chararray, b:chararray); b = foreach a generate a:sqrt(b) as sqrt;
I'd love to use $, but I see the potential for namespace collision to be too great, not to mention ambiguity on the parser. It doesn't have to be : of course, but I don't think . will work. But perhaps I'm wrong? Either way, I think this is one of those cases where we should shoot for a really usable syntax. In both of these cases, we could use the InvokerGenerater and reflection to figure out all of the necessary code.
As far as the defintion of the UDF, I think that the mentioned approach is not a bad one. Another possibility:
c = load 'thing' as (x:bag{t:(v:chararray)}); DEFINE joiner NEW com.google.common.base.Joiner.on('-').skipNulls(); d = foreach c generate joiner:join(x);
This would introduce a new keyword which would allow us to more succinctly reference one object with various methods we want to use. The define would register this string in the namespace, and then when we see a :, first we see if what is to the left is a relation, then we see if it is in the object space. If it is, then we can build up the UDF.
I think we're heading in the right direction, whatever we choose.
Also, as an aside, I think it'd be awesome to make it so people didn't have to do the fully qualified classname, at least for stuff in java.lang... is there a reason why java.lang can't be added to the list returned by PigContext.getPackageImportList()? I suppose people can add it to the pig properties or whatever, but it seems like an intelligent base, at least?
Love Julien's proposal, that reads like what I expect it to read.
Except at some point down that road we discover ourselves writing java without IDE support..
Is supporting chaining method calls a bit much?
Jon, I'm not following your bit of sample syntax with the ":" – possibly due to variable collision (is "a" a defined object, or a relation?)
I was worried about the "writing java without IDE support" issue, but I think as long as our scope is narrow, the win is worth it.
I like Julien's proposal as well, but I guess I feel like we might as well push it to the next level?
DEFINE join com.google.common.base.Joiner.on('-').skipNulls().join; d = foreach c generate join(x);
the hanging "join" at the end of the define statement seems odd to me. Why not just let people call wahtever method they want?
And Dmitriy, I guess the ":" syntax is a little awkward, but the idea is that if you had "relation:method(relation*)", it invoke that method on the relation with the appropriate arguments. Or, in the same vein, if you had "joiner:join(relation*)", it'd invoke the method on the object that will be created viz. the DEFINE statement.
I think some sort of syntax allowing us to call methods of various pig types directly would be pretty neat, though. The syntax could be something bigger to highlight that it's kind of a big thing. "joiner=>join(relation*)", I dunno.
Another thought for this sort of thing:
This might be achievable without bytecode generation and good performance with Java 7 MethodHandles [1][2]. Of course, that would require Java 7, but Java 6 support ends later year [3], about the time Pig 0.11 would be out anyway.
[1]
[2]
[3]
Scott, good idea but moving Pig to java 7 would require running on a Hadoop that's running on Java 7... that might take a while. Though I think Oracle's working on figuring out what that would take, now that they have a hadoop integration team.
this will go in a future version
I don't know why this never got +1'd, I think we got derailed be the conversation near the end. I have updated it and added some tests. I don't see why we shouldn't commit this? It's strictly better than what we have, and I will make a new JIRA for the broader issue of trying to get rid of having to make a builtin for everything.
Oh, I also should add tests. Can leverage the tests Dmitriy used for his. | https://issues.apache.org/jira/browse/PIG-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel | CC-MAIN-2015-32 | en | refinedweb |
Class: (1 review) - 03 Feb 2012 07:59:58 GMT - Search in distribution
This role allows you to automatically "use" the classes your test class is testing, providing the name of the class via the "class_name" attribute. Thus, you don't need to hardcode your class names....DROLSKY/Test-Class-Moose-0.60 - 06 Jul 2015 17:31:15 GMT - Search in distribution
- Test::Class::Moose - Serious testing for serious Perl
UR uses Class::Autouse to handle automagic loading for modules. As long as some part of an application "use"s a Namespace module, the autoloader will handle loading modules under that namespace when they are needed....BRUMMETT/UR-0.44 - 06 Jul 2015 14:36:22 GMT - Search in distribution
The "later" pragma enables you to postpone using a module until any of its methods is needed during runtime....ERWAN/later-0.05 - 23 Apr 2014 16:40:34 GMT - Search in distribution
This documentation describes how to install CPANXR on a host....CLAESJAC/CPANXR-0.08 (1 review) - 07 Oct 2003 20:15:40 GMT - Search in distribution
ToolSet provides a mechanism for creating logical bundles of modules that can be treated as a single, reusable toolset that is imported as one. Unlike CPAN bundles, which specify modules to be installed together, a toolset specifies modules to be imp...DAGOLDEN/ToolSet-1.01 (2 reviews) - 27 Feb 2014 16:05:31 GMT - Search in distribution
ToolSet::y.pm is richly packed ToolSet, it excessivly uses autouse and Class::Autouse and loads commonly used perlmodules and OO Modules on demand. Because of its excessive magic it is not recommended, that you use it anywhere else except for quicknd...JMELTZER/ToolSet-y-0.01 - 27 Jun 2009 10:46:12 GMT - Search in distribution.22.0 (6 reviews) - 01 Jun 2015 17:51:59 GMT - Search in distribution
- perl588delta - what is new for perl v5.8.8
- perl589delta - what is new for perl v5.8.9
- perl5004delta - what's new for perl5.004
- 2 more results from perl »
Acme::Everything is the ultimate run-time loader. With one 'use' line, you effectively load all 20,000,000 odd lines of code in CPAN. Run ANY method in ANY class, and Acme::Everything will download and/or load the module as needed at runtime, includi...ADAMK/Acme-Everything-1.01 - 11 Dec 2007 03:17:52 GMT - Search in distribution
Class::Inspector allows you to get information about a loaded class. Most or all of this information can be found in other ways, but they aren't always very friendly, and usually involve a relatively high level of Perl wizardry, or strange and unusua...ADAMK/Class-Inspector-1.28 - 19 Oct 2012 21:18:26 GMT - Search in distribution
ADAMK/Bundle-CVSMonitor-1.06 - 05 Oct 2004 00:39:46 GMT - Search in distribution | https://metacpan.org/search?q=Class-Autouse | CC-MAIN-2015-32 | en | refinedweb |
Uhm, having a program that can talk to 0.6 and 0.7 servers at the same time
is not the hard problem, it took way less than five minutes to copy in both
generated clients in the same project and rename the C# namespaces. Two apps
and write to disk inbetween? Maven? That's crazy talk. :-D
What I was the most interested in is the fastest way to retrieve all rows
through thrift. get_range_slices or get_multi_slices? And how can I be
certain that I get all rows and the latest version of each row?
Say that I do get_range_slices, with some start token, a limit of 1000 and
consistency level ALL, what will actually happen? Will I get the first 1000
rows from that token as they appear on the server I was talking to, or will
the server fetch in additional rows that it doesn't have, but that fit the
query?
If I do repeated calls to get_range_slices, and set the start_token to the
last token of the previous slice, will I actually get all rows?
/Henrik
On Fri, May 6, 2011 at 09:07, Stephen Connolly <
[email protected]> wrote:
> maven-shade-plugin could help with having two versions of thrift at the
> same time... but you'd need to build some stuff with maven, and some people
> don't like that idea
>
> - Stephen
>
> ---
> Sent from my Android phone, so random spelling mistakes, random nonsense
> words and other nonsense are a direct result of using swype to type on the
> screen
> On 6 May 2011 01:11, "aaron morton" <[email protected]> wrote:
> | http://mail-archives.apache.org/mod_mbox/cassandra-user/201105.mbox/%[email protected]%3E | CC-MAIN-2015-32 | en | refinedweb |
txZMQ 0.
Requirements
Non-Python library required:
- ØMQ library 2.2.x or 3.2.x
Python packages required:
- pyzmq (for CPython)
- pyzmq-ctypes (for 0MQ. For example, special descendants of the ZmqConnection class, ZmqPubConnection and ZmqSubConnection, add special nice features for PUB/SUB sockets.
Request/reply pattern is achieved via DEALER/ROUTER sockets and classes ZmqREQConnection, ZmqREPConection, which provide REQ-REP like semantics in asynchronous case.
Other socket types could be easily derived from ZmqConnection..
Example
Here is an example of using txZMQ:
import sys from optparse import OptionParser from twisted.internet import reactor, defer parser = OptionParser("") parser.add_option("-m", "--method", dest="method", help="0MQ socket connection: bind|connect") parser.add_option("-e", "--endpoint", dest="endpoint", help="0MQ Endpoint") parser.add_option("-M", "--mode", dest="mode", help="Mode: publisher|subscriber") parser.set_defaults(method="connect", endpoint="epgm://eth1;239.0.5.3:10011") (options, args) = parser.parse_args() from txzmq import ZmqFactory, ZmqEndpoint, ZmqPubConnection, ZmqSubConnection import time zf = ZmqFactory() e = ZmqEndpoint(options.method, options.endpoint) if options.mode == "publisher": s = ZmqPubConnection(zf, e) def publish(): data = str(time.time()) print "publishing %r" % data s.publish(data) reactor.callLater(1, publish) publish() else: s = ZmqSubConnection(zf, e) s.subscribe("") def doPrint(*args): print "message received: %r" % (args, ) s.gotMessage = doPrint reactor.run()
The same example is available in the source code. You can run it from the checkout directory with the following commands (in two different terminals):
examples/pub_sub.py --method=bind --endpoint=ipc:///tmp/sock --mode=publisher examples/pub_sub.py --method=connect --endpoint=ipc:///tmp/sock --mode=subscriber):
- 129 downloads in the last day
- 693 downloads in the last week
- 1954 downloads in the last month
- Author: Andrey Smirnov
- License: GPLv2
- Package Index Owner: smira
- DOAP record: txZMQ-0.6.0.xml | https://pypi.python.org/pypi/txZMQ/0.6.0 | CC-MAIN-2015-32 | en | refinedweb |
z.
It was hilarous. Toasts never made me laugh. Beause of you, everytime I see toast now, I'm beginning to see it dancing... O.o Okay, Let just say it was funny and pretty okay.
Rated 2.5 / 5 stars
wiu
The animation itself wasnt that great.. it wasnt bad either..
The sets of pics didnt help much, I wish it was more animated.
But tbh you had me laughing at the dancing bread throught the animation :p
Rated 2 / 5 stars
...
Well, that's a pretty damn old song, but in the past, every time i listen to it, it still sounded funny. Now, i dunno. It's just not as good now. Besides, you're a prick, i was gonna do pretty much just that. Well. It was still ok. 4/10 for you. It would have been 5/10 if you had ACTUALLY CREDITED Bob&Tom in the submission settings, so those people who hadn't heard it could go to their website or something, all you did was say that they made it, not actually give them anything in return (ie. more views on their website)
Rated 0 / 5 stars
failure
dude, all you did was take a song that had been on the bob and tom show for years, and put together a crappy slideshow to it. you fail, hard.
Rated 3 / 5 stars
WTF?
Funny as hell, but what's with the obsession with toast? | http://www.newgrounds.com/portal/view/375705 | CC-MAIN-2015-32 | en | refinedweb |
Trying again without all the nasty HTML -- apologies to all.
[David Abrahams]
. . . I want to put in another plug for itemize(),
Well, since we're doing post-Pronouncement fantasizing . . .
here's one more plug for "itemize".
Noting that
for k,v in adict.iteritems():
is probably as useful as
for i,v in enumerate(aseq):
why not change the spec to:
def itemize(iterable):
... # use iteritems if defined ... try: ... for item in iterable.iteritems(): yield item ... except AttributeError: pass ... # and then everything as before ... i = 0 ... iterator = iter(iterable) ... while 1: ... yield i,iterator.next() ... i += 1 ...
So "itemize(iterable)" returns an iterator that yields (k,v) pairs from the iterable.iteritems() method if defined, else pairs generated by associating 0,1,...,n-1 with n values from iter(iterable).
This allows
for k,v in itemize(adict_or_aseq):
to be written uniformly for dicts and seqs.
And makes
list(itemize(adict)) == adict.items()
work, removing one of the objections to the name "itemize".
Also, if one were to need, for example, a sequence whose indices start at 1 instead of 0, one could define a sequence class that implements iteritems and objects of said class would work just fine with "itemize".
Jim | https://mail.python.org/archives/list/[email protected]/thread/SLZNUFBGNDOC5NWGEA337ELGGB5FRWN5/ | CC-MAIN-2022-33 | en | refinedweb |
Android/SharkBait/Building a toolchain for aarch64-linux-android supports running on, including AArch64, which my GSoC project needs. Unfinished work on this topic is documented at the end of this article.
Step 0 -- Clone source and set up environment variables
Clone the following repositories which hold Google's modifications to the GCC toolchain for Android target:
The rest of this article assumes that $PWD is in the corresponding project for each step.
Export the environment variables to get the paths right:
user $
export TARGET=aarch64-linux-android
user $
export PREFIX=/usr/local/$TARGET
Step 1 -- Build and install binutils
Binutils that can process binaries for the target is needed. Create a separate build directory, configure, compile, and then install for target:
user $
mkdir build && cd build
user $
../binutils-2.27/configure --target=$TARGET --prefix=$PREFIX --enable-werror=no
user $
make -j8 && sudo make install
The
--enable-werror=no option is used to work around failed compiles due to newer versions of GCC generating new warnings. Examine $PREFIX and see if the binutils for the target has been properly installed.
Step 2 -- Install prebuilt libc & headers
The next step, as most cross-compiler creation guides instructs, is to install libc. Unfortunately, we have not figured out a proper way to compile Bionic without Android's (gigantic) build system, so we're just doing a voodoo copy-and-paste. The libc part in this section is expected to get better when we develop a mature solution for building Bionic.
Install sys-kernel/linux-headers-3.10.73 from here. Set up libc and kernel headers.
user $
sudo emerge -av =sys-kernel/linux-headers-3.10.73
user $
mkdir -p $PREFIX/$TARGET/sys-include
user $
cp -Rv bionic/libc/include/* $PREFIX/$TARGET/sys-include/
user $
for a in linux asm asm-generic; do ln -s /usr/include/$a $PREFIX/$TARGET/sys-include/; done
The following object files are needed for a successful generation of the toolchain:
- crtbegin_so.o: from NDK
- crtend_so.o: from NDK
- crtbegin_dynamic.o: from NDK
- crtend_android.o: from NDK
- libc.so: from AOSP
- libm.so: from AOSP
- libdl.so: from AOSP
- ld-android.so: from AOSP
Obtain the above files and place them under $PREFIX/$TARGET/lib for discovery by the linker.
Step 3 -- Build and install GCC
The final part is relatively simple. Just compile GCC and install it into the toolchain prefix:
user $
mkdir build && cd build
user $
../gcc-4.9/configure --target=$TARGET --prefix=$PREFIX --without-headers --with-gnu-as --with-gnu-ld --enable-languages=c,c++
user $
make -j8 && sudo make install
Step 4 -- Build and verify "Hello, world!"
We need to verify that the toolchain is really working by creating executables for our target. Write a simple "Hello, world!" program:
hello.cc
#include <iostream> int main() { std::cout << "Hello, world!" << std::endl; return 0; }
Compile it with:
user $
aarch64-linux-android-g++ hello.cc -o hello -pie -static-libgcc -nostdinc++ -I/usr/local/aarch64-linux-android/include/c++/v1 -nodefaultlibs -lc -lm -lc++
Explanation for the commandline options used:
-pie: Android requires Position Independent Executables property for dynamically-linked executables.
-static-libgcc: Android platform does not have libgcc_s.so available; we'll have to make it statically-linked.
-nostdinc++and
-I...: Android uses libc++ as its default STL implementation, and we need this to get the right symbols used by including right C++ headers.
- This suggests that a correct copy of libcxx headers should be present at the path shown above.
-nodefaultlibsand
-l...: by default GCC links to libstdc++, which is not desirable in this case; we manually specify what to consider during the linking process.
- This suggests that libc++.so should be present in the linker search path.
What else?
The work is not finished yet on this topic:
- We still copy-and-paste libc object files instead of properly building them separately. Proper packaging of bionic is needed.
- The toolchain build process needs integrating with crossdev, Gentoo's flexible cross-compile toolchain generator. | https://wiki.gentoo.org/wiki/Android/SharkBait/Building_a_toolchain_for_aarch64-linux-android | CC-MAIN-2022-33 | en | refinedweb |
.
Often you want to use your own python code in your Airflow deployment, for example common code, libraries, you might want to generate DAGs using shared python code and have several DAG python files.
You can do it in one of those ways:
add your modules to one of the folders that Airflow automatically adds to
PYTHONPATH
add extra folders where you keep your code to
PYTHONPATH
package your code into a Python package and install it together with Airflow.
The next chapter has a general description of how Python loads packages and modules, and dives deeper into the specifics of each of the three possibilities above.
How package/modules loading in Python works¶
The list of directories from which Python tries to load the module is given
by the variable
sys.path. Python really tries to
intelligently determine the contents
Adding directories to the PYTHONPATH..
Also make sure to Add init file to your folders.
Typical structure of packages¶
This is an example structure that you might have in your
dags folder:
<DIRECTORY ON PYTHONPATH> | .airflowignore -- only needed in ``dags`` folder, see below | -- my_company | __init__.py | common_package | | __init__.py | | common_module.py | | subpackage | | __init__.py | | subpackaged_util_module.py | | my_custom_dags | __init__.py | my_dag1.py | my_dag2.py | base_dag.py
In the case above, these are the ways you could import the python files:
from my_company.common_package.common_module import SomeClass from my_company.common_package.subpackage.subpackaged_util_module import AnotherClass from my_company.my_custom_dags.base_dag import BaseDag
You can see the
.airflowignore file at the root of your folder. This is a file that you can put in your
dags folder to tell Airflow which files from the folder should be ignored when the Airflow
scheduler looks for DAGs. It should contain either regular expressions (the default) or glob expressions
for the paths that should be ignored. You do not need to have that file in any other folder in
PYTHONPATH (and also you can only keep shared code in the other folders, not the actual DAGs).
In the example above the dags are only in
my_custom_dags folder, the
common_package should not be
scanned by scheduler when searching for DAGS, so we should ignore
common_package folder. You also
want to ignore the
base_dag.py if you keep a base DAG there that
my_dag1.py and
my_dag2.py derives
from. Your
.airflowignore should look then like this:
my_company/common_package/.* my_company/my_custom_dags/base_dag\.py
If
DAG_IGNORE_FILE_SYNTAX is set to
glob, the equivalent
.airflowignore file would be:
my_company/common_package/ my_company/my_custom_dags/base_dag.py
Built-in
PYTHONPATH entries in Airflow¶
Airflow, when running dynamically adds three directories to the
sys.path:
The
dagsfolder: It is configured with option
dags_folderin section
[core].
The
configfolder: It is configured by setting
AIRFLOW_HOMEvariable (
{AIRFLOW_HOME}/config) by default.
The
pluginsFolder: It is configured with option
plugins_folderin section
[core].
Note
The DAGS folder in Airflow 2 should not be shared with the webserver. While you can do it, unlike in Airflow 1.10,
Airflow has no expectations that the DAGS folder is present in the webserver. In fact it’s a bit of
security risk to share the
dags folder with the webserver, because it means that people who write DAGS
can write code that the webserver will be able to execute (ideally the webserver should
never run code which can be modified by users who write DAGs). Therefore if you need to share some code
with the webserver, it is highly recommended that you share it via
config or
plugins folder or
via installed Airflow packages (see below). Those folders are usually managed and accessible by different
users (Admins/DevOps) than DAG folders (those are usually data-scientists), so they are considered
as safe because they are part of configuration of the Airflow installation and controlled by the
people managing the installation.
Best practices for module loading¶
There are a few gotchas you should be careful about when you import your code.
Use unique top package name¶
It is recommended that you always put your dags/common files in a subpackage which is unique to your
deployment (
my_company in the example below). It is far too easy to use generic names for the
folders that will clash with other packages already present in the system. For example if you
create
airflow/operators subfolder it will not be accessible because Airflow already has a package
named
airflow.operators and it will look there when importing
from airflow.operators.
Don’t use relative imports¶
Never use relative imports (starting with
.) that were added in Python 3.
This is tempting to do something like that it in
my_dag1.py:
from .base_dag import BaseDag # NEVER DO THAT!!!!
You should import such shared dag using full path (starting from the directory which is added to
PYTHONPATH):
from my_company.my_custom_dags.base_dag import BaseDag # This is cool
The relative imports are counter-intuitive, and depending on how you start your python code, they can behave differently. In Airflow the same DAG file might be parsed in different contexts (by schedulers, by workers or during tests) and in those cases, relative imports might behave differently. Always use full python package paths when you import anything in Airflow DAGs, this will save you a lot of troubles. You can read more about relative import caveats in this Stack Overflow thread.
Add
__init__.py in package folders¶
When you create folders you should add
__init__.py file as empty files in your folders. While in Python 3
there is a concept of implicit namespaces where you do not have to add those files to folder, Airflow
expects that the files are added to all packages you added.
Inspecting your
PYTHONPATH loading configuration¶
Adding directories to the
PYTHONPATH]
Creating a package in Python¶
This is most organized way of adding your custom code. Thanks to using packages, you might organize your versioning approach, control which versions of the shared code are installed and deploy the code to all your instances and containers in controlled way - all by system admins/DevOps rather than by the DAG writers. It is usually suitable when you have a separate team that manages this shared code, but if you know your python ways you can also distribute your code this way in smaller deployments. You can also install your Plugins and Provider packages as python packages, so learning how to build your package is handy.
Here is how to create your package:", packages=setuptools.find_packages(), ). | https://airflow.apache.org/docs/apache-airflow/2.3.1/modules_management.html | CC-MAIN-2022-33 | en | refinedweb |
I can’t seem to figure out how to remove the hooks without the handles, despite being able to detect the hooks on the target module.
from typing import Tuple import torch model = torch.nn.Sequential(*[torch.nn.Identity()] * 4) def forward_hook(module: torch.nn.Module, input: Tuple[torch.Tensor], output: torch.Tensor) -> None: pass def find_and_remove_hooks(m): print("Hooks") c = 0 module_name = type(m).__name__ print(m._forward_hooks) for k, v in m._forward_hooks.items(): c+=1 m._forward_hooks[v].remove() # Doesn't work print("All " + str(c) + " hooks found") model[1].register_forward_hook(forward_hook) find_and_remove_hooks(model[1]) | https://discuss.pytorch.org/t/how-do-i-remove-forward-hooks-on-a-module-without-the-hook-handles/140393 | CC-MAIN-2022-33 | en | refinedweb |
legal_t5_small_multitask_de_en model
Model on translating legal text from Deustch to English._de_en model, rather the unsupervised task is added with all the translation task to realize the multitask learning scenario.
Intended uses & limitations
The model could be used for translation of legal texts from Deustch to English.
How to use
Here is how to use this model to translate legal text from Deustch to English in PyTorch:
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_de_en"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_de_en", do_lower_case=False, skip_special_tokens=True), device=0 ) de_text = "Der zuständige Ausschuss wacht darüber, dass alle Angaben, die die Ausübung des Mandats eines Mitglieds bzw. die Rangfolge der Stellvertreter beeinflussen können, dem Parlament unverzüglich von den Behörden der Mitgliedstaaten und der Union - unter Angabe deren Wirksamwerdens im Falle einer Benennung - übermittelt werden." pipeline([de_text], max_length=512)
Training data
The legal_t5_small_multitask_de_en | https://huggingface.co/SEBIS/legal_t5_small_multitask_de_en | CC-MAIN-2022-33 | en | refinedweb |
XmRemoveTabGroup - A function that removes a tab group
#include <Xm/Xm.h> void XmRemoveTabGroup (tab_group) Widget tab_group;
This function is obsolete and its behavior is replaced by setting XmNnavigationType to XmNONE. XmRemoveTabGroup removes a widget from the list of tab groups associated with a particular widget hierarchy and sets the widget's XmNnavigationType to XmNONE.
XmAddTabGroup(3X), XmManager(3X), and XmPrimitive(3X). | http://www.vaxination.ca/motif/XmRemoveTab_3X.html | CC-MAIN-2022-33 | en | refinedweb |
snf-ganeti:7b80424fbcbf46b2753ee619814ba6fb554ae6ee commits 2009-02-03T15:42:42+00:00 Fix unittest encoding breakage 2009-02-03T15:42:42+00:00 Iustin Pop [email protected] Due to the fact that we sanitize now the output from environment scripts, the unittest needs to be adjusted. My bad for not checking it. Reviewed-by: imsnah Allow gnt-node evacuate to use an iallocator 2009-02-03T14:45:53+00:00 Iustin Pop [email protected] Add gnt-node migrate 2009-02-03T14:45:43+00:00 Iustin Pop iustin@google An attempt at fixing some encoding issues 2009-02-03T14:45:32+00:00 Iustin Pop [email protected] This patch unifies the hardcoded re-encoding attempts into a single function in utils.py. This function is used to take either an unicode or str object and convert it to a ASCII-only str object which can be safely displayed and transmitted. We replace then the current manual re-encodings with this function. In mcpu we stop re-encoding the hooks output and instead we do it right at the hook generation in backend.py. This passes on my 'custom' lvs output with non-ASCII chars. But there are probably other places we will need to fix. Reviewed-by: ultrotter lvmstrap: allow removable devices too 2009-02-03T14:45:14+00:00 Iustin Pop [email protected] For testing or just in case a device is exported by a bad driver with the 'removable' flag set, this patch adds a flag to lvmstrap that allows it to use these devices too. Reviewed-by: ultrotter Documentation: update the gnt-os manpage 2009-02-03T14:45:03+00:00 Iustin Pop [email protected] This patch updates the gnt-os man page and the common footer page for ganeti 2.0. Reviewed-by: ultrotter Small patch for handling errors in node add 2009-02-03T10:55:30+00:00 Iustin Pop [email protected] This small path hopefully fixes the handling of ssh verify errors in node add (note: untested). Reviewed-by: ultrotter ssh: more details on failure 2009-02-03T10:55:19+00:00 Iustin Pop [email protected] In case we fail without output from the ssh command, we should at least add the exit code or any other failure reason to the error message, and log it and the cmdline used to the node daemon log. Reviewed-by: imsnah Give a sane permission to the known_host file 2009-02-03T10:45:12+00:00 Guido Trotter [email protected] Reviewed-by: iustinp A couple of small changes to the OS environment 2009-02-02T14:49:10+00:00 Iustin Pop [email protected] This patch correctly exports the mode of disks (rw/ro) and also exports the instance OS. Reviewed-by: imsnah Whitespace change: bad indentation in constants.py 2009-02-02T11:23:48+00:00 Iustin Pop [email protected] This patch only changes some indentation in constants.py. Reviewed-by: imsnah Return error messages in node add ssh handling 2009-02-02T11:23:40+00:00 Iustin Pop [email protected] When the rpc call node_add fails, we don't have any error message. This patch changes the call to return (status, data) so that the user can see the correct error message. Reviewed-by: imsnah gnt-instance: support no_PARAMETER value 2009-02-01T09:48:37+00:00 Guido Trotter [email protected] Since parameters get set to False if a no_ is prefixed don't try to interpret those boolean values, and pass them unchanged. Reviewed-by: iustinp LUQueryClusterInfo: filter hvparams 2009-02-01T09:48:23+00:00 Guido Trotter [email protected] We don't need to show hvparams for hypervisors which are not enabled on the cluster. Reviewed-by: iustinp KVM: advise about VNC support on GetShellCommand 2009-01-29T15:51:58+00:00 Guido Trotter [email protected] Reviewed-by: iustinp KVM: enable VNC if a VNC_BIND_ADDRESS is defined 2009-01-29T15:51:44+00:00 Guido Trotter [email protected] We'll also enable a tablet usb device, as suggested by the kvm man page. Reviewed-by: iustinp KVM: Allow the HV_VNC_BIND_ADDRESS parameter 2009-01-29T15:51:29+00:00 Guido Trotter [email protected] Reviewed-by: iustinp LUAddNode: copy the vnc password file also for KVM 2009-01-29T15:51:14+00:00 Guido Trotter [email protected] Before we used to copy the file if xen-hvm was enabled on the cluster, no we'll do that if any enabled hypervisor is in the new HTS_USE_VNC group. Reviewed-by: iustinp Add HT_KVM to HTS_REQ_PORT 2009-01-29T15:51:00+00:00 Guido Trotter [email protected] HT_KVM doesn't technically require a port, but if it has one it can give vnc displays to instances. Reviewed-by: iustinp KVM: make the kernel and initrd arguments optional 2009-01-29T15:50:38+00:00 Guido Trotter [email protected] KVM: add the HV_SERIAL_CONSOLE parameter 2009-01-29T15:47:21+00:00 Guido Trotter [email protected] GetShellCommand: get hvparams and beparams 2009-01-29T15:47:06+00:00 Guido Trotter [email protected] Sometimes the hypervisor will use the instance hv and/or be parameters to determine the best shell command. This is not possible, though, currently, as the instance hv/beparams are not filled, so we have to pass the filled versions separately. Reviewed-by: iustinp Implement software release version checks too 2009-01-29T15:09:21+00:00 Iustin Pop [email protected] Currently the LUVerifyCluster only reports the protocol version changes, not software ones. This is useful to know/monitor, so we add this too as a warning. Reviewed-by: ultrotter gnt-instance list: accept input names 2009-01-29T15:09:11+00:00 Iustin Pop [email protected] Currently gnt-instance list will refuse to take arguments, and always return the full list of instances. This patch allows it to pass names to LUQueryInstances, so that we restrict the input to a given set of instances. Reviewed-by: ultrotter LUQueryInstances: keep the given order of names 2009-01-29T15:08:57+00:00 Iustin Pop iustin@google locking.LockSet: don't modify input arguments 2009-01-29T15:08:46+00:00 Iustin Pop iustin@google Re-wrap some lines to keep them under 80 chars 2009-01-29T15:08:34+00:00 Iustin Pop [email protected] This non-code change rewraps some lines in locking.py to keep them under 80 chars. Reviewed-by: ultrotter Check that instance exists before confirm. queries 2009-01-29T15:08:24+00:00 Iustin Pop iustin@google RAPI: tag work 2009-01-29T15:03:42+00:00 Oleksiy Mishchenko [email protected] Generalize tag work for instances/nodes/cluster tag management. Reviewed-by: iustinp RAPI: rlib1 removal 2009-01-29T15:03:00+00:00 Oleksiy Mishchenko [email protected] The resources we still need moved to rlib2. Reviewed-by: iustinp RAPI: Implement /2 resource 2009-01-29T15:02:20+00:00 Oleksiy Mishchenko [email protected] Reviewed-by: iustinp RAPI: Deprecate version Rapi version1 2009-01-29T14:52:41+00:00 Oleksiy Mishchenko [email protected] It is impossible to keep backward compatibility due to significant changes in the Ganeti core. Reviewed-by: iustinp Fix gnt-cluster modify -H and offline nodes 2009-01-28T19:06:11+00:00 Iustin Pop [email protected] Reviewed-by: ultrotter Actually mark drives as read-only if so configured 2009-01-28T19:06:00+00:00 Iustin Pop [email protected] This patch correctly marks the drives as read-only for Xen, and raises and exception for KVM since it doesn't support read-only drives. Reviewed-by: ultrotter Fix some issues related to job cancelling 2009-01-28T14:46:58+00:00 Iustin Pop iustin@google Xen: use utils.WriteFile for the instance configs 2009-01-27T16:44:38+00:00 Guido Trotter [email protected] Also raise HypervisorError rather than OpExecError. Reviewed-by: iustinp Xen: use utils.Readfile to read the VNC password 2009-01-27T16:44:23+00:00 Guido Trotter [email protected] Also raise HypervisorError rather than OpExecError. Reviewed-by: iustinp Implement disk verify checks in config verify 2009-01-27T15:41:38+00:00 Iustin Pop [email protected] This patch adds a simple check that the 'mode' attribute of top-level disks is correct. It does not recurse over children. The framework could be extended with other checks in the future. Reviewed-by: imsnah Fix the mode attribute of newly-created disks 2009-01-27T15:41:26+00:00 Iustin Pop [email protected] Rework the multi-instance gnt commands 2009-01-27T15:41:15+00:00 Iustin Pop iustin@google | https://git.minedu.gov.gr/itminedu/snf-ganeti/-/commits/7b80424fbcbf46b2753ee619814ba6fb554ae6ee?format=atom | CC-MAIN-2022-33 | en | refinedweb |
.
The loss of a cable is a textbook example of how a single change can immediately disrupt the entire network. To enable rapid response in such situations, our MAGE 🧙♂️ team has been adding online graph algorithms (Node2Vec, PageRank & community detection), whose magic ✨ is in updating previous outputs instead of computing everything anew.
Explore the Global Shipping Network
Prerequisites
In this tutorial, you will use Memgraph with:
- Docker,
- GQLAlchemy,
- and Jupyter Notebook installed.
Data
The dataset used in this blogpost represents the global network of submarine internet cables in the form of a graph whose nodes stand for landing points, the cables connecting them represented as relationships.
Landing points and cables have unique identifiers (
id), and the
landing points also have names (
name).
A giant thank you to TeleGeography for sharing the dataset for their submarine cable map. ❤️
Exploration
Our Setup
With Docker installed, we will start Memgraph through it using the following command:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
We will be working with Memgraph from a Jupyter notebook. To interact with Memgraph from there, we use GQLAlchemy.
Betweenness Centrality
Before we start exploring our graph, let’s quickly refresh that betweenness centrality of a node is the fraction of shortest paths between all pairs pairs of nodes in the graph that pass through it:
In the above expression, n is the node of interest, i, j are any two distinct nodes other than n, and σij(n) is number of shortest paths from i to j (going through n).
The analysis of (internet) traffic flows, like what we are doing here, is an established use case for this metric.
Jupyter notebook
The Jupyter notebook is here – we can now go for a deep dive 🤿 in the data!
Preliminaries
First, let’s connect to our instance of Memgraph with GQLAlchemy and load the dataset.
from gqlalchemy import Memgraph
def load_dataset(path: str): with open(path, mode='r') as dataset: for statement in dataset: memgraph.execute(statement)
memgraph = Memgraph("127.0.0.1", 7687) # connect to running instance memgraph.drop_database() # make sure it’s empty load_dataset('data/input.cyp') # load dataset
Example
With everything set up, calling the
betweenness_centrality_online module is a matter of a single Cypher query.
As we are analyzing how changes affect the undersea internet cable network, we save the computed betweenness centrality scores for later.
memgraph.execute( """ CALL betweenness_centrality_online.set() YIELD node, betweenness_centrality SET node.bc = betweenness_centrality; """ )
Let’s see which landing points have the highest betweenness centrality score in the network:
most_central = memgraph.execute_and_fetch( """ MATCH (n: Node) RETURN n.id AS id, n.name AS name, n.bc AS bc_score ORDER BY bc_score DESC, name ASC LIMIT 5; """ ) for node in most_central: print(node)
{'id': 15, 'name': 'Tuas, Singapore', 'bc_score': 0.29099145440251273} {'id': 16, 'name': 'Fortaleza, Brazil', 'bc_score': 0.13807572163430684} {'id': 467, 'name': 'Toucheng, Taiwan', 'bc_score': 0.13361801370831092} {'id': 62, 'name': 'Manado, Indonesia', 'bc_score': 0.12915295031722301} {'id': 123, 'name': 'Balboa, Panama', 'bc_score': 0.12783714460527598}
Two of the above results, Singapore and Panama, have become international trade hubs owing to their favorable geographic position. They are highly influential nodes in other networks as well.
Dynamic Operation
This part brings us to MAGE’s newest algorithm – iCentral dynamic betweenness centrality by Fuad Jamour and others.[1]. After each graph update, iCentral can be run to update previously computed values without having to process the entire graph, going hand in hand with Memgraph’s graph streaming capabilities.
You can set this up in Memgraph with triggers – pieces of Cypher code that run after database transactions.
memgraph.execute( """ CREATE TRIGGER update_bc BEFORE COMMIT EXECUTE CALL betweenness_centrality_online.update(createdVertices, createdEdges, deletedVertices, deletedEdges) YIELD *; """ )
Let’s now see what happens when a shark (or something else) cuts off a submarine internet cable between Tuas in Singapore and Jeddah in Saudi Arabia.
memgraph.execute("""MATCH (n {name: "Tuas, Singapore"})-[e]-(m {name: "Jeddah, Saudi Arabia"}) DELETE e;""")
The above transaction activates the
update_bc trigger, and previously computed betweenness centrality scores are updated using iCentral.
With the cable out of function, internet data must be transmitted over different routes. Some nodes in the network are bound to experience increased strain and internet speed might thus deteriorate. These nodes likely saw their betweenness centrality score increase. To inspect that, we’ll retrieve the new scores with
betweenness_centrality_online.get() and compare them with the previously saved ones.
highest_deltas = memgraph.execute_and_fetch( """ CALL betweenness_centrality_online.get() YIELD node, betweenness_centrality RETURN node.id AS id, node.name AS name, node.bc AS old_bc, betweenness_centrality AS bc, betweenness_centrality - node.bc AS delta ORDER BY abs(delta) DESC, name ASC LIMIT 5; """ ) for result in highest_deltas: print(result) memgraph.execute("DROP TRIGGER update_bc;")
{'id': 111, 'name': 'Jeddah, Saudi Arabia', 'old_bc': 0.061933737931979434, 'bc': 0.004773934386713466, 'delta': -0.057159803545265966} {'id': 352, 'name': 'Songkhla, Thailand', 'old_bc': 0.05259842296405675, 'bc': 0.07514804741735281, 'delta': 0.022549624453296065} {'id': 15, 'name': 'Tuas, Singapore', 'old_bc': 0.29099145440251273, 'bc': 0.2730690696075149, 'delta': -0.017922384794997803} {'id': 175, 'name': 'Yanbu, Saudi Arabia', 'old_bc': 0.0648358824682235, 'bc': 0.07561992914231867, 'delta': 0.010784046674095174} {'id': 210, 'name': 'Dakar, Senegal', 'old_bc': 0.08708567541545133, 'bc': 0.09412362761485257, 'delta': 0.007037952199401246}
As seen above, the network landing point in Songkhla, Thailand had its score increase by 42.87% after the update. Conversely, other landing points became less connected to the rest of the network: the centrality of the Jeddah node in Saudi Arabia almost dropped to zero.
Performance
iCentral builds upon the Brandes algorithm[2] and adds the following improvements in order to increase performance:
- Process only the nodes whose betweenness centrality values change: after an update, betweenness centrality scores stay the same outside the affected biconnected component.
- Avoid repeating shortest-path calculations: use prior output if it’s possible to tell it’s still valid; if new shortest paths are needed, update the prior ones instead of recomputing.
- Breadth-first search for computing graph dependencies does not need to be done out of nodes equidistant to both endpoints of the updated relationship.
- The BFS tree used for computing new graph dependencies (the contributions of a node to other nodes’ scores) can be determined from the tree obtained while computing old graph dependencies.
bcc_partition = memgraph.execute_and_fetch( """ CALL biconnected_components.get() YIELD bcc_id, node_from, node_to RETURN bcc_id, node_from.id AS from_id, node_from.name AS from_name, node_to.id AS to_id, node_to.name AS to_name LIMIT 15; """ ) for relationship in bcc_partition: print(relationship)
Graphs of infrastructural networks, such as this one, fairly often consist of a number of smaller biconnected components (BCCs). As iCentral recognizes that betweenness centrality scores are unchanged outside the affected BCC, this can result in a significant speedup.
Algorithms: Online vs. Offline
An important property of algorithms is whether they are online or offline. Online algorithms can update their output as more data becomes available, whereas offline algorithms have to redo the entire computation.
The gold-standard offline algorithm for betweenness centrality is the one by Ulrik Brandes[2]: it works by building a shortest path tree from each node of the graph and efficiently counting the shortest paths through dynamic programming.
In How to Identify Essential Proteins using Betweenness Centrality we built a web app to visualize protein-protein interaction networks with help of betweenness centrality.
However, we can easily see that updates often change only a tiny piece of the whole graph. Scalability means that one needs to take advantage of this by cutting down on repetition. To this end, we employed the fastest algorithm so far: iCentral. Let’s see how it stacks up against the Brandes algorithm in complexity.
- Brandes: runs in O(|V||E|) time and uses O(|V| + |E|) space on a graph with |V| nodes and |E| relationships,
- iCentral: runs in O(|Q||EBC|) time and uses O(|VBC| + |EBC|) space. |VBC| and |EBC| are the counts of nodes and relationships in the affected portion of the graph; |Q| ≤ |VBC| (see the Performance section for the |Q| set). NB: iCentral also saves time by avoiding repeated shortest-path calculations where possible; this varies by graph.
Another key trait of iCentral is that it can be run fully in parallel, just like the Brandes algorithm. With N parallel instances, this has the algorithm run N times faster, at the expense of requiring N times more space (each thread keeps a copy of the data structures).
Takeaways
Betweenness centrality is a very common graph analytics tool, but it is nevertheless challenging to scale up to dynamic graphs. To solve this, Memgraph has implemented the fastest yet online algorithm for it – iCentral; it joins our growing suite of streaming graph analytics.
Our R&D team is working hard on streaming graph ML and analytics. We’re happy to discuss it with you – ping us at Discord!
What’s Next?
It’s time to build with Memgraph! 🔨
Check out our MAGE open-source graph analytics suite and don’t hesitate to give a star ⭐ or contribute with new ideas. If you have any questions or you want to share your work with the rest of the community, join our Discord server.
References
[1] Jamour, F., Skiadopoulos, S., & Kalnis, P. (2017). Parallel algorithm for incremental betweenness centrality on large graphs. IEEE Transactions on Parallel and Distributed Systems, 29(3), 659-672.
[2] Brandes, U. (2001). A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25(2), 163-177. | https://memgraph.com/blog/analyze-infrastructure-networks-with-dynamic-betweenness-centrality | CC-MAIN-2022-33 | en | refinedweb |
Comment on Tutorial - Read from a COM port using Java program By Steven Lim
Comment Added by : Michaelruink
Comment Added at : 2017-06-08 00:20:46
Comment on Tutorial : Read from a COM port using Java program By Steven Lim
Michaelru run the program,Iam getting this
View Tutorial By: anitha at 2008-12-17 21:37:36
2. import java.util.*;
public class de
View Tutorial By: Virudada at 2012-05-05 06:27:22
3. I have maked all you said and when I open jsp I am
View Tutorial By: John at 2013-01-10 14:21:30
4. sir i want to knw abt all simple java programs pls
View Tutorial By: sakthi at 2011-08-29 14:06:10
5. What is meant ClasspathResource why do we use it i
View Tutorial By: anil kumar at 2008-06-03 05:41:36
6. i like java but java many program are confused i a
View Tutorial By: savaliya hardik at 2012-09-11 03:30:10
7. to get more and clear information about java progr
View Tutorial By: SWARNA at 2010-09-13 04:23:10
8. Very good explanation . I was very confuse but rea
View Tutorial By: Shahabaj at 2010-08-10 08:58:39
9. the website is good and nice helps a lot add more
View Tutorial By: wkkjnj at 2010-07-20 04:34:13
10. Very good tutorial, but I have a small question he
View Tutorial By: Mohammad NABIL at 2010-07-20 05:58:19 | https://www.java-samples.com/showcomment.php?commentid=41294 | CC-MAIN-2022-33 | en | refinedweb |
Note This guide shows how to build an app using our Amplify Libraries for iOS (Preview) and the Amplify CLI toolchain. To use the existing AWS Mobile SDK for iOS instead, click here.
Getting Started
Build an iOS app using the Amplify Framework which contains:
- Amplify integrating the Amplify libraries in your iOS app. You will create a “Note app” with a GraphQL API and to store and retrieve items in a cloud database, as well as receive updates over a realtime subscription using the API category. Alternatively the DataStore category can be used for local-first programming, offline access, and object sync with GraphQL..
a. From a terminal window, navigate into your Xcode project’s root application directory and run the following commands:
$ cd ./YOUR_PROJECT_FOLDER $ pod init
b. Open the created
Podfile in a text editor and add the pods for the core Amplify Framework components.
target :'YOUR-APP-NAME' do use_frameworks! pod 'amplify-tools' pod 'Amplify' pod 'AWSPluginsCore' pod 'AmplifyPlugins/AWSAPIPlugin' # other pods end
c. Install dependencies by running the following command:
pod install --repo-update
d. Close your Xcode project and reopen it using
./YOUR-PROJECT-NAME.xcworkspace file. Remember to always use
./YOUR-PROJECT-NAME.xcworkspace to open your Xcode project from now on.
e. Build your Xcode project.
Once the build is successful, three files are generated:
- amplifyconfiguration.json and awsconfiguration.json: Rather than configuring each service through a constructor or constants file, the Amplify Framework for iOS supports configuration through centralized files called amplifyconfiguration.json and awsconfiguration.json which define all the regions and service endpoints to communicate.
- amplifyxc.config : This file is used to configure modelgen and push to cloud actions.
Step 2: Generate your Model files
The GraphQL schema is auto-generated can be found under
amplify/backend/api/amplifyDatasource/schema.graphql.
Learn more about annotating GraphQL schemas and data modeling.
a. In this guide, use this schema:
type Task @model { id: ID! title: String! description: String status: String } type Note @model { id: ID! content: String! }
b. Open
amplifyxc.config and update
modelgen to
true
modelgen=true
c. Run build in Xcode (
CMD+B). Amplify will automatically generate the Model files using the graphql schema. You should see the following Model files under
amplify/generated/models
AmplifyModels.swift Note.swift Note+Schema.swift Task.swift Task+Schema.swift
d. Drag the
models directory over to your project, click on each file, and on the right panel, under
Target Membership, check your app target to add it.
e. Next, build the project.
Step 3: Add API and Database
a. Run
amplify configure from the root of your application folder to set up Amplify with your AWS account.
b. Click on
amplifyxc.config and update
push to
true
push=true
c. AppSync offers server-side conflict resolution that does the heavy lifting of managing data conflicts. This is only supported when using Amplify DataStore. So for now, disable conflict resolution.
Click on
amplify/backend/api/amplifyDatasource/transform.conf.json and delete the
ResolverConfig section. Remove this section:
"ResolverConfig": { "project": { "ConflictHandler": "AUTOMERGE", "ConflictDetection": "VERSION" } }
d. Run build in Xcode (
CMD+B). This starts provisioning the backend cloud resources.
Optional: You can view the provisioned backend in the AppSync console by running the command
amplify console api and choosing
GraphQL.
e. Open
amplifyconfiguration.json and you should see the
api section containing your backend like the following:
{ "api": { "plugins": { "awsAPIPlugin": { "amplifyDatasource": { "endpointType": "GraphQL", "endpoint": "https://<YOUR-GRAPHQL-ENDPOINT>.appsync-api.us-west-2.amazonaws.com/graphql", "region": "us-west-2", "authorizationType": "API_KEY", "apiKey": "da2-abcdefghijklmnoprst" } } } } }
Step 4: Integrate into your app
a. Add the following imports to the top of your
AppDelegate.swift file
import Amplify import AmplifyPlugins
b. Add the follow code to your AppDelegate’s
application:didFinishLaunchingWithOptions method
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { let apiPlugin = AWSAPIPlugin(modelRegistration: AmplifyModels()) do { try Amplify.add(plugin: apiPlugin) try Amplify.configure() print("Amplify initialized") } catch { print("Failed to configure Amplify \(error)") } return true }
c. Add data to your backend using the following mutate method:
func apiMutate() { let note = Note(content: "content") Amplify.API.mutate(of: note, type: .create) { (event) in switch event { case .completed(let result): switch result { case .success(let note): print("API Mutate successful, created note: \(note)") case .failure(let error): print("Completed with error: \(error.errorDescription)") } case .failed(let error): print("Failed with error \(error.errorDescription)") default: print("Unexpected event") } } }
d. Query the results from your API using the query method by passing in
note.id from the previous call:
func apiQuery(id: String) { Amplify.API.query(from: Note.self, byId: id) { (event) in switch event { case .completed(let result): switch result { case .success(let note): guard let note = note else { print("API Query completed but missing note") return } print("API Query successful, got note: \(note)") case .failure(let error): print("Completed with error: \(error.errorDescription)") } case .failed(let error): print("Failed with error \(error.errorDescription)") default: print("Unexpected event") } } }
e. Set up subscriptions to listen to realtime updates:
func createSubscription() { let subscriptionOperation = Amplify.API.subscribe(from: Note.self, type: .onCreate) { (event) in switch event { case .inProcess(let subscriptionEvent): switch subscriptionEvent { case .connection(let subscriptionConnectionState): print("Subsription connect state is \(subscriptionConnectionState)") case .data(let result): switch result { case .success(let todo): print("Successfully got note from subscription: \(todo)") case .failure(let error): print("Got failed result with \(error.errorDescription)") } } case .completed: print("Subscription has been closed") case .failed(let error): print("Got failed result with \(error.errorDescription)") default: print("Should never happen") } } }
Call the
amplifyconfiguration.json file in your code.
{ "UserAgent": "aws-amplify-cli/2.0", "Version": "1.0", "storage": { "plugins": { "awsS3StoragePlugin": { "bucket": "my-s3-bucket", "region": "us-west-2", "defaultAccessLevel": "guest" } } }, "analytics": { "plugins": { "awsPinpointAnalyticsPlugin": { "pinpointAnalytics": { "appId": "xxxx123xxxx23423bf24234", "region": "us-east-1" }, "pinpointTargeting": { "region": "us-east-1", } } } }, "api": { "plugins": { "awsAPIPlugin": { "uniqueApiname123": { "endpoint": "", "region": "us-east-1" "authorizationType": "AWS_IAM", "endpointType": "REST" }, "graphqlEndpoint123UserPools": { "endpoint": "", "region": "us-east-1", "authorizationType": "AMAZON_COGNITO_USER_POOLS", "endpointType": "GraphQL" }, "graphqlEndpoint234APIKEy": { "endpoint": "", "region": "us-east-1", "authorizationType": "API_KEY", "apiKey": "apikey12sudksjdfnskjd", "endpointType": "GraphQL" }, "graphqlEndpoint345IAM": { "endpoint": "", "region": "us-east-1", "authorizationType": "AWS_IAM", "endpointType": "GraphQL" } } } }, "predictions":{ "plugins": { "awsPredictionsPlugin": { "identify": { "collectionId": "TestCollection", "region": "us-east-1", "maxEntities": 50 }, "convert": { "voiceId": "Ivy", "region": "us-east-1" } } } } }
In the configuration above, you would need to set the appropriate values such as
Region,
Bucket, etc. | https://aws-amplify.github.io/docs/ios/start | CC-MAIN-2022-33 | en | refinedweb |
For an overview of the ToolValidator class and use of parameter methods, see Customizing script tool behavior.
Parameter object
Accessing the tool parameters
Every tool parameter has an associated parameter object with properties and methods that are useful in tool validation. Parameters are contained in a Python list. The standard practice is to create the list of parameters in the ToolValidator class __init__ method, as shown in the code below.
def __init__(self): import arcpy self.params = arcpy.GetParameterInfo()
You can also access parameters in your script (as opposed to the ToolValidator class) as shown below. The only reason to access the parameter list within a script is to set the symbology property.
import arcpy params = arcpy.GetParameterInfo()
Learn more about setting symbology in scripts
Order of parameters
A tool's parameters and their order are defined in the Parameters tab of the tool's properties, as illustrated below.
MethodsParameter object methods
PropertiesParameter object.
def updateParameters(self): # Set the default distance threshold to 1/100 of the larger of the width # or height of the extent of the input features. Do not set if there is no # input dataset yet, or the user has set a specific distance (Altered is true). # if self.params[0].value: if not self.params[6].altered: extent = arcpy.Describe(self.params[0].value).extent if extent.width > extent.height: self.params[6].value = extent.width / 100 else: self.params[6].value = extent.height / 100 return
A parameter's value property returns an object, unless the parameter isn't populated, in which case the value returns None. To safeguard against a parameter not being populated, use)
altered
altered is true if the user changed the value of a parameter—by entering an output path, for example. Once a parameter has been altered, it remains altered until the user empties (blanks out) the value, in which case it returns to being unaltered. Programmatically changing a value with validation code does not change the altered state. That is, if you set a value for a parameter, the altered state of the parameter does not change.). # import string if self.params[0].value and not self.params[0].hasBeenValidated: if not self.params[6].altered: extent = arcpy.Describe(self.params[0].value).extent width = extent.width height = extent.height if width > height: self.params[6].value = width / 100 else: self.params[6].value = height / 100
category
You can put parameters in different categories to minimize the size of the tool dialog box. The) with an output parameter.
params[2].symbology = "E:/tools/extraction/ToolData/ClassByDist.lyr"
Learn more about output symbology
schema object
Every output parameter of type feature class, table, raster, or workspace has a schema object. Only output feature classes, tables, rasters, and workspaces have a schema—other types do not. The schema object is created for you by geoprocessing. You access this schema through the parameter object and set the rules for describing the output of your tool. After you set the schema rules,() is called. You set up the static rules (rules that don't change based on user input) for describing the output. No output description is created at this time, since the user hasn't specified values for any of the parameters (unless you've provided default values).
- Once the user interacts with the tool dialog box in any way, updateParameters() is called.
- updateParameters() can modify the schema object to account for dynamic behavior that can't be determined from the parameter dependencies (such as adding a new field like Add Field).
- After returning from updateParameters(), the internal validation routines are called and the rules found in the schema object are applied to update the description of the output data.
- updateMessages() is then called. You can examine the warning and error messages that internal validation may have created and modify them or add your own custom warning and error messages.
All schema properties are read and write except for type, which is read-only.Schema object properties
Using FirstDependency
Several of the rules can be set to "FirstDependency", which means to use the value of the first parameter found in parameter dependency array set with parameter.parameterDependencies. In the code example below, parameter 2 has two dependent parameters, 0 and 1, and the first dependency is parameter 0.
# Set the dependencies for the output and its schema properties # self.params[2].parameterDependencies = [0, 1]
If any dependent parameter is a multivalue (a list of values), the first value in the multivalue list is used.
type
The type property is read-only and is set by geoprocessing.
clone
If true, you are instructing geoprocessing to make an exact copy (clone) of the description in the first dependent parameter. The default value is false. Typically, you set clone to true in the initializeParameters() method. If the first dependent parameter is a multivalue (a list of values), the first value in the multivalue list is cloned.
- If parameter.parameterType is "Derived", an exact copy is made. This is the behavior of the Add Field tool.
- If parameter.parameterType is "Required", an exact copy is also made, but the catalog path to the dataset is changed. Catalog paths consist of two parts: the workspace and the base name. For example:
E:/Data/TestData/netcity.gdb/infrastructure/roads
- Workspace = E:/Data/TestData/netcity.gdb/infrastructure
- Base name = roads
- The base name is the same as the base name of the first input parameter containing a dataset (not the first dependency but the first parameter) appended with the name of the script tool (for example, roads_MyTool).
- The workspace is set to the scratch workspace environment setting. If this is empty, the current workspace environment setting is used. If this is empty, the workspace of the first input parameter containing a dataset is used. If this workspace is read-only, then the system temp directory is used.
After setting clone to true, all rule-based methods, such as featureTypeRule, geometryTypeRule, and extentRule, are set to "FirstDependency".
The two code examples below do the equivalent work. Both examples are based on how the Clip tool creates the output schema.
Example 1: Explicitly setting all rules
Example 2: using clone to set rules to FirstDependency, then overriding the extent rule:
def].schema.extentRule = "Intersection" return
featureTypeRule
This setting determines the feature type of the output feature class. This rule has no effect on output rasters or tables.featureTypeRule values
featureType
When the featureTypeRule is "AsSpecified", the value in featureType is used to specify the feature type of the output.featureType values
geometryTypeRule
This setting determines the geometry type (such as point or polygon) of the output feature class.geometryTypeRule values
geometryType
Set this to the geometry type to use (either "Point", "Multipoint", "Polyline", or "Polygon") when geometryTypeRule is "AsSpecified".
extentRuleextentRule values
Example
# The extent of the output is the intersection of the input features # and the clip features (the dependent parameters) # self.params[2].schema.extentRule = "Intersection"
extent
Set this to the extent to use when extentRule is "AsSpecified". You can either set the extent with a space-delimited string or a Python list object with four values. The sequence is xmin, ymin, xmax, ymax.
Example
self.params[2].schema.extentRule = "AsSpecified" self.params[2].schema.extent = "123.32 435.8 987.3 567.9"
or using a Python list
xmin = 123.32 ymin = 435.8 xmax = 987.3 ext = [xmin, ymin, xmax, 567.9] self.params[2].schema.extent = ext
fieldsRule
fieldsRule determines what fields will exist on the output feature class or table.
In the table below, FID stands for Feature ID but actually refers to the ObjectID field found on every feature class or table.fieldsRule values
Example of Clip using fieldsRule of "FirstDependency"
additionalFields
Besides the fields that are added by the application of the fieldsRule, you can add additional fields to the output. additionalFields takes a Python list of field objects.
View example of using AdditionalFields
cellSizeRule
This determines the cellsize of output rasters or grids.cellSizeRule values
cellSize
Set this to the cellsize to use when cellSizeRule is "AsSpecified".
rasterRule
This determines the data type—integer or float—contained in the output raster.rasterRule values
rasterFormatRule
This determines the output raster format, either "Grid" or "Img". The default is "Img", which is ERDAS IMAGINE format. "Grid" is Esri's format.
Learn more about raster data formats
additionalChildren
A workspace is a container for datasets (features, tables, and rasters). These datasets are children of the workspace (think of the workspace as the parent). If your tool adds datasets to a new or existing workspace, you can update the description of the workspace by adding descriptions of the children. For example, you may have a tool that takes a list of feature classes (a multivalue), modifies them in some way, then writes the modified feature classes to an existing workspace. When the tool is used in ModelBuilder, the workspace is the derived output of the tool, and you may want to use this workspace as input to the Select Data tool. Select Data allows you to select a child dataset found in a container and use it as input to another tool.
The input to additionalChildren is one or more descriptions of the children. There are two forms of child descriptions:Member lists for additionalChildren
When adding more than one child, you provide a list of child descriptions. If you're adding the children using the Python list object form, you'll create a list of lists for additionalChildren.
The Python list form has five arguments, as described in the following table.Contents of the child list
These arguments must be supplied in the order shown. To skip over an optional argument, use the Python keyword None, or "#".
Following are some examples of setting a workspace schema. The examples are based on a script tool that has the following arguments:Example tool parameters
The tool takes the input feature class and table, copies both to the workspace, adds a new field to the feature class, then creates a new polygon feature class in the workspace. (The actual work of the tool isn't important as it only serves to illustrate setting a workspace schema.) The code examples below build on one another, starting with simple usage of additionalChildren. If you choose to implement and test some of the code examples below, you can test the code using the model illustrated below.
In): import arcpy.
# Create a field object with the name "Category" and type "Long" # newField = arcpy.Field() newField.name = "Category" newField.type = "Long" # Describe the input feature class in order to get its list of fields. The 9.3 # version of the geoprocessing object returns fields in a Python list, unlike # previous versions, which returned fields in an enumerator. # desc = arcpy.Describe(inFC) fieldList = desc.fields # Add the new field to the list # fieldList.append(newField) # Create a new child based on the input feature class, but with the # additional field added # newChild = [desc.shapeType, desc.catalogPath, fieldList, desc.extent, desc.spatialReference] # Now create our list of children and add to the schema # children = [] children.append(newChild) children.append(inTable) children.append(["polygon", "SummaryPolygon"]) self.params[3].schema.additionalChildren = children
To create fields for SummaryPolygon (the new polygon feature class), create a list of field objects similar to the pattern shown in the above example.
Example: Multivalue input
In this example, the first parameter is a multivalue of feature classes. Each feature class in the multivalue is copied to the derived workspace. A new field, "ProjectID", is added to each of the copied feature classes.
# 0 - input features (multivalue) # 1 - input workspace # 2 - derived workspace class ToolValidator: def __init__(self): import arcpy the MultiValue table children = [] newField = arcpy.Field() newField.name = "ProjectID" newField.type = "Long" for row in range(0, rowCount): value = inVT.getValue(row, 0) if value: d = arcpy.Describe(value) fieldList = d.fields # Note -- not checking if field already exists # fieldList.append(newField) # Create new child with additional ProjectID field and # add child to list of children # child = [d.shapeType, d.catalogPath, fieldList] children.append(child):Filter type and values
Propertiesfilter, the: desc = arcpy.Describe(self.params[0].value) feature_type = desc.shapeType.lower() if feature_type == "polygon": self.params[1].filter.list = ["point", "multipoint"] elif feature_type == "polyline": self.params[1].filter.list = ["polygon"] elif feature_type == "point" or \ feature_type == .Field filter aliases and not self.params[1].altered: self.params[1].value = "" desc = arcpy.Describe(self.params[0].value) fields = desc.fields # Set default to the first field that matches our filter # for field in fields: fType = field.type.lower() if fType in ["smallinteger", "integer", "single", "double"]: self.params[1].value = field.name break return
Workspace
The workspace filter specifies the types of input workspaces that are permissible. There are three values: | https://desktop.arcgis.com/en/arcmap/10.3/analyze/creating-tools/programming-a-toolvalidator-class.htm | CC-MAIN-2022-33 | en | refinedweb |
Unity 2021.2 alpha
次のフルリリースで追加される機能に先行体験ができます。Download Unity 2021.2a
アルファ版でできること
アルファ版は、ベータ版を待つよりも早く次のバージョンでリリースされる機能にアクセスできます。ベータ版と同様にバグレポートを提出し、フィードバックを行うことで、開発プロセスに影響を与えることができます。
継続的なアップデート
このバージョン向けに予定されている主要なコンテンツがすべて搭載され、ベータ版へ移行するまで、週 1 回、アルファ版向けのイテレーションと機能追加が繰り返されます。
知っておくべきこと
アルファ版は完成品ではなく、バグが存在する可能性があることを念頭に置いてお使いください。アルファ版でプロジェクトを開く前に、必ずプロジェクトをバックアップしてください。この注意点は、そのとき使っているアルファ版から新しいアルファ版に移行するときも同様です。
Beta tester news
Sign up with your email here to get the latest beta tester news, including via email and social media.
Unity 2021.2.0b16 リリースノート
Known Issues in 2021.2.0b16
Android: * Devices might wake up from sleep when in split screen mode.
- Chrome OS devices that support tablet mode might not pause apps when they are no longer visible.
- Some Android devices may experience delayed resolution updates after resizing a window.
- Minimum window size might not be respected properly on all Android devices.
Asset Bundles: Building process of the AssetBundles is slow when the file count is huge (1358059)
Asset Importers: Instantiated FBX through code throws error after leaving Play Mode (1363573)
CodeEditors: Crash on stopping debugging (1355156)
Global Illumination: Crash while sculpting Terrain and Baking Lightmaps (1266511)
Global Illumination: [GPU PLM] Fallback to CPU PLM in CL_INVALID_MEM_OBJECT after switching light color only and rebaking GI (1356714)
HD RP: HDRP Template fills the Console with "Shader error...couldn't open include file" messages after building the project (1342989)
Input: Touch Input doesn't work in Play Mode when running an Editor on a Touchscreen device (1341159)
Metal: Game View glitches with Apple Silicon Editor (1368374)
Metal: [OSX][Metal] Editor randomly crashes in MetalWindow::EndRendering when Player Settings window is repainted/opened (1371276)
Mobile: [Android] Build fails when there are 680 or more files in the Streaming Assets folder (1272592)
Mono: .NETStandard 2.1 in the editor is missing System.Memory, System.Buffers at runtime (1367105)
Packman: User can't easily configure location of both UPM and Asset Store package local cache (1317232)
Profiling: GUIStyle errors are thrown when entering Play mode with docked Profiler and the "Maximize On Play" option Enabled (1364443)
Profiling: Profiler's timeline view loses context frames when frames go out of Frame Count bounds (1367470)
Quality of Life: Scrolling is jumping when scrolling in the Input Manager (1362327)
Scripting: Error CS8035 is thrown on opening a project when using rulesets (1349517)
Scripting: Increased Script Assembly reload time (1323490)
Video: Crash on WindowsVideoMedia::StepAllStreams when reimporting a .m4v file (1340340)
Vulkan: [Editor] The Scene's GameObjects textures are seemingly random and change colours depending on the Scene's Camera pos. (1337772)
WebGL: WebGL fails building on Windows 7 (1340260)
XR: [Linux] Scene View doesn't render when opening new AR or VR Template project or pressing "Show Tutorials" (1362435)
New 2021.2.0b16 Entries since 2021.2.0b15
Features
- Mono: Enabled Brotli compression for Windows with the Mono runtime.
Improvements
Graphics: Texture postprocessor re-imports are now split per texture type, reducing amount of re-imports when changing a texture postprocessor script.
IL2CPP: Improved the performance of invoking delegates
IL2CPP: Switch IL2CPP densehash map and set to sparsehash map and set for lower runtime memory usage.
Package: Update Addressables to 1.19.6
Scripting: Many search UI and indexing improvements
API Changes
- Shadergraph: Added: Adding control of anisotropic settings on inline Sampler state nodes in ShaderGraph.
Changes
Package: Released Localization package 1.0.3
Package: Update Sequences to 1.0.3.
Terrain: Updates the version of Terrain Tools included the Package Manager to 4.0.3. (Previously 4.0.0-pre.2)
URP: URP will no longer render via an intermediate texture unless actively required by a Renderer Feature. See the upgrade guide for compatibility options and how assets are upgraded.
Fixes
Animation: Fixed animation clip name to not be removed when clip asset is overwritten (1355739)
First seen in 2021.2.0a18.
Asset Pipeline: Fixed issue with asset reference getting lost, if asset is modified and domain reload is done in the same refresh. (1357812)
Documentation: Fixed incorrect measurement units for ArticulationDrive.forceLimit. (1369825)
This has already been backported to older releases and will not be mentioned in final notes.
DX12: Fixed flickering issue on mesh particles. (1357667)
Editor: Fix cursor hide in Linux playmode. (1350956)
Editor: Fix that Avatar Stage editing closes on clicking anywhere in the Scene view or Hierarchy when using 2 Inspector windows. (1330120)
Editor: Fix that selecting "Name" or "Type" in Hierarchy search bar dropdown does not change the search filter. (1367891)
First seen in 2021.2.0.
Editor: Fixed a regression in where users could no longer assign a Render Texture to the light cookie widget in the UI. (1355504)
Editor: Fixed an edge case where removing and re-adding a sub asset would cause the local file id of the object to change unnecessarily. (1323357)
Editor: If Adb is not able to make the file editable, we make it writable using OS function (1353760)
This has already been backported to older releases and will not be mentioned in final notes.
Editor: Prefab object selection performance issue resolved. (1352527)
GI: Fix issue with GPU Lightmapper falling back to CPU Lightmapper upon a rebake. (1356714)
First seen in 2021.2.0a19.
GI: Fix missing indirect lighting when using Enlighten Realtime GI in HDRP Player (1367133)
GI: Prevents the GPULM from falling back to CPULM when toggling lights during a bake. (1343313)
First seen in 2021.2.0.
Graphics: Fix to RenderTexture.format not returning correct values in the case of RenderTextureFormat.Depth and RenderTextureFormat.Shadow (1365548) critical issue on android devices & lens flares. Accidentally creating a 16 bit texture was causing gpus not supporting them to fail.
Graphics: Fixed another recently added internal bug where when the shader debug level in Switch player editor settings is changed, the shaders were not corectly rebuilt.
Graphics: Fixed black pixel issue in AMD FidelityFX RCAS implementation
Graphics: Remove URP and HDRP templates. They are now dynamic templates
HDRP: Fixed a warning to Rendering Debugger Runtime UI when debug shaders are stripped.
HDRP: Fixed ambient occlusion strenght incorrectly using GTAOMultiBounce
HDRP: Fixed corruption in player with lightmap uv when Optimize Mesh Data is enabled (1357902)
HDRP: Fixed lens flare occlusion issues with TAA.
HDRP: Fixed misleading text and improving the eye scene material samples. (1368665)
HDRP: Fixed missing DisallowMultipleComponent annotations in HDAdditionalReflectionData and HDAdditionalLightData. (1365879)
HDRP: Fixed Probe volume debug exposure compensation to match the Lighting debug one.
HDRP: Fixed support for light/shadow dimmers (volumetric or not) in path tracing.
HDRP: Fixed the LensFlare flicker with TAA on SceneView. (1356734)
HDRP: Fixed the volume not being assigned on some scene templates.
HDRP: MaterialReimporter.ReimportAllMaterials and MaterialReimporter.ReimportAllHDShaderGraphs now batch the asset database changes to improve performance.
IL2CPP: Allow the debugger to grow the frame capacity on-demand. (1360149)
IL2CPP: Correct BinaryFormatter serialization of a type with a field of type nullable struct where the struct has fields of type float and bool. (1361559)
First seen in 2021.2.0a18.
IL2CPP: Fix conversion issues on methods with ref readonly return values. (1367462)
This has already been backported to older releases and will not be mentioned in final notes.
IL2CPP: Fix parsing of --custom-step command line argument to UnityLinker (1351726)
IL2CPP: Fixed "Unexpected generic parameter." exception when a generic method had a function pointer parameter (1364482)
IL2CPP: Fixed conversion error that can occur with generic types that have a static constructor (1362583)
This has already been backported to older releases and will not be mentioned in final notes.
IL2CPP: Hash parameter info and generic arguments to avoid long method names that lead to compiler errors. (1362768)
This has already been backported to older releases and will not be mentioned in final notes.
IL2CPP: Prevent a crash in the player when deeply nested generics are used to create a value type object. The runtime will now cause a managed exception instead. (1361232)
First seen in 2021.2.0.
IL2CPP: Prevent a possible crash in the GC code when the mark stack overflows while script debugging is enabled if many threads are created. (1361799)
This has already been backported to older releases and will not be mentioned in final notes.
IL2CPP: Prevent an intermittent crash from happening during thread detach when many threads are calling reverse p/invoke wrappers at the same time. (1358863)
This has already been backported to older releases and will not be mentioned in final notes.
IL2CPP: UnityLinker will now respect --unity-root-strategy if defined on the command line (1351728)
Linux: Fixed main menu disappearing after certain layout change events. (1362449)
macOS: Dock is no longer ignored when exiting fullscreen and moving the window (1354879)
macOS: Fixed Unity shader compiler crashing on macOS Monterey when using Apple silicon editor. (1361979)
First seen in 2021.2.0.
macOS: Fixes inverted Y position of mouse cursor using New Input's Warp mouse (1311064)
This has already been backported to older releases and will not be mentioned in final notes.
macOS: Fixes Xbox wireless gamepad triggers and DPAD not working in Old Input (1342338)
This has already been backported to older releases and will not be mentioned in final notes.
macOS: Force to use GPU Lightmapper instead of CPU Lightmapper on Apple silicon (1341489)
Mono: Add missing facade dlls for Unity profiles (1367105)
Mono: Fix FileSystemWatcher support on IL2CPP. (1344045)
First seen in 2021.2.0.
Mono: Fix missing .NET Standard 2.1 assemblies (System.Memory, System.Buffers...) (1367105)
Mono: Fixed issue where not all gc handles were being released on domain unload. (1349827)
First seen in 2021.2.0a18.
Mono: Fixed issue where the timeout of a HttpClient handler was not being used for requests. (1365107)
Mono: Reenable COM in classlibs for win32 unityjit and unityaot profiles (1358705)
First seen in 2021.2.0b7.
Mono: Remove System.Runtime.CompilerServices.Unsafe.dll from unity profiles. (1360423)
First seen in 2021.2.0b7.
Package: Released com.unity.mathematics 1.2.4
Package Manager: Fixed the issue where sync code is not unregistered when the Package Manager window is closed. (1368318)
Particles: Fixed Texture Alpha clipping in the Shape module. (1349714)
This has already been backported to older releases and will not be mentioned in final notes.
Physics: Fixed a crash when accessing RaycastHit.lightmapCoord of a hit agains a Mesh that does not have texture channel 1 (1361884)
Physics: Fixed an issue where modifying the "Rigidbody2D.position" doesn't temporarily stop interpolation when called during the FixedUpdate callback. (1367721)
This has already been backported to older releases and will not be mentioned in final notes.
Profiler: Fixed Profiler.GetTotalAllocatedMemoryLong reporting increasing values while loading and unloading the same scene. (1364643)
This has already been backported to older releases and will not be mentioned in final notes.
Scene/Game View: Fix for translation tools offsetting object when cursor moved off-screen (1360113)
Search: Fixed search index should discard long serialized property string (i.e. embedded JSON string). (1362623)
Serialization: Fix SerializeReference object missing in certain situation.
Serialization: Missing types from managed referenced object are properly stripped when creating a player build.
Shadergraph: Fixed bug where an exception was thrown on undo operation after adding properties to a category (1348910)
Shadergraph: Fixed unhandled exception when loading a subgraph with duplicate slots. (1366200)
uGUI: Missing shader property warnings will no longer be produced when running in batchmode. (1350059)
This has already been backported to older releases and will not be mentioned in final notes.
UI Toolkit: Ensures that only modified files are saved to disk (1355591)
UI Toolkit: Fix VisualElement doesn't render instantly after setting display property to flex through C# (1359661)
UI Toolkit: Fixed a logic error when deciding whether styles should be updated when the pseudo states change. (1348866)
UI Toolkit: Fixed an issue causing ListView's reordering to stop working after docking its parent window to a new pane. (1345142)
Undo System: Improved performance when overwriting the redo stack
Universal: Fixed an issue that caused shader compilation error when building to Android and using GLES2 API. (1343061)
Universal: Fixed an issue that caused shader compilation error when switching to WebGL 1 target. (1343443)
Universal Windows Platform: Fixed symbol file packaging failing when using the 'MasterWithLTCG' build configuration. (1345403)
This has already been backported to older releases and will not be mentioned in final notes.
URP: Fixed a Universal Targets in ShaderGraph not rendering correctly in game view
URP: MaterialReimporter.ReimportAllMaterials and MaterialReimporter.ReimportAllHDShaderGraphs now batch the asset database changes to improve performance.
VFX Graph: Rename "Material Offset" to "Sorting Priority" in output render state settings (1365257)
WebGL: Bug fix for URP scene being rendered incorrectly with the new texture subtarget options in the build settings (1343208)
WebGL: Fixed a regression with building WebGL on Windows 7. (1340260)
Windows: Added a way to forward raw input messages to Unity with the UnityEngine.Windows.Input.ForwardRawInput API. (1368835)
First seen in 2021.2.0b9.
Windows: Fixed old input system missing some mouse movement events when new input system is enabled. (1368808)
First seen in 2021.2.0b9.
Windows: Fixed player icon being missing from the title after if the game was first launched in fullscreen mode and then later changed to windowed mode. (1361016)
This has already been backported to older releases and will not be mentioned in final notes.
Windows: Physics simulations and FixedUpdate no longer run while the splashscreen is being displayed on Windows Standlaone and Universal Windows Platform. (1362362)
This has already been backported to older releases and will not be mentioned in final notes.
XR: Fix single-pass stereo state after shadow map rendering (1335518)
XR: Fix soft particles shaders for XR single-pass (1332105)
This has already been backported to older releases and will not be mentioned in final notes.
Preview of Final 2021.2.0b16 Release Notes
Features
2D: Added a new template to allow users to start a new project with 2D Renderer set up.
2D: Added support for users to create and define scriptable tools for the Tile Palette window.
Android: Added a default texture compression format option to Player Settings.
Android: Added support for building and running Android apps on Chrome OS devices with x86 and x86-64 CPUs.
Android: Added support on Android for split-screen, pop-up and freeform windows.
Android: Changed so users can now include custom asset packs into the build by adding assets to the directory ending with '.androidpack'.
Android: When building Android App Bundle with Split App Binary enabled, Unity will create asset packs.
Asset Import: Added an artifact file dependency system to the AssetImportContext. For more information, see AssetImportContext.GetArtifactFilePath and AssetImportContext.GetOutputArtifactFilePath.
Asset Import: Updated Alembic package to 2.2.0-pre.3.
Asset Import: Updated com.unity.formats.alembic to 2.2.0-pre.3.
Asset Import: Updated com.unity.formats.alembic to 2.2.0-pre.4.
Asset Pipeline: Added ArtifactDifference Reporting messages to the Artifact Browser.
Asset Pipeline: Added Accelerator authentication support using Unity ID.
Asset Pipeline: Added Import Activity Window. This allows you to look at import times, previous revisions, list dependencies, see importers used and import duration over time.
Asset Pipeline: Support for importing models and textures in parallel in external processes (off by default, enable in Project Settings -> Editor -> Refresh.
Burst: Added a new optimization mode,
Balanced. This is now the default optimization mode, and trades off slightly lower maximum performance for much faster compile times.
Burst: Added an
OptimizeForoption to
[BurstCompile], allowing users to specify whether they want to prioritize runtime speed, runtime size, or compilation time.
Burst: Added Embedded Linux as a new target platform for Burst.
Burst: Added Embedded Linux toolchain resolution mechanism.
Burst: Added experimental half precision floating point type f16.
Burst: Added experimental support for Arm Neon intrinsics half precision floating point.
Burst: Added experimental support to the Burst compiler package for some ArmV8.2A intrinsics (dotprod, crypto, RDMA).
Burst: Added source location metadata into hash cache.
Burst: Added support for basic vld1 ARM Neon intrinsics.
Burst: Added support for calling Burst code directly from C# without using function pointers.
Burst: Added support for creating profiler markers from Burst code.
Burst: Added support for having
[return: MarshalAs(UnmanagedType.U1)]or
[return: MarshalAs(UnmanagedType.I1)]on external functions with a
boolreturn type.
Burst: Added support for new intrinsics.
Burst: Added support for the C# 8.0 construct
default(T) is null
Burst: Added support for the latest LLVM 11 code generation benefits to the Burst compiler package.
Burst: Added the option to call BurstCompiler.CompileFunctionPointer() in Burst code.
Burst: Added warnings about delegates being used by
BurstCompiler.CompileFunctionPointerthat are not decorated as expected. In most cases, Burst will automatically add the C-declaration attribute in IL Post Processing, but if the usage of CompileFunctionPointer is abstracted away behind an open generic implementation, then Burst will not be able to automatically correct the delegate declaration, and thus this warning will be displayed.
Burst: Added workarounds for issues where Windows Native Debuggers restrict the number of messages and DLLs that can be sent to the debugger when attaching.
Burst: Apple silicon support.
Burst: Assemblies are now allowed to have an
[assembly: BurstCompile()]attribute which allows you to specify compile options that apply assembly wide. For example,
[assembly: BurstCompile(OptimizeFor = OptimizeFor.FastCompilation)].
Burst: Burst 1.6 release cycle is for 2021.2.
Burst: Exceptions thrown from Burst can now contain a callstack.
Burst: Made it possible to get a pointer to UTF-8 encoded string literal data in HPC# code via StringLiteral.UTF8().
Burst: The Burst compiler package now fully supports ARMv7 and ARMv8-A Neon intrinsics.
Burst: To support modding, added support for loading additional burst compiled libraries in Play mode and standalone builds.
Burst: Unity now automatically adds the
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]attribute to any delegates that are used for
BurstCompiler.CompileFunctionPointer<>(). An error occurs if a delegate has the attribute but is not
Cdecl.
Core: Added Screen.mainWindowPosition, Screen.mainWindowDisplayInfo, Screen.GetDisplayLayout() and Screen.MoveMainWindowTo() APIs. See Scripting API documentation for more information.
Editor: Added a Diagnostics section to the Preferences window to help with remote troubleshooting. You shouldn't interact with this section unless instructed to by Unity Support.
Editor: Added support for two new menu states, "disabled" and "checked", in Unity Search.
Graphics: Added Graphics Buffer Support in VFX Graph.
Graphics: Added a new tool to set the bounds of a VFX Graph system.
Graphics: Added anisotropic filtering to the built-in inline samplers.
Graphics: Added support for DX12 on Hololens with OpenXR.
Graphics: Added support of direct link event.
IL2CPP: Added new option for "IL2CPP Code Generation" which enables faster and smaller builds by using better sharing of generics code and additional runtime overhead.
Input System: Released 1.1 of the Input System package with fixes and improvements. Details at.
License: Introduced licensingIpc Editor CLI.
macOS: Added deep linking support.
Mobile: Adapted the scaler profiles so you can easily define and change Adaptive Performance Scalers with predefined profiles.
Mobile: Added a new adaptive view distance scaler to change the Camera.main view distance automatically.
Mobile: Added boost mode for Samsung devices. This increases CPU and GPU performance for short periods of time.
Mobile: Added Boost mode to boost the CPU and GPU for short periods of time.
Mobile: Added predefined scalar profiles that you can use to easily define and change Adaptive Performance scalers.
Mobile: Added Startup Boost mode which enables boost mode during engine startup.
Mobile: Added the ability to enable boost mode during engine startup. This increases CPU and GPU performance for a short period of time as the application starts.
Mobile: Added the ability to request information about which and how many cores are available on the device.
Mobile: Added the Adaptive Performance feature API. This checks which Adaptive Performance features are available on the current platform.
Mobile: Added the Adaptive view distance scaler. This scaler changes the Camera.main view distance automatically.
Mobile: Cluster info - Request cluster info to have details which and how many cores are available on the device.
Mobile: Feature API - Check which Adaptive Performance feature is available on the current platform.
Mobile: Integrated Adaptive Performance into the Unity Profiler.
Mobile: Integrated the Profiler so you can Profile Adaptive Performance easily from the Unity Profiler.
Mono: Enabled Brotli compression for Windows with the Mono runtime.
Mono: Upgraded to a recent version of Mono (~6.12), which brings most bug fixes from the upstream Mono project.
Package: (Recorder) Added support for recording accumulation, for motion blur and path tracer (HDRP).
Package: (Recorder) Integrated the AOV Recorder into the Recorder package.
Package: Added com.unity.live-capture 1.0.0 package for capturing and recording camera motion, and face performances.
Package: Added com.unity.profiling.core 1.0.0-pre.1 package. This package introduces the Unity Profiler markup API with metadata support and the new counters API. For more information, see.
Package: Added com.unity.sequences 1.0.0-pre.5 package.
Package: Added support for macOS arm64.
Package: Added Visual Scripting, which was previously known as Bolt, as a default package.
Package: Released Polybrush version 1.1.2.
Package: Updated Polybrush to version 1.1.0-pre.1.
Package Manager: Added new UI support for features in the Package Manager window. Added an initial list of features.
Package Manager: Added option in the popup window, when Install for a Full Project Asset is clicked, to install the Asset into a new, temporary project.
Package Manager: Added the ability to install a package from a browser hyperlink, including experimental packages.
Package Manager: Added UI support for feature sets in the Package Manager window:
- Added a Lock/Unlock mechanism on packages that are part of a feature set.
- Reset a feature set dependencies to their default versions when the feature set was customized.
- Added a warning message if a feature set dependencies are already installed with a different version before installing it.
- Added a visual cue on feature sets when the dependency versions change.
- Added feature set information to the Inspector.
- Added analytics on feature sets.
Package Manager: Changed account menu in the top bar of the editor to show your initials instead of full name.
Package Manager: Changed the Package Manager window so that when users choose to continue from the UPM dialog warning that shows an entitlement error, then launch the Editor, the Package Manager window immediately opens to the first package with an entitlement error.
Package Manager: Swapped the advanced settings panel and the scoped registries management panel in project settings.
Package Manager: The Package Manager now supports packages with entitlements (subscription-based licensing).
Particles: Added a mesh weighting field to the list of meshes in a Particle System component, to control how likely Unity is to assign each mesh to a particle.
Physics: Added a new CustomCollider2D that allows custom 2D physics shapes to be used with a fully customizable and featured 2D Collider.
Physics: Added an Enable All and Disable All button to the Physics Project Settings' Layer Collision Matrix, which allows enabling or disabling all layer collisions.).
Physics: Exposed a set of functions to enable you to modify the contact properties of a collision before the solver receives them.
Profiler: Added ProfilerModule API to extend the Profiler window with custom modules.
Profiler: Added the File Access Profiler and Asset Loading Profiler modules to the Unity Profiler.
Profiler: Released [email protected] which enabled advanced markup for Unity Profiler markers and counters. See more details at..
Scene/Game View: Added support for component tools to the EditorToolContext.
Scene/Game View: Introduced new Overlays feature. Tools and contextual views are now available directly in the Scene View, and are completely customizable.
Search: Added a new explicit provider to search performance metrics.
Search: Added new extended search picker workflow and API.
Search: Added search expression language to evaluate multiple search queries and apply set operations, transformation or other user defined operations.
Search: Added search table support to build advanced reports using complex search expressions.
Shaders: Added support for specifying package requirements for SubShaders and Passes to ShaderLab..
Terrain: Added a new instancing mode for Terrain details, which uses the material specified on the prototype prefab to render detail objects with instanced draws.
Terrain: Added the Terrain Tools package 4.0.0-pre.2 to the pre-release set.
UI: Added preference to enable or disable extended Dynamic Hints.
UI Toolkit: Added a context action in the UI Builder Hierarchy to export an element to a UXML file.
UI Toolkit: Added a contextual action to unpack a template in a document in the UI Builder.
UI Toolkit: Added contextual actions to unpack a template completely in UI Builder.
UI Toolkit: Added runtime access to the PanelSettings object and the UIDocument component. Runtime UIToolkit rendering no longer requires the UIToolkit package.
UI Toolkit: Added support for negative transform scaling on x and y axes, enabling mirroring to be performed. When crossing zero, the geometry will be regenerated to flip the winding.
UI Toolkit: Added support for rendering antialiased vector shapes without MSAA in UI Toolkit.
UI Toolkit: Added support for up to 7 levels of stencil-based masking.
UI Toolkit: Added the DynamicColor usage hint that allows border/background/text color to change dynamically without having to regenerate the geometry.
UI Toolkit: Added transform-origin, rotate, scale and translate to the supported properties by UI Toolkit.
UI Toolkit: Added Transitions properties to the UI Toolkit.
UI Toolkit: Attribute overrides can be added, edited and removed in a template instance using UI Builder.
UI Toolkit: ListView now supports dynamic item height as a virtualization method. For more information, see ListView and CollectionVirtualizationMethod.
UI Toolkit: Updated the UIElementsGenerator tool to the latest version.
Version Control: Added auto sign in when logged into Unity account.
Version Control: Added workspace migration from Collab to Plastic which can be done with or without Plastic installed.
Added notification status icons.
Added light and dark mode versions of the avatar icon.
Version Control: Improvements to Plastic SCM:
-
WebGL: Added support for compressed audio in WebGL.
WebGL: Added the Debug Symbols player setting to create release builds with embedded function symbols for improved profiling and error stack traces.
WebGL: Added the texture subtarget build setting to WebGL.
WebGL: Enabled ETC/ETC2/ASTC/BC4/BC5/BC6/BC7 compressed texture formats for WebGL in editor, build and runtime.
XR: Added support for controller late latching, which can reduce latency between rendering and tracked input (head and hand-held controller) in XR. Can be used with the Mesh Renderer and the Skinned Mesh Renderer.
XR: Reduced render latency in URP with Late Latching.
XR: Released OpenXR version 1.0.0-pre.1.
Improvements
2D: Added folder support for SpriteAtlas V2 in 2D.
2D: Cache internal reflection to speed up Sprite editing data access.
2D: Improved performance for setting multiple Tiles on a Tilemap.
2D: Improved performance of RuleTile caching.
2D: Improved performance when importing large number of textures.
2D: Improved the placement of Tiles generated from Sprites with Textures sliced using the Isometric Slicing option in the Sprite Editor.
2D: Prereleased the SpriteShape and PSDImporter package
2D: Updated icons for the Tile Palette Rotate and Flip tools.
2D: Updated the 2D template to use the latest verified 2D packages.
2D: Updated the 2D URP template starting folder structure for better clarification of usage.
2D: Updated the Skinning Editor tooltips text.
AI: Added the
RasterizeModifierBoxprofiler marker for the NavMesh builder step that processes ModifierBox sources..
Android: Added boot-config/command-line switch
platform-android-cpucapacity-threshold. This specifies which CPU cores to treat as big cores. The cpu capacity is a value in the range between 0 and 1024. A capacity value of 870 yields the same behavior as before the fix for case 1349057.
Android: Added support for custom cursors to Android to support Player Settings and C# functions on Android version 7.0 and later.
Android: Allow low-level configuration of Unity threads (priority, affinity)
Android: Changed the device scanning operation of the Android Extension to be async when receiving an USB device changed event. (1349380)
Android: If a hardware keyboard is available, Unity now uses it within UI systems, instead of always bringing up a virtual, on-screen keyboard.
Android: Made a large part of the Android Build Pipeline incremental which means sequential builds with zero changes are now much faster. That also means Unity no longer creates builds from scratch, but instead updates the files which dependencies have changed. If you use the IPostGenerateGradleAndroidProject callback, note that it might be operating on files which were modified by IPostGenerateGradleAndroidProject from a previous build.
Android: Unity gradle projects now have a new entry in gradle.properties, unityTemplateVersion. Unity increments this property whenever Unity gradle template files change. That way if you build on top of the old folder and the unityTemplateVersion is different, Unity throws an error, saying that you need to update your gradle files or build to an empty folder.
Android: Updated Android Logcat package to 1.2.3.
Android: When generating manifest files, there are new files in Library\Bee\artifacts\Android\Manifest, LibraryManifestDiag.txt, LauncherManifestDiag.txt. They contain information about why a specific permission is added to manifest.
Asset Bundles: Added profile marker for CRC checks.
Asset Import: Documented the
MonoImporterclass.
Asset Import: Improved FBX model importing speed.
Asset Import: Improved import performance of ASCII FBX files.
Asset Import: Improved import performance of FBX files.
Asset Import: Improved import speed for FBX files that use the ASCII file format.
Asset Import: Improved import speed for model files containing more than 1 mesh.
Asset Import: Improved import speed of FBX models by skipping unused data.
Asset Import: Improved import speed of models that contain multiple meshes.
Asset Import: Improved import speed of Sketchup models.
Asset Import: Improved model import speed by multithreading mesh triangulation.
Asset Import: Improved model import times for models that contain animations.
Asset Import: Improved Model Importer material tab display performance. (1295743)
Asset Import: Improved model importing performance for files that contain lots of curves.
Asset Import: Improved the model import times slightly for models that contain animations.
Asset Import: Increased the speed of Asset Import when using mikktspace tangent generation on meshes containing degenerate triangles.
Asset Import: Optimized texture import mipmap calculation when Kaiser filtering is used (e.g. importing 16GB of textures with mostly Kaiser mip filters goes from 127 sec down to 109 sec).
Asset Import: Reduced peak memory usage during Model Importing.
Asset Import: Updated Sketchup SDK to version 2020.2.
Asset Pipeline: Added a new UI in the AssetImporter inspectors that display all AssetPostprocessors methods that were used in the last import of the selected asset.
Asset Pipeline: Added summary in the editor log about what happened during a refresh (import).
Asset Pipeline: Added warnings and an Automatic Fix button where main object names do not match the corresponding filename.
Asset Pipeline: Improved directory enumeration by multi-threading it.
Asset Pipeline: Improved project startup times. Projects with 900,000 files will load at least 30 seconds faster.
Asset Pipeline: Improved speed of Editor startup by fixing Asset handling related code.
Asset Pipeline: Improved startup performance for 900,000 file project by 18 seconds.
Asset Pipeline: Improved upload and download path
Asset Pipeline: Optimized the UnityEngine.Hash128.ToString method.
Asset Pipeline: The Asset Pipeline no longer displays a warning when it is not possible to move import worker log files.
Asset Pipeline: Unity prefetches Asset Databases to improve Editor startup time and reduce cost of page faults.
Audio: Added a new Attenuation transition type to AudioMixer so that it can perform equal power panning for group attenuations when transitioning between snapshots. (1322673)
Audio: Added voice priority display in the audio pane of profiler.
Audio: Added VU metering information from audio mixer groups in the audio pane of profiler to make it easier: Target mixer groups are now displayed in the audio profiler.
Bug Reporter: The Crash Handler window now includes the following UI changes:
- Stack trace field
- Go to crash logs button
- Report a Bug… button
- Open Unity Hub button
Note that clicking the Open Unity Hub button restarts the Editor.
Build Pipeline: "Scripts Only Build" is now automatic for platforms using the new incremental build pipeline. The checkbox is removed for such platforms, and Unity will automatically detect if it can do a Scripts Only Build based on which changes there are in the project.
Build Pipeline: Changed Linux so that it uses the incremental player build pipeline.
Build Pipeline: Improved linking speed for big projects
Build Pipeline: Improved the performance of the build pipeline by giving concurrent shader cache folders read access.
Build Pipeline: MacOS Standalone player builds now use the new incremental build pipeline, which allows faster subsequent player builds by only rebuilding parts which have changed.
Build Pipeline: Modified Windows Standalone player builds so that they only rebuild parts that have changed since the previous build to improve build speed.
Build Pipeline: Removed prompt to save an untitled scene if it is not included in the build.
Build Pipeline: WebGL now uses the incremental Player build pipeline.
Burst: Automatically add [UnmanagedFunctionPointer(CallingConvention.Cdecl)] to any delegates that are used for BurstCompiler.CompileFunctionPointer<>() or error if the delegate has the attribute and it is not Cdecl.
Burst: Improved codegen.
Core: Modified the Dynamic Heap Allocator to reduce the time it takes to intantiate chunks in a Build. (1272168)
Core: Reduced the number of memory alllocations and improved the tracking of core allocations.
Documentation: Added a pop-up to Obsolete API labels in the Script Reference, explaining why something is obsolete, and pointing to the new API where possible.
Documentation: Improved documentation for GeometryUtility.TestPlanesAABB to explain false positives.
DX12: *Binded dynamic lightmaps resources (Enlighten) to ray tracing shaders:
unity_DynamicLightmap,
unity_DynamicDirectionalityand
unity_DynamicLightmapST. *Enabled
DYNAMICLIGHTMAP_ONshader keyword for when these resources are used by Renderers.
DX12: Added the ability to set D3D12_RAYTRACING_GEOMETRY_FLAG_NO_DUPLICATE_ANYHIT_INVOCATION flag when adding Renderers to a RayTracingAccelerationStructure. This will allow implementation of robust colored ray traced shadows.
DX12: Optimized uniform setup in ray tracing shaders, and added a special case for the
UnityPerMaterialcbuffer.
Editor: Added a context menu to the Console window which has an option to use monospace font. (1276112)
Editor: Added ability to reorder UnityEvent callbacks.
Editor: Added missing USB device detection and reporting for Linux Editors..
Editor: Added new features for Menu Items in the context of Editor Modes.
Editor: Added Open in Property Editor menu item to the Inspector's kebab menu. (1334342)
Editor: Added profiler markers around test setup, teardown, and execution.
Editor: Added Texture import overrides to the Build Settings window so you can reduce imported texture size and change the compression settings to speed up asset imports and platform switches.
Editor: Added the Play Unfocused option to the Game view, to stop the Game view from focusing when entering Play mode. Also added an option to Edit > Preferences > General to enable or disable automatically creating a Game view on entering Play mode.
Editor: Added the ability to cut, copy, and paste Assets in ProjectWindow. (1264821)
Editor: Added the ability to search Settings by their properties.
Editor: Added the option to constrain scale proportions to the Transform component. You can set this option as default in Editor preferences.
Editor: Added the renderer type to the UpdateRendererBoundingVolumes profile marker tooltip.
Editor: Allowed the importing of LOD meshes with indices that have preceding zeroes. By specifying a range, e.g., LOD1-3 will assign the mesh to all LOD levels in the range.
Editor: Avoid stall entering playmode if a scene contains sequential GameObject file ID hints. (1308128)
Editor: Cached the translation results, reducing GC pressure.
Editor: Changed the popup menu behaviour to only trigger a
GUI.changedevent if it has changed.
Editor: If the assembly containing code that is stalling the editor is available, it's now displayed in the popup progress bar.
Editor: Improved Gizmos performance in Editor.
Editor: Improved import times of SketchUp models (*.skp).
Editor: Improved loading times for scenes with lots of GameObjects at the top level in the Hierarchy.
Editor: Improved mac editor process guard in order to catch all types of exceptions and early handle cases before shutting down.
Editor: Improved model importing performance.
Editor: Improved performance importing models with Blendshapes if the Import Blendshapes setting is unchecked.
Editor: Improved the Frame Debugger, so you can clear display color, depth, and stencil values. The compute shader displays the display shader name, keywords, and thread group size. Indirect draws display shader and property information. The Mesh preview now displays correctly on HDRP. Displays SRP Batcher draws with the names of meshes they render.
Editor: Improved the performance of the model importer by multi-threading the mesh triangulation step.
Editor: Improving UTF documentation (DSTR-120).
Editor: Increased speed of filtering operations when you only run a subset of tests.
Editor: Inspector number fields support more math expressions now: functions (
sqrt,
sin,
cos,
tan,
floor,
ceil,
round), distribution over multi-selection (
L,
R) and can refer to current value to change it across multi-selection (
+=3,
*=2).
Editor: Now shaders will have SHADER_API_(DESKTOP|MOBILE) define set according to the target build platform.
Editor: Optimized BC7 ("high quality" compression setting on PC/Console platforms) texture compression. Performance is 2-3 times faster. This optimization uses a new texture compressor (bc7e). An option is available in Editor Settings to switch to the old one (ISPC) if needed.
Editor: Optimized drag selection in Editor scenes.
Editor: Reduced the per-test overhead of running tests in the editor.
Editor: Reduced the time taken to rebuild the test tree and to scan for assets a test created but did not delete.
Editor: Removed the Enable Code Coverage option from Preferences/General, and moved it into the Code Coverage package.
Editor: Reorganized Quality Project Settings, to make it clearer which options are relevant to which parts of Unity. (1307483)
Editor: Texture import settings UI indicates which platforms have override settings, via a blue override line on the platform tabs.
Editor: The Frame Selected command now ignores Audio Source and Reverb Zone components.
Editor: UI polish of Build Settings window (improved logical sort of installed platforms).
Editor: Updated ASTC texture compressor to improve compression time by about 10%.
Editor: Updated ASTC texture compressor to reduce compression time.
Editor: Updated the Inspector property context menu 'Revert to Prefab' to work with multiple selected objects.
Editor: Updated the Renderer component so that you can click on a Material inside the Renderer component to highlight the Sub Meshes with that Material in the Scene View.
Editor: Updated the target window of a Dynamic Hint to be focused before displaying the Dynamic Hint, if available.
Editor: You can now drag multiple GameObjects to the Project Window to create multiple prefabs at once.
Editor:
Create Empty Parentnow matches the 'selection rect' for Rect Transforms.: Added asynchronous environment sky updates for realtime GI, avoiding frame hitches whenever the sky changes.
GI: Added exposure slider to the Enlighten lit clustering scene view mode in the Editor.
GI: Ensured analytics about a cancelled bake is sent when closing the editor while generating lighting. (1354238): Lightmap analytics events now include bakes that fell back from GPU to CPU Lightmapper.
GI: Lightmap compression is no longer affected by the "Lightmap Encoding" project setting. Instead, a new setting, "Lightmap Compression" has been introduced to the Lighting Settings Asset. This replaces the previous "Compress Lightmaps" checkbox. (1230918)
GI: Make it visible in the GI Profiler module what the state of Realtime GI support and enabled state is, when no data is captured.
GI: Move memory related logging from console to log file.
GI: Radeon Denoiser upgrade to version 1.7.0. Improves AI denoiser when running in HDR mode.
GI: Reduced ringing when using Open Image denoiser.
GI: Removed the logging of each Enlighten HLRT thread id, it spams too much on high core count systems.
GI: Reword Enlighten bounce warning in Light inspector because HDRP supports additional light shapes.
GI: Run GICache trimming jobs every 30 seconds instead of back to back. (1289849)
Graphics: Added an error message that appears when a custom render texture uses a Material with an invalid or unsupported shader. (1304355)
Graphics: Added ASTC texture format support for single channel textures.
Graphics: Added support for async readback when using OpenGL ES 3.0 (and later) and GL core.
Graphics: Added support for Color and Depth Load/Store actions to the Frame Debugger.
Graphics: Added support for
VFX.CameraBuffersFallbackpreferences. Select from one of 3 options:
- None: Use no fallback and keep outdated buffer info from the last rendered frame.
- Prefer Main Camera (Default): Use the buffer from the Main camera when available, and the Scene camera otherwise.
- Prefer Scene Camera: Use the buffer from the Scene camera when available, even if the Main camera is being rendered.
Graphics: Added the 2D Lights tab to the Light Explorer window.
Graphics: Added the ability to remove shader Passes that contain ray tracing shaders (for example,
ClosestHit,
AnyHit) when ray tracing is not supported by the system or graphics API.
Graphics: Added two new functions in RayTracingAccelerationStructure - UpdateInstanceID and GetInstanceCount. The ray tracing instance ID can be accessed in HLSL code using InstanceID() intrinsic.
Graphics: Changed Renderer components so that you can use
.boundsand
.localBoundssetter APIs to set custom world space or local space bounds.
Graphics: Changed the default specular reflection to use a RenderTexture with dimensions CUBE (instead of a Cubemap). (1281013)
Graphics: Changed the gear icon for the more menu on the Asset Settings Provider.
Graphics: Enhanced the error reporting from the command buffer in order to improve GPU-side (Metal) error logging.
Graphics: Fixed comments in shader examples from CommandBuffer.SetRayTracingShaderPass and RayTracingShader.SetShaderPass that point to incorrect functions.
Graphics: Improved error logging for the
CopyTexturefunction.
Graphics: Improved performance of ASTC decompression by using multi-threading (around 6x using 8 threads).
Graphics: Improved the application of outstanding pending changes to RendererScene after a camera render.
Graphics: Optimized render sorting to speed up performance.
Graphics: Optimized
Material.FindPass. The improved speed depends on how many passes the Material has.
Graphics: Removed the redundant calls that Unity made when it set shader program parameters.
Graphics: Set up unity_MotionVectorsParams built-in shader variable in Ray Tracing shaders.
Graphics: Shader preloading now can be performed after the first scene is loaded and can be distributed across multiple frames.
Graphics: Texture postprocessor re-imports are now split per texture type, reducing amount of re-imports when changing a texture postprocessor script.
Graphics: Unity now opens the Render Pipeline Asset dialog when changing asset for Quality Settings and Graphics Settings (Project Settings > Quality, Project Settings > Graphics) notifying that this may take a significant amount of time. You can choose to Continue or Cancel.
Graphics: VFX : Optimization while sending event to a VisualEffect by script.
Graphics: Virtual Texturing is now more robust when switching between color spaces.
Graphics: You can now access Mesh and SkinnedMeshRenderer geometry data from Compute Shaders similar to
Mesh.GetVertexBufferand
SkinnedMeshRenderer.GetVertexBuffer.
IL2CPP: Added an intrinsic for Span get_Item/indexer to improve Span Indexer performance when accessing a large number of Span items.
IL2CPP: Added full support for System.Reflection.MemberInfo.GetCustomAttributesData.
IL2CPP: Added optimizations to Enum.HasFlag.
IL2CPP: Changed IL2CPP's internal build system to use bee on Android to prepare for improved player build performance.
IL2CPP: Corrected the source file hash so that a managed debugger can determine when a source file has changed and provide a proper warning.
IL2CPP: Improved the performance of IL2CPP conversion by using a new data model.
IL2CPP: Improved the performance of invoking delegates
IL2CPP: Reduced executable size by reducing generic metadata output.
IL2CPP: Reduced the number of internal metadata allocations that relate to array method naming.
IL2CPP: Switch IL2CPP densehash map and set to sparsehash map and set for lower runtime memory usage.
IL2CPP: Updated IL2CPP to run on .NET 5.
IL2CPP: Updated IL2CPP to use the new bee distribution format.
IMGUI: Improved overall layout and repaint performance.
Input: Added Use Physical Keys setting in the Input Manager to map Input.KeyCode to physical keys.
iOS: Changed depth RenderSurfaces to have private storageMode. (1339864)
iOS: Changed how plug-ins handle the wrong CPU type when creating an XCode project. CPU types that aren't supported are now ignored.
iOS: The generated Xcode project now uses the new build system.
Kernel: Added Memory Settings to the Project Settings window. This gives you control of the internal memory setup in Unity. You can adjust the memory setup for individual projects.
Kernel: Improved code quality and amount of allocations in some of our base abstraction layers.
Kernel: Improved the performance of parallel sorting code.
Kernel: Improved the stability of player connection by implementing several changes:
- Increased the number of frames to receive messages.
- Fixed issue where player connection could corrupt message queue and required reconnecting to app.
- Improved handling for app disconnection (for example, when the app crashes; when the app is forced to disconnect; or when the app loses connection).
- Improved support for suspending apps on mobile platforms.
License: Improved license validation by syncing the access token with the licensing client every time the token changes.
macOS: Fixed the append mode for building Xcode projects.
macOS: The generated Xcode project now uses the new build system.
macOS: When generating Xcode project, it is now possible to pick the build config used for run action (can be changed in Xcode). Debug build config now has frame capture automatically enabled.
Mobile: The Android Patch/Patch & Run Build Setting works for all types of changes, and is automatic. Previously, it could only be used with Script Only builds.
N/A (internal): Altered default texture compression for EmbeddedLinux to now be configurable from the player settings.
Networking: Improved UnityWebRequest on iOS to allow system to call upload data instead of using the operation queue.
Package: Enabled alpha channel capture in projects that use HDRP in the Recorder package.
Package: Improved the migration tools so that Unity allows projects to migrate to the recent Visual Scripting version.
Package: To open the Visual Scripting editor, you can now click the open inspector button and double click a graph in the project browser.
Package: Update Addressables to 1.18.13.
Package: Update Addressables to 1.19.6
Package: Updated Addressables package to 1.17.17.
Package: Updated Addressables to 1.18.15.
Package: Updated Addressables to 1.18.2 and Scriptable Build Pipeline (SBP) to 1.18.0.
Package: Updated Addressables to 1.19.4 and SBP to 1.19.2.
Package: Updated Advertisement package to 3.7.1.
Package: Updated com.unity.formats.alembic to 2.2.0.
Package: Updated com.unity.purchasing.udp to 2.2.2.
Package: Updated In App Purchasing package to 2.1.6.
Package: Updated names of UI elements in the Visual Scripting package to be consistent with new naming schemes.
Package: Updated OpenXR Plugin package to 1.1.1.
Package: Updated ProBuilder package to 5.0.0.
Package: Updated ProBuilder package to 5.0.3.
Package: Updated the user interface of the Visual Scripting package.
Package: Updated the version of the Addressables package to 1.18.4.
Package: Updated WebGL Publisher package to 4.2.1.
Package: Updated WebGL Publisher package version to 4.2.2.
Package: Upgraded FBX Exporter and FBX SDK to 4.1.1.
Package: Upgraded the Input System package to version 1.0.2
Package: Upgraded the UDP package to v2.1.4 in order to publish documentation updates.
Package: Visual Scripting - Migration tools were improved to allow users to migrate their project to recent Visual Scripting version.
Package: Visual Scripting now creates a warning message when you add more than one Input unit to a SuperUnit.
Package: Visual Scripting now creates a warning when an Input System Package event references an action that is the wrong type for that event.
Package Manager: Added new labels to package versions to clarify when a package is installed as a dependency.
Package Manager: Added support for opt-in caching of Git LFS files when you download Git packages. To enable caching, set either of the following environment variables:
UPM_ENABLE_GIT_LFS_CACHEor
UPM_GIT_LFS_CACHE_PATH. The latter lets you override the default cache location.
Package Manager: Changed how string literals are translated by using string.Format at definition time.
Package Manager: Fixed the Add package from git URL option so that if you use a revision and a package path in the wrong order, you can't clone the repository.
Package Manager: Improved logging by adding logs for cache misses and tarball download steps.
Package Manager: Improved performance when browsing "My Assets".
Package Manager: Improved the error message when a Git dependency cannot be resolved because the path querystring and revision fragment are in the wrong order.
Package Manager: Improved the wording on the warning message when a user is using a different version of a package than the recommended version.
Package Manager: Included the Terrain Tools package in the Worldbuilding 3D feature set.
Package Manager: Increased the amount of information logged in
upm.logat various levels.
Package Manager: Optimized Git package download times for repositories using submodules (with Git 2.28.0 or higher installed only).
Package Manager: Optimized Git package download times, most notably for repositories with a larger history.
Particles: Added a warning when users select the same shader for both the main
Material slot and the Trail Material slot. This is because GPU Instanced
Mesh particles might not use the same shader for particle geometry and
trail geometry.
Particles: Added an exception when too much particle data is sent to
SetCustomParticleData.
Physics: Added icon for Articulation Body Anchor Transform tool.
Physics: Added new Physics Profiler metrics.
Physics: Added units of measurement to the Articulation Body properties in the scripting documentation.
Physics: Improved Articulation Body anchor limit gizmos.
Physics: Rearranged the ArticulationBody properties. Moved Damping and Friction after Mass.
Prefabs: Added a warning to the PrefabAssetImporter editor if there are SerializeReference missing types within the Prefab. Also disabled applying modifications from the instance in case the Prefab asset contains missing types. Editing the Prefab asset in isolation preserves MissingType information.
Prefabs: Disabled editing for missing Prefabs instances.
Prefabs: Improved the Hierarchy so that you can see which Prefab instances have non-default overrides. (1323680)
Prefabs: Updated documentation for Object.DontDestroyOnLoad
Profiler: Added missing memory labels sizes to the memory snapshot format, in order to give real value to the prexisting label list. Api for access this data will be found inside the memory profiler package..
Profiler: Encoded managed heap section type inside the snapshot format, for retrieval via the memory profiler package.
Profiler: Improved the Memory Profiler module UI to clearly show how the high level memory stats contribute towards the total memory usage.
Profiler: Modified native connection reporting for the Memory Profiler in order to properly report connections between Assets.
Profiler: Released [email protected] with a series of fixed and improvements. More details at.
Profiler: Tethered Android devices no longer require manually calling ADB commands in the CLI, in order to be picked up as connection targets by the Editor. Multiple tethered android devices are now supported.
Scene/Game View: Added Shortcut Manager entries for "Toggle Selection Outline" and "Toggle Selection Wireframe."
Scene/Game View: Extended the
PlaceObjectmethod to support SceneView grids and 2D.
Scene/Game View: Improved API documentation for Overlays feature, including multiple new code examples.
Scene/Game View: Improved the EditorToolContext UI.
Scene/Game View: Improved the documentation for
EditorTool.
Scene/Game View: Improved the documentation for
HandleUtility.PickGameObject.
Scene/Game View: Refreshed icons for Scene View toolbars.
Scripting: Certificate validation callbacks from .Net libraries pass now also previously identified root certificates along (i.e. the full validated chain if any). (1191987)
Scripting: Changed the Managed Stripping Level to be minimal for new projects when targeting the IL2CPP backend.
Scripting: Enabled scheduling managed jobs from non-main control threads.
Scripting: Enabled user code to build against .NET Standard 2.1 and .NET Framework 4.8.
Scripting: Ensure players using the Mono scripting runtime backend always use a JIT (Just-In-Time) friendly set of class libraries, even if the ".NET Standard 2.0" API Compatibility Level is chosen. This provides consistency for Mono players no matter what API Compatibility Level is chosen in the player settings.
Scripting: Improved runtime performance of many UnityEngine math scripting APIs when using the IL2CPP scripting back-end: Color, Color32, Math, Matrix4x4, Quaternion, Vector2, Vector2Int, Vector3, Vector3Int, Vector4.
Scripting: Improved runtime performance of UnityEngine math scripting APIs (Matrix4x4, Quaternion, Vector2, Vector2Int, Vector3, Vector3Int, Vector4) when using the Mono scripting back-end.
Scripting: Improved the Editor experience for setting up Unity version defines with assembly definition files.
Scripting: Many search UI and indexing improvements
Scripting: Multithreaded asset garbage collection and increased speed by up to 2.5x.
Scripting: OnChangedCallback is invoked when elements are duplicated in ReorderableList. (1307386)
Scripting: Reduce and optimize regex usage to improve performance
Scripting: Rename ".NET Standard" to ".NET Standard 2.1" in the Api Compatibility Level options to be more precise.
Scripting: Update Roslyn to 5.0.102, Update NetCore to 5.0.2.
Scripting: Updated com.unity.ide.visualstudio to version 2.0.9.
Scripting:
CompiliationPipeline.GetAssembliesnow correctly includes .NET Compiler Platform (Roslyn) analyzers in
ScriptCompilerOptions.
Search: Improved asset indexer performances and index size.
Search: Improved asset search performance by ~4x.
Search: Share the same search debouncing threshold with the Project Browser and the Search window. (1298380)
Search: Use a single Search Provider to search for any indexed object.
Serialization: Improved the way that
SerializeReferencehandles missing types. Instances where the type is missing are replaced with
null. Other instances are editable and if there are fields that previously referred to the missing type which are still null, the missing type is preserved.
Serialization: Objects referenced from
SerializeReferencefields now have stable IDs, which reduces the risk of conflicts when multiple users collaborate on a scene file. Additionally, it also improves support for Undo and Prefab modes, especially when
SerializeReferenceis used inside arrays and lists. In addition, references now come with a new format with backward compatibility support for older assets.
Shadergraph: Added View Vector Node doc.
Shaders: Improved caching of the Shader import artifacts when a shader is reverted or has no changes after a reimport.
Shaders: Reduced memory consumption when loading shaders.
Shaders: Removed fixed shader keyword limits. Global keywords are now limited to UInt32 space, local shader keywords are now limited to UInt16 space.
Terrain: Improved worst-case performance while painting on high-resolution (2k x 2k or higher) terrain heightmaps. (1283138)
Terrain: Terrain brushes that sample empty regions at the edge of a terrain now sample the nearest terrain's edge. This corrects brush preview off the edge of a terrain, and corrects the bug of melting terrain edges for the following brushes: Smooth Height, Contrast, Sharpen Peaks, Slope Flatten, Hydrolic, Thermal, Wind, Pinch, Smudge and Twist.
Tests: Improved the logging in iOS automation so that existing log messages are clarified, and added new ones.
UI: Added a visualization for the raycast padding around a Graphic object.
UI: Improved tooltips so that when a tooltip is displayed, hovering another UI control that can display a tooltip makes the new tooltip appear immediately.
UI: Reused PropertyFields backing fields when possible.
UI: Updated the icons for Terrain's tool selection.
UI Toolkit: Added a new RuntimeDefault theme with less overhead for runtime usage.
UI Toolkit: Improved UI Toolkit event debugger. Improvements include optimizations, adjustable UI, settings, event and callback filtering, and event replay.
UI Toolkit: Improved USS validation to support more complex properties.
UI Toolkit: Made performance improvements to reduce the number of managed heap allocations while rendering sprites in the UI Toolkit.
UI Toolkit: Modified buttons to be focusable.
UI Toolkit: Modified TransitionEvents to be collapsed when relevant.
UI Toolkit: Set UIDocument's execution order to -100 to ensure root visual element is created when user's OnEnable runs.
UI Toolkit: Usage hints can now be changed on a
VisualElementwithout having to remove it from the hierarchy to help preserve styling and layout.
Undo System: Explorable undo history UI.
Version Control: Added Checkin and Update confirmation notification.
Version Control: Improved load time performance.
Version Control: Made stability and performance improvements to the Version Control package (com.unity.collab-proxy).
Video: Improved the automatic selection of target material texture properties in VideoPlayer. It detects [MainTexture] attributes, then detects them by the _MainTex naming convention.: Updated UnityWebRequest's libCurl backend (used on most platforms).
WebGL: Added support for Screen Orientation Locking and Auto-Rotating for mobile browsers which supports the Screen Orientation API.
WebGL: Refactored
unityInstance.Quit()in UnityLoader.js and
call QuitCleanupfrom both
unityInstance.Quit()and
Application.Quit().
WebGL: Updated WebGL compiler to Emscripten 2.0.19 and removed support for the obsolete asm.js linker target.
Windows: Changed Alt + Enter to default to native resolution which makes the image more crisp and reduces the chance of letterboxing.
XR: Added support for adding new reference objects at runtime. Added support for ARCore session recording and playback.
XR: Removed "Preview" text from UI display element.
XR: Updated AR Foundation package dependencies to XR Management 4.0.
XR: Updated Magicleap XR Plugin package to 6.2.2.
XR: Updated Oculus XR Plugin package to 1.8.1.
XR: Updated OpenXR Package to 1.2.0.
XR: Updated OpenXR Package to 1.2.2.
XR: Updated Windows MR package to 5.2.2.
XR: Updated WindowsMR to version 5.2.1.
XR: Updated XR Plug-in Management to 4.0.3..
2D: Added: Allow user to register for notification when the SpriteRenderer's Sprite property has changed.
2D: Added: New API to query SpriteAtlas information IsIncludedInBuild and GetMasterAtlas.
2D: Added: Support for default sprite mask material in URP and public api method to retrieve the default 2d mask material.
Android: Added: Added AndroidJavaObject.CloneReference to enable having multiple references to the same Java object. (1277152)
Android: Added: New APIs to manage fast-follow and on-demand delivered asset packs. The APIs wrap Google's PlayCore functionality.
Android: Added: TargetDevices player setting, so users can select if they want their Android application to run on all devices, just Android phones, tablets, and TV devices, or just Chrome OS devices. Bundles: Added: Added public API to specify the amount of memory reserved for the shared AssetBundle loading cache.
Asset Bundles: Added: New API DownloadHandlerAssetBundle.autoLoadAssetBundle for loading AssetBundles asynchronously from DownloadHandlerAssetBundle.
Asset Import: Added: Added removeConstantScaleCurves in Model Importer. (1252606)
Asset Import: Added: AssetPostprocessor.OnPreprocessCameraDescription and AssetPostprocessor.OnPreprocessLightDescription.
Asset Import: Added: New public methods MonoImporter.SetIcon/GetIcon and PluginImporter.SetIcon/GetIcon.
Asset Pipeline: Added: Added a method to the TextureImporter to get the source texture width and height.
Asset Pipeline: Added: AssetDatabase.SaveAssetIfDirty() to save a specific asset, rather than making a call to AssetDatabase.SaveAssets().
Build Pipeline: Added: Added the BuildOptions.CleanBuildCache flag to force the incremental player build pipeline to do a clean rebuild of everything.
Build Pipeline: Added: Callback function BuildPlayerProcessor.PrepareForBuild. This callback can be implemented by users who wish to produce artifacts before the build starts, or to add StreamingAssets to a build without first putting them in the project assets folder.
Build Pipeline: Deprecated: PackedAsset.file has been deprecated. Instead, to find the matching report file for a particular asset the recommended way is to do a filename lookup in the report.
Burst: Added: Intrinsics: Neon - Added support for basic vld1 APIs
Editor: Added: API on the QueryEngine to better control filtering.: Added: Making hyperLinkClicked public. It is now possible to subscribe to the event EditorGUI.hyperLinkClicked to handle click on a TextField with <a></a> tag.
Editor: Changed: AndroidArchitecture.x86 and AndroidArchitecture.x86_64 have been renamed with capital X's. They are now AndroidArchitecture.X86 and AndroidArchitecture.X86_64.: Allow to opt out of automatic ambient probe and default reflection probe generation.
GI: Added: LightingSettings.lightmapCompression has been added and determines the quality of compression used for lightmaps.
GI: Deprecated: LightmapSettings.textureCompression has been deprecated in favor of LightingSettings.lightmapCompression.
Graphics: Added: "Expand/Collapse All" buttons to Rendering Debugger window menu.
Graphics: Added: A new API for compiling shaders from editor code and obtaining reflection info was added to ShaderData.Pass.
Graphics: Added: A new player setting "Upload Cleared Texture Data" was added to revert to the old default behavior of uploading initialised data to video memory when creating a texture from script.
Graphics: Added: Added a blitter utility class. Moved from HDRP to RP core.
Graphics: Added: Added a realtime 2D texture atlas utility classes. Moved from HDRP to RP core.
Graphics: Added: Added an option to change the visibilty of the Volumes Gizmos (Solid, Wireframe, Everything), available at Preferences > Core Render Pipeline.
Graphics: Added: Added class for drawing shadow cascades
UnityEditor.Rendering.ShadowCascadeGUI.DrawShadowCascades.
Graphics: Added: Added class for drawing shadow cascades
UnityEditor.Rendering.ShadowCascadeGUI.DrawShadowCascades.
Graphics: Added: Added CommandBuffer.SetGlobalInteger().
Graphics: Added: Added common include file for meta pass functionality. (1211436)
Graphics: Added: Added Editor window that allow showing an icon to browse the documentation.
Graphics: Added: Added Fallback Material to DrawSettings.
Graphics: Added: Added helper for Volumes (Enable All Overrides, Disable All Overrides, Remove All Overrides). customize the rtHandleProperties of a particular RTHandle. This is a temporary work around to assist with viewport setup of Custom post process when dealing with DLSS or TAAU. some getters for the Streaming Virtual Texturing settings.: Added
IAdditionalDatainterface to identify the additional datas on the core package.
Graphics: Added: Adding project-wide settings for RenderPipeline with RenderPipelineGlobalSettings.
Graphics: Added: Allowing Rendering Layer Names to not collide in UI. Includes a new API RenderPipelineAsset.prefixedRenderingLayerMaskNames to fetch a unique list of rendering layer mask names for UI needs.
Graphics: Added: AssetPostprocessor.OnPostprocessTexture3D ( Texture3D )
AssetPostprocessor.OnPostprocessTexture2DArray ( Texture2DArray ).
Graphics: Added: Automatic spaces to enum display names used in Rendering Debugger and add support for InspectorNameAttribute.
Graphics: Added: Calculating correct rtHandleScale by considering the possible pixel rounding when DRS is on.
Graphics: Added: DebugUI.Flags.IsHidden to allow conditional display of widgets in Rendering Debugger.
Graphics: Added: DebugUI.Foldout.isHeader property to allow creating full-width header foldouts in Rendering Debugger.
Graphics: Added: DefaultFormat is extended with the new DepthStencil and Shadow values. You can use SystemInfo.GetGraphicsFormat with these new values to get the default GraphicsFormat for a DepthStencil or Shadow RenderTexture on a platform.
Graphics: Added: Documentation links to Light Sections.
Graphics: Added: Introduce the RendererList API.
Graphics: Added: Made GetQualitySettings() method public. This method is used by internal code to implement undo functionality in the Unity Editor
Graphics: Added: Method to generate a Texture2D of 1x1 with a plain color.
Graphics: Added: Mouse & touch input support for Rendering Debugger runtime UI, and fix problems when InputSystem package is used.
Graphics: Added: New API functions inside DynamicResolutionHandler to get mip bias. This allows dynamic resolution scaling applying a bias on the frame to improve on texture sampling detail.
Graphics: Added: New API functions with no side effects in DynamicResolutionHandler, to retrieve resolved drs scale and to apply DRS on a size.
Graphics: Added: New API in DynamicResolutionHandler to handle multicamera rendering for hardware mode. Changing cameras and resetting scaling per camera should be safe. draw API as a set of Graphics.RenderX() functions. All the old draw API Graphics.DrawX() functions work as before but many of them can be easily converted to the new API to gain from the added functionality. Some added functionality of the new API:
- Support custom per-instance data for RenderMeshInstanced(), per-instance motion vector and rendering layer mask definitions, and easy light probe setup
- Support for multi-command indirect draws & future proofing for hardware implementation on supported platforms
- Custom world bounds for all mesh rendering (e.g. to support mesh deformation in vertex shaders).
Graphics: Added: New method DrawHeaders for VolumeComponentsEditors.
Graphics: Added: New methods on CoreEditorDrawers, to allow adding a label on a group before rendering the internal drawers.
Graphics: Added: New SRPLensFlareData Asset.
Graphics: Added: New utility function GraphicsFormatUtility.GetDepthStencilFormat. This function lets you easily select the right format on each platform for a certain amount of depth and/or stencil bits.
Graphics: Added: Red, Green, Blue Texture2D on CoreEditorStyles.
Graphics: Added: Reminder if the data of probe volume might be obsolete.
Graphics: Added: Rendering.SupportedRenderingFeatures.reflectionProbesBlendDistance to provide SRPs with the ability to enable the blend distance fields in the ReflectionProbe inspector.: Added: Sampling noise to probe volume sampling position to hide seams between subdivision levels.
Graphics: Added: ScriptableRenderContext.SubmitForRenderPassValidation added to validate whether RenderPass API calls inside the context are eligible for execution.
Graphics: Added: Several utils functions to access SphericalHarmonicsL2 in a more verbose and intuitive fashion.
Graphics: Added: SpeedTree8MaterialUpgrader, which provides utilities for upgrading and importing SpeedTree 8 assets to scriptable render pipelines.
Graphics: Added: Support for additional properties for Volume Components without custom editor.
Graphics: Added: Support for Lens Flare Data Driven (from images and Procedural shapes), on HDRP.
Graphics: Added: SystemInfo. supportsMultisampleResolveDepth to query platforms Multisample resolve of depth attachments support.
Graphics: Added: SystemInfo.maxGraphicsBufferSize added for querying the maximum size of a Mesh/Graphics/Compute buffer. Creating larger ones now also throws exceptions (previously was often just crashing). (1319589, 1319594)
Graphics: Added: This PR adds a API function checking is all the systems of an Visual Effect are sleeping.
Graphics: Added: Unification of Material Editor Headers Scopes.
Graphics: Added: Unity.External.NVIDIA APIs to expose NVIDIA-specific plugin functionality (for controlling DLSS and other features). These APIs are available by enabling the NVIDIA native package on the package manager.
Graphics: Added:
ReflectionProbe.UpdateCachedState()to update the internal data related to reflection probe used by the culling system.
Graphics: Changed: Exposed UseSceneFiltering API as public.
Graphics: Changed: Renamed Texture2D.Resize to Reinitialize. (1312670)
Graphics: Changed: RenderBufferStoreActions.Resolve and RenderBufferStoreActions.StoreAndResolve can now be set using the RenderTarget API.
Graphics: Changed: VFX.VFXManager.PrepareCamera and VFX.VFXManager.ProcessCameraCommand now can take an optional parameter for camera XR settings.
Graphics: Deprecated: Deprecated ShadowAuto, DepthAuto and VideoAuto graphics formats and introduce a new alternative api. (See the Upgrade Guide for details.).
Graphics: Deprecated: Most of BatchRenderGroup API will be fully deprecated in 2022.2 (and replaced by a new API).
Graphics: Obsoleted: ReflectionProbe.defaultReflectionSet has been deprecated in favor of ReflectionProbe.defaultReflectionTexture
Graphics: Removed: Removed GraphicsFormatUtility.GetDepthStencilFormat(int) after being public for two alpha releases.
HDRP: Added: "Conservative" mode for shader graph depth offset.
HDRP: Added: Ability to animate many physical camera properties with Timeline.
HDRP: Added: Ability to control focus distance either from the physical camera properties or the volume.
HDRP: Added: Added a better support for LODs in the ray tracing acceleration structure.
HDRP: Added: Added a built-in custom pass to draw object IDs.
HDRP: Added: Added a complete solution for volumetric clouds for HDRP including a cloud map generation tool. Falloff Mode (Linear or Exponential) in the Density Volume for volume blending with Blend Distance.
HDRP: Added: Added a Force Forward Emissive option for Lit Material that forces the Emissive contribution to render in a separate forward pass when the Lit Material is in Deferred Lit shader Mode.
HDRP: Added: Added a minimum motion vector length to the motion vector debug view.
HDRP: Added: Added a parameter to control the vertical shape offset of the volumetric clouds. (1358528)
HDRP: Added: Added a property on the HDRP asset to allow users to avoid ray tracing effects running at too low percentages. (1342588)
HDRP: Added: Added a property to control the fallback of the last bounce of a RTGI, RTR, RR ray to keep a previously existing side effect on user demand.
HDRP: Added: Added a setting in the HDRP asset to change the Density Volume mask resolution of being locked at 32x32x32 (HDRP Asset > Lighting > Volumetrics > Max Density Volume Size).
HDRP: Added: Added a shortcut to HDRP Wizard documentation.
HDRP: Added: Added a slider that controls how much the volumetric clouds erosion value affects the ambient occlusion term.
HDRP: Added: Added a slider to control the fallback value of the directional shadow when the cascade have no coverage.
HDRP: Added: Added an example in the documentation that shows how to use the accumulation API for high quality antialiasing (supersampling).
HDRP: Added: Added an option to have double sided GI be controlled separately from material double-sided option.
HDRP: Added: Added an option to render screen space global illumination in half resolution to achieve real-time compatible performance in high resolutions. (1353727)
HDRP: Added: Added browsing of the documentation of Compositor Window.
HDRP: Added: Added color and intensity customization for Decals.
HDRP: Added: Added dependency to mathematics and burst, HDRP now will utilize this to improve on CPU cost. First implementation of burstified decal projector is here.
HDRP: Added: Added info box when low resolution transparency is selected, but its not enabled in the HDRP settings. This will help new users find the correct knob in the HDRP Asset.
HDRP: Added: Added light unit slider for automatic and automatic histrogram exposure limits.
HDRP: Added: Added new AOV APIs for overriding the internal rendering format, and for outputing the world space position.
HDRP: Added: Added new API in CachedShadowManager.
HDRP: Added: Added pivot point manipulation for Decals (inspector and edit mode).
HDRP: Added: Added shader graph unit test for IsFrontFace node.
HDRP: Added: Added slides to control the shape noise offset.
HDRP: Added: Added support for internal plugin materials and HDSubTarget with their versioning system.
HDRP: Added: Added support for reflection probes as a fallback for ray traced reflections. (1338644)
HDRP: Added: Added support for the camera bridge in the graphics compositor
HDRP: Added: Added support for Unlit shadow mattes in Path Tracing. (1335487)
HDRP: Added: Added support of motion vector buffer in custom postprocess.
HDRP: Added: Added TargetMidGrayParameterDrawer.
HDRP: Added: Added the receiver motion rejection toggle to RTGI (1330168)
HDRP: Added: Added the support of volumetric clouds for baked and realtime reflection probes.
HDRP: Added: Added three animation curves to control the density, erosion, and ambient occlusion in the custom submode of the simple controls.
HDRP: Added: Added tooltips for content inside the Rendering Debugger window.
HDRP: Added: Added two toggles to control occluder rejection and receiver rejection for the ray traced ambient occlusion (1330168)
HDRP: Added: Added UV manipulation for Decals (edit mode).
HDRP: Added: Added ValidateMaterial callbacks to ShaderGUI.
HDRP: Added: Added View Bias for mesh decals.
HDRP: Added: Added warning for when a light is not fitting in the cached shadow atlas and added option to set maximum resolution that would fit.
HDRP: Added: API to allow OnDemand shadows to not render upon placement in the Cached Shadow Atlas.
HDRP: Added: Area Light support for Hair and Fabric master nodes.
HDRP: Added: Deferred shading debug visualization.
HDRP: Added: Documentation for volumetric clouds.
HDRP: Added: Exposed update upon light movement for directional light shadows in UI.
HDRP: Added: Global settings check in Wizard.
HDRP: Added: Help URL for volumetric clouds override.
HDRP: Added: Lens Flare Samples.
HDRP: Added: Localization on Wizard window.
HDRP: Added: LTC Fitting tools for all BRDFs that HDRP supports.
HDRP: Added: Mixed RayMarching/RayTracing mode for RTReflections and RTGI.
HDRP: Added: New checkbox to enable mip bias in the Dynamic Resolution HDRP quality settings. This allows dynamic resolution scaling applying a bias on the frame to improve on texture sampling detail.
HDRP: Added: New control slider on RTR and RTGI to force the LOD Bias on both effects.
HDRP: Added: Path tracing support for AxF material.
HDRP: Added: Path tracing support for stacklit material.
HDRP: Added: Scale Mode setting for Decals.
HDRP: Added: Speed Tree 8 shader graph as default Speed Tree 8 shader for HDRP.
HDRP: Added: Support for Fabric material in Path Tracing.
HDRP: Added: Support for lighting full screen debug mode in automated tests.
HDRP: Added: Support for mip bias override on texture samplers through the HDAdditionalCameraData component.
HDRP: Added: Support for multi volumetric cloud shadows.
HDRP: Added: Support for screen space shadows (directional and point, no area) for shadow matte unlit shader graph.
HDRP: Added: Support for surface gradient based normal blending for decals.
HDRP: Added: Support for tessellation for all master node in shader graph.
HDRP: Added: Support for volumetric clouds in planar reflections.
HDRP: Added: Support of interpolators for SV_POSITION in shader graph.
HDRP: Added: Toggle to render the volumetric clouds locally or in the skybox.
HDRP: Added: Way for fitting a probe volume around either the scene contents or a selection..
iOS: Deprecated: ScreenOrientation.Landscape as it was a synonym for ScreenOrientation.LandscapeLeft, and not "some Landscape orientation", which is confusing. (1320447)
Linux: Added: LinuxServer value added to RuntimePlatform enum.
macOS: Added: OSXServer value added to RuntimePlatform enum.
Package: Added: (Recorder) Added public API for AOVRecorderSettings.: Expose the improved patch friction mode that will distribute the normal force over the friction anchors and thus match analytical results closer.
Physics: Added: Exposed a new property in RaycastHit called colliderInstanceID, which returns the instance ID of the collider the ray collided with.
Physics: Added: ForceMode argument to ArticulationBody.Add force and related functions.
Physics: Added: Property for retrieving ArticulationBody components during a collision event. Articulation bodies can be retrieved by Collision.articulationBody.
Physics: Added: Property for retrieving either ArticulationBody or Rigidbody components to collision events under Collision.body.
Plugins: Added: New IUnityLog interface for message logging to Unity console and log in native plugins
Plugins: Added: New IUnityProfilerV2 interface with Profiler Counters API in native plugins
Prefabs: Added: Exposed FindAllInstancesOfPrefab to scripting.: Added Profiler.EmitSessionMetaData api to pass generic metadata to the Profiler associated with profiling session.
Profiler: Added: API to create GPU sampling ProfilerMarker.
Profiler: Added: New C# custom Categories API.
Profiler: Changed: Added GPU profiling capabilities to ProfilerRecorder API.
Scripting: Added: Add FileUtil.GetPhysicalPath and FileUtil.GetLogicalPath methods to convert logical paths to physical and vice versa.
Scripting: Added: APIs for the AsyncReadManager, to enable chaining and canceling of reads.
Scripting: Added: Component.GetComponentInParent(Type t, bool includeInactive) method to match GameObject. (1331778)
Search: Added: Add SearchService.ShowPicker API to pick any search item result..
Services: Added: Added new com.unity.services.core package that is used for common behaviour of Game Service packages
Services: Changed: Updating analytics dashboard to point to new location
Shadergraph: Added: Ability to enable tiling and offset controls for a Texture2D input.
Shadergraph: Added: Ability to mark textures / colors as \[MainTexture\] and \[MainColor\].
Shadergraph: Added: Added a ShaderGraph animated preview framerate throttle.
Shadergraph: Added: Added ability to define custom vertex-to-fragment interpolators.
Shadergraph: Added: Added custom interpolator documentation
Shadergraph: Added: Added custom interpolator thresholds on Shader Graph project settings page.
Shadergraph: Added: Added information about selecting and unselecting items to the Blackboard article.
Shadergraph: Added: Added many node synonyms for the Create Node search so that it's easier to find nodes..
Shadergraph: Added: Added
Spriteoption to Main Preview, which is similar to
Quadbut does not allow rotation.
Spriteis used as the default preview for URP Sprite shaders.
Shadergraph: Added: Adding control of anisotropic settings on inline Sampler state nodes in ShaderGraph.
Shadergraph: Added: Categories to the blackboard, enabling more control over the organization of shader properties and keywords in the Shader Graph tool. These categories are also reflected in the Material Inspector for URP + HDRP, for materials created from shader graphs.
Shadergraph: Added: For Texture2D properties, added linearGrey and red as options for default texture mode.
Shadergraph: Added: For Texture2D properties, changed the "bump" option to be called "Normal Map", and will now tag these properties with the [NormalMap] tag.
Shadergraph: Added: HLSL file implementing a version of the Unity core LODDitheringTransition function which can be used in a Shader Graph.
Shadergraph: Added: New dropdown property type for subgraphs, to allow compile time branching that can be controlled from the parent graph, via the subgraph instance node.
Shadergraph: Added: New target for the built-in render pipeline, including Lit and Unlit sub-targets.
Shadergraph: Added: Split Texture Transform node to allow using/overriding the provided tiling and offset from a texture input.
Shadergraph: Added: Stage control to ShaderGraph Keywords, to allow fragment or vertex-only keywords.
Shadergraph: Added: Stereo Eye Index, Instance ID, and Vertex ID nodes added to the shadergraph library.
Shadergraph: Added: Subshadergraphs for SpeedTree 8 shadergraph support: SpeedTree8Wind, SpeedTree8ColorAlpha, SpeedTree8Billboard.
Shadergraph: Added: Toggle "Disable Global Mip Bias" in Sample Texture 2D and Sample Texture 2D array node. This checkbox disables the runtimes automatic Mip Bias, which for instance can be activated during dynamic resolution scaling.
Shadergraph: Added:
Branch On Input Connectionnode. This node can be used inside a subgraph to branch on the connection state of an exposed property.
Shadergraph: Added:
Calculate Level Of Detail Texture 2Dnode, for calculating a Texture2D LOD level.
Shadergraph: Added:
Dropdownnode per dropdown property, that can be used to configure the desired branch control.
Shadergraph: Added:
Gather Texture 2Dnode, for retrieving the four samples (red component only) that would be used for bilinear interpolation when sampling a Texture2D.
Shadergraph: Added:
Use Custom Bindingoption to properties. When this option is enabled, a property can be connected to a
Branch On Input Connectionnode. The user provides a custom label that will be displayed on the exposed property, when it is disconnected in a graph.
Shaders: Added: Added a missing API to check shader compilation warnings.
Added a missing API to get information about individual shaders. (1340374): Added: CommandBuffer.EnableKeyword,CommandBuffer.DisableKeyword can now be used to enable a local shader keyword.
Shaders: Added: LocalKeyword.isOverridable property to check whether a given local shader keyword can be overridden by global shader keyword state..
Shaders: Deprecated: ShaderKeyword.GetGlobalKeywordName, ShaderKeyword.GetName and ShaderKeyword.GetKeywordType..
Terrain: Changed: IOnInspectorGUI ShowBrushGUI overloads removed and replaced with a single ShowBrushGUI call with default parameters.
Terrain: Changed: TerrainAPI namespace is no longer part of an experimental namespace and has been renamed TerrainTools.
Terrain: Changed: TerrainPaintTool GetDesc changed to GetDescription.
Terrain: Changed: TerrainUtility moved to UnityEngine.TerrainUtils namespace.
Terrain: Changed: TerrainUtility.TerrainMap moved to UnityEditor.TerrainUtils.TerrainMap. TerrainMap.TileCoord moved to UnityEditor.TerrainUtils.TerrainTileCoord.
Terrain: Changed: UnityEditor.TerrainAPI.TerrainPaintUtilityEditor.BrushPreview changed to UnityEditor.TerrainTools.TerrainBrushPreviewMode.
UI: Added: GetPersistentListenerState added to UnityEvent.
UI Toolkit: Added: Added visualTreeAssetSource property to VisualElement to allow identifying the VisualTreeAsset a visual tree was cloned from: Expose ScrollView.mode property.
UI Toolkit: Added: New public UI Toolkit APIs:
- Dropdownfield.choices
- BasePopupField.choices
- MaskField.choices
- MaskField.choicesMasks.
UI Toolkit: Added: Tool for converting assets created with the package to use them without the package installation, and to convert them back to package versions.: Deprecated: CurveField.borderUssClassName and GradientField.borderUssClassName are now deprecated since the related visual element is not required to render a border anymore.
UI Toolkit: Deprecated: Deprecated OnKeyDown method in ListView. Use the event system instead, see SendEvent.
UI Toolkit: Obsoleted: ListView's onItemChosen and onSelectionChanged are now obsolete.
Universal: Added: 2D Light Texture Node. A Shader Graph node that enable sampling of the Light Textures generated by the 2D Renderer in a lit scene.
Universal: Added: Added View Vector node to mimic old behavior of View Direction node in URP.
Universal: Added: Depth and DepthNormals passes to particles shaders.
Universal: Added: Enabled deferred renderer in UI.
Universal: Added: Fixed an error where multisampled texture being bound to a non-multisampled sampler in XR. (1297013)
Universal: Added: SpeedTree 8 Shader Graph but did not set it as the default when importing or upgrading Speed Tree 8 assets. Because URP doesn't yet support per-material culling, this Shader Graph does not yet behave in the same way as the existing handwritten SpeedTree 8 shader for URP.
Universal: Added: Support for light layers, which uses Rendering Layer Masks to make Lights in your Scene only light up specific Meshes.
Universal: Added: Support for SSAO in Particle and Unlit shaders.
Universal: Added: _SURFACE_TYPE_TRANSPARENT keyword to URP shaders.
URP: Added: "Allow Material Override" option to Lit and Unlit ShaderGraph targets. When checked, allows Material to control the surface options (transparent/opaque, blend mode, etc).
URP: Added: Added a help button on material editor to show the shader documentation page.
URP: Added: Added GetUniversalAdditionalLightData, a method that returns the additional data component for a given light or create one if it doesn't exist yet.
URP: Added: Added Lights 2D to the Light Explorer window.
URP: Added: Added Motion Vector render pass for URP.
URP: Added: Added Render Settings Converter to the Render Pipeline Converter, this tool creates and assigns URP Assets based off rendering settings of a Builtin project.
URP: Added: Added support for default sprite mask shaders for the 2D Renderer in URP.
URP: Added: Blending and box projection for reflection probes.
URP: Added: Decal support. This includes new Decal Projector component, Decal renderer feature and Decal shader graph.
URP: Added: Fixed incorrect shadow fade in deferred rendering mode.
URP: Added: Light cookies support to directional, point and spot light. Directional light cookie is main light only feature.
URP: Added: New UI for Render Pipeline Converters. Used now for Built-in to Universal conversion.
URP: Added: New URP Debug Views under Window/Analysis/Rendering Debugger.
URP: Added: Optional Depth Priming. Allows the forward opaque pass of the base camera to skip shading certain fragments if they don't contribute to the final opaque output.
URP: Added: Possibility to rename light layer values.
URP: Added: Sections on Light Inspector.
URP: Added: Store Actions' option that enables bandwidth optimizations on mobile GPU architectures.
URP: Added: Support for controlling Volume Framework Update Frequency in UI on Cameras and URP Asset as well as through scripting.: URP Global Settings Asset to the Graphics Settings - a common place for project-wide URP settings.
URP: Added: VFX: Basic support of Lit output.
URP: Added: VFX: Fix light cookies integration.
URP: Added: XR: Added Late Latching support to reduce VR latency (Quest).
URP: Changed: Reorder camera inspector to be in the same order as HDRP.
Version Control: Added: VCS support can now be added to Unity with managed code only as opposed to implementing native plugin. See UnityEditor.VersionControl.VersionControlObject and related classes.
VFX Graph: Added: Added HDRP Decal output context.
VFX Graph: Added: Added Is Inside subgraph into VFX Graph additions package.
VFX Graph: Added: Added new setting in "Preferences -> Visual Effects" to control the fallback behavior of camera buffers from MainCamera node when the main camera is not rendered.
VFX Graph: Added: Added support for Texture2D Arrays in Flipbooks.
VFX Graph: Added: Material Offset setting in inspector of the rendered outputs.
VFX Graph: Added: Motion vectors enabled for particle strips.
VFX Graph: Added: New tool : Signed Distance Field baker.
VFX Graph: Added: New tool to help set VFX Bounds.
VFX Graph: Added: Placement option (Vertex, Edge, Surface) in Sample Mesh & Skinned Mesh, allows triangle sampling.
VFX Graph: Added: Provide explicit access to spawnCount in graph
VFX Graph: Added: Restore "Exact Fixed Time Step" option on VisualEffectAsset.
VFX Graph: Added: Sample vertices of a transformed skinned mesh with Position (Skinned Mesh) and Sample Skinned Mesh operator.
VFX Graph: Added: Structured Graphics Buffer support as exposed type
VFX Graph: Added: Support 2D Renderer in URP for Unlit.
VFX Graph: Added: Support of direct link event to initialize context (which support several event within the same frame)
VFX Graph: Added: The VFX editor automatically attach to the current selection if the selected gameobject uses the currently edited VFX asset.
VFX Graph: Added: Two new buttons are available in the editor's tool bar. One will display a popup panel to handle attachement and one to lock/unlock the current attachement.
Video: Added: Advanced video encoding controls for H.264 (for Windows only) and VP8..
XR: Added: Update XR Plug-in Management to 4.0.1
XR: Removed: Removed the Windows XR SDK Plug-in from Unity. Microsoft now supports Windows MR devices using OpenXR in Unity 2021, and recommends using Unity's OpenXR plugin..
2D: Allowed non-public fields with the SerializeField attribute as custom fields for RuleTile.
2D: Changed some PSDImporter settings to use checkboxes instead of drop-down menus.
2D: Replaced usage of Triangle.Net with in house tessellation solution.
2D: Updated com.unity.2d.sprite package license
2D: Updated com.unity.2d.tilemap package license
2D: Updated the SceneView overlays used by the Tile Palette to use UIToolkit/new Overlays framework instead of IMGUI. (1342226)
AI: Updated component-based workflow notice in the Navigation window.
Android: Allowed Android Player to use Vulkan on GPUs that are currently unknown to Unity on Android 11 or newer.
Android: Changed how Unity checks to see if an obb is compatible with an apk. Both the apk and obb now have unity_obb_guid file inside them and if the contents match between them, Unity treats them as being compatible.
Android: Changed the minimum supported Android version to 5.1 (API 22).
Android: Removed OpenGL ES 2.0 from Auto Graphics API. The preferred API is now Vulkan.
Android: Removed support for putting gradle resources in Assets/Plugins/Android/[res, assets]. you either need to use Android archive plug-ins, Android Library plug-ins, or move those files to Streaming Assets.
Android: Removed the overwrite comment in gradle files and manifest files '// GENERATED BY UNITY. REMOVE THIS COMMENT TO PREVENT OVERWRITING WHEN EXPORTING AGAIN'. Use templates if you want your changes to persist.
Android: Upgraded the Android Gradle Plugin version from 3.6.0 to 4.0.1.
Android: Upgraded the Gradle version from 5.6.4 to 6.1.1.
Android: When Auto Graphics API is enabled, Require ES3.1, Require ES3.1+AEP, Require ES 3.2 properties in Android Player Settings are now available.
Android: When you export an Android project, Unity no longer creates a symbols zip package because it was always missing libil2cpp.so symbols. After you build your project manually, zip unityLibrary/symbols package if you want to upload it to Google Play.
Animation: Updated Animation Rigging package to version 1.1.0.
Animation: Updated Animation Rigging package to version 1.1.1.
Asset Import: Unity will not attempt to relaunch Maya or 3DsMax upon first timeout. (1281786)
Asset Pipeline: Changed AssetPostprocessors calls so that they are ordered by their GetPostprocessOrder and then by their FullName (namespace.classname).
Build Pipeline: Unity no longer writes unsaved changes from open scenes into player builds. Instead, it asks to save changes to disk.
Burst: Added full support for Armv8.2 Neon intrinsics.
Burst: Altered the IL Post Processed ‘direct call’ Burst function pointers to defer until they are needed to be compiled.
Burst: Assigned rpmalloc as the native allocator on Windows to speed up concurrently executing LLVM work.
Burst: Changed how exceptions throw types and how messages are stored in Burst binaries to reduce overall binary size.
Burst: Changed how exceptions throw types, and how messages are stored in our Burst binaries to reduce binary size.
Burst: Changed how SLEEF global variables for trigonometric functions are pulled into Burst to reduce duplications.
Burst: Changed how Unity resolves function references in the compiler to improve resolving an existing function reference by 3x.
Burst: Changed the Burst minimum Editor version to 2019.4.
Burst: Changed the link step to not use response files if the command line was small enough, saving the cost of the round-trip to the disk.
Burst: Changed to inliner heuristics to improve build time and reduce executable size.
Burst: Disabled threading within the
lldlinker instances used for in-Editor and desktop cross compilation.
Burst: DOTS Runtime now shares the logging code path with the general case.
Burst: half <-> float/double conversions now use native hardware where possible (Arm or AVX2).
Burst: Improved how Unity handles generic resolution in Cecil to cache the strictly resolved generic types and to save time in the compiler.
Burst: Improved the compiling process of a method when its assembly’s dependencies have changed so that the Burst version of the method is immediately used.
Burst: Modified the IL Post Processed 'direct call' Burst function pointers so that they are not compiled until they are needed.
Burst: Named constant array data after the static field it belongs to in assembly.
Burst: Reduced the time it takes for Burst to check if any Burst-compilable code has changed to improve iteration speed.
Burst: Removed the ability to experiment with
Unity.Burst.Intrinsics.Common.Pause.
Burst: Removed the entry-point name job/function-pointer that caused the throw in exception strings.
Burst: Removed the entry-point name of the job or function-pointer that caused an exception in exception strings to support the Burst compiler's requirement for deterministic results, which are not compatible with per-entry-point function derivations.
Burst: Restricted use of Burst in secondary Unity processes. Code normally Burst-compiled now runs under Mono.
Burst: Shared the logging code path of the general case with DOTS Runtime.
Burst: Upgraded Burst to use LLVM Version 11.0.1 by default, bringing the latest optimization improvements from the LLVM project.
Burst: Upgraded Burst to use LLVM Version 11.0.1 by default.
Editor: Added a new search field to filter dependencies.
Editor: Changed the behaviour of an Editor Window to ignore minimum and maximum sizes when being docked. Each window defines how it should adapt to the available space. (1269298)
Editor: Deleting an object reference array entry in the Inspector now removes that array element. Previously, this was a two-step process.
Editor: Made changes such that the default parent object is no longer simultaneously displayed for all loaded scenes. Now, when you use Set Default Parent Object, the scene to which the object belongs is set as active.
Editor: Modified includes and excludes in the Index Manager to keep the last selected file pattern in the enum field when you add another item.
Editor: Moved asset importing and cache server related preferences to the Asset Pipeline preferences window page.
Editor: Moved some main toolbar elements to the left align container.
Editor: Moved the UI widget used for Light Cookies from the standard Property Field to the ObjectField that provides texture preview and asset directory search capabilities, across HDRP and built-in.
Editor: Removed limitation on TooltipAttribute so you can apply it anywhere. In the Editor, currently only Tooltips on fields are visible.
Editor: Removed the dependencies help box.
Editor: Updated Collaborate package to allow users to migrate to Plastic.
Editor: Updated com.unity.cinemachine to 2.8.0.
GI: Removed Enlighten deprecation notice for precomputed realtime global illumination. It is now fully supported. Baked GI using Enlighten is still deprecated.
Graphics: *Added the blend distance of the reflection probe to Unity_SpecCubeN_BoxMax.w
*Added information about the relative importance between SpecCube 0 and SpecCube1 to unity_SpecCube1_BoxMin.w
Graphics: Added a macro layer for 2D texture sampling macros to Platform ShaderLibrary API headers. This layer starts with a PLATFORM_SAMPLE2D definition, and lets you inject sampling behavior on a render pipeline level. For example, being able to a global mipmap bias for temporal upscalers.
Graphics: Added an ArgumentException for Cubemap pixel access functions (GetPixel/GetPixels/GetPixels32/GetPixelData & SetPixel/SetPixels/SetPixels32/SetPixelData) when encountering an error.
Graphics: Added an ArgumentException for CubemapArray pixel access functions (GetPixel/GetPixels/GetPixels32/GetPixelData & SetPixel/SetPixels/SetPixels32/SetPixelData) when encountering an error.
Graphics: Added an ArgumentException for Texture2DArray pixel access functions (GetPixel/GetPixels/GetPixels32/GetPixelData & SetPixel/SetPixels/SetPixels32/SetPixelData) when encountering an error.
Graphics: Added an ArgumentException for Texture3D pixel access functions (GetPixel/GetPixels/GetPixels32/GetPixelData & SetPixel/SetPixels/SetPixels32/SetPixelData) when encountering an error.
Graphics: Added an ArgumentException for WebCamTexture pixel access functions (GetPixel/GetPixels/GetPixels32) when encountering an error. Calling these functions before the first frame update throws an exception instead of returning blank data.
Graphics: Altered LensFlare (SRP) so it can be disabled per element.
Graphics: Changed the handling of additional properties to base class.
Graphics: Changed the menu path for Generate Shader Includes from Edit > Render Pipeline > Generate Shader Includes to Edit > Rendering > Generate Shader Includes.
Graphics: Changed the menu path for LookDev from Assets > Create > LookDev > Environment Library to Assets > Create > Rendering > Environment Library (Look Dev).
Graphics: Changed the menu path for the Graphics Compositor from Window > Render Pipeline > Graphics Compositor to Window > Rendering > Graphics Compositor.
Graphics: Changed the menu path for the Look Dev window from Window > Render Pipeline > Look Dev to Window > Analysis > Look Dev.
Graphics: Changed the menu path for the Render Graph Viewer from Window > Render Pipeline >* Render Graph Viewer* to Window > Analysis > Render Graph Viewer.
Graphics: Changed the menu path for the Render Pipeline Debug window from Window > Render Pipeline > Render Pipeline Debug to Window > Rendering > Render Pipeline Debugger.
Graphics: DynamicResolutionHandler.GetScaledSize function now clamps, and never allows to return a size greater than its input.
Graphics: Improved IntegrateLDCharlie() to use uniform stratified sampling for faster convergence towards the ground truth.
Graphics: Improved load asset time for probe volumes.
Graphics: Improved quality of RGBM encoded ASTC textures and fallback to ETC2 was removed.
Graphics: Improved the warning messages for Volumes and their Colliders.
Graphics: LensFlare (SRP) tooltips now refer to meters.
Graphics: LensFlare Element editor now has a Thumbnail preview.
Graphics: LWRP package has been deprecated. LWRP package was maintaned with the sole purpose of providing an upgrade path to URP. See URP 2021.2 documentation for notes on how to upgrade LWRP package to 2021.2.
Graphics: Made occlusion Radius for lens flares in directional lights, be independant of the camera's far plane.
Graphics: Modified VirtualTexturing resolver to always resize to the requested width and height.
Graphics: Moved Assets > Create > Shader >Shader Variant Collection to Assets > Create > Shader Variant Collection.
Graphics: Moved menu item "Decal Projector" to GameObject > Decal Projector.
Graphics: Moved menu item "Density Volume" to GameObject > Volume > Density Volume.
Graphics: Moved menu item "Sky and Fog Volume" to GameObject > Volume > Sky and Fog Global Volume.
Graphics: New projects that use the 3D project template now use 1920x1080 as the default resolution for the Standalone build target.
Graphics: New projects that use the 3D project template now use ASTC texture compression for the Android build target.
Graphics: New projects that use the 3D project template now use DXT5nm-style normal maps for Android, iOS, and tvOS build targets.
Graphics: New projects that use the 3D project template now use normal quality lightmaps (RGBM-encoded) for Android, iOS, and tvOS build targets.
Graphics: Removed ability to resize
unity_SpecCubeN_BoxMaxand
unity_SpecCubeN_BoxMinto encompass the bounds of the object itself, if an SRP is active.
Graphics: Removed DYNAMIC_RESOLUTION snippet on lens flare common shader. Its not necessary any more on HDRP, which simplifies the shader.
Graphics: Removed the postprocessing package from the core packages list.
Graphics: Renamed
D32_SFloat_S8_Uintand
S8_Uintto
D32_SFloat_S8_UIntand
S8_UIntin the
IUnityRenderingExtensionsnamespace. Native plug-ins that use the old names need to be updated to use the new name.
Graphics: Restricted DXT/BCn texture compression to textures with multiple-of-four width and height. This ensures the same behavior as the Texture Importer and requires multiple-of-four dimensions when compressing.
Graphics: Restricted NPOT (Non-Power-of-Two size) textures to a single mip level if the device does not fully support NPOT. Note that this restriction only affects WebGL 1 devices, and OpenGLES 2.0-based devices that do not support the OES_texture_npot extension.
Graphics: Serialized the Probe Volume asset as binary to improve footprint on disk and loading speed.
Graphics: Skinned Mesh Renderer GPU skinning job markers are now grouped together in captures, rather than all appearing in the root of the capture, making it easier to navigate.
Graphics: The Volume Gizmo Color is now in Colors > Scene > Volume Gizmo.
Graphics: The RTHandleSystem no longer requires a single number of samples for all MSAA textures. You can now set the number of samples independently for all textures.
Graphics: Updated postprocessing package to 3.1.0.
Graphics: Updated SRP templates to 12.0.0.
Graphics: Updated the icon for IES, LightAnchor and LensFlare.
Graphics: Updated the IMGUI Debugger to always display on top of other windows.
Graphics: Updated the postprocessing package to 3.0.3
Graphics: Volume Gizmo alpha changed from 0.5 to 0.125.
Graphics:
ClearFlag.Depthdoes not implicitely clear stencil anymore. Added
ClearFlag.Stencil.
HDRP: Added a more consistent shading normal calculation for path tracing. This avoids impossible shading/geometric normal combinations. (1323455)
HDRP: Added debug setting to Rendering Debugger Window to list the active XR views.
HDRP: Added Material validator in Rendering Debugger.
HDRP: Added XR single-pass test mode to Rendering Debugger Window.
HDRP: Altered hair to use GGX LTC for area light specular.
HDRP: Augmented debug visualization for probe volumes.
HDRP: Avoid unnecessary RenderGraphBuilder.ReadTexture in the "Set Final Target" pass.
HDRP: Cached the base types of the Volume Manager to improve memory and cpu usage.
HDRP: Changed 'Allow dynamic resolution' from Rendering to Output on the Camera Inspector.
HDRP: Changed custom render callback so when you use it, Global Camera shader constants are pushed automatically.
HDRP: Changed Debug windows name and location. Now located at: Windows -> General -> Rendering Debugger.
HDRP: Changed Density Volume for Local Volumetric Fog.
HDRP: Changed light reset to preserve type.
HDRP: Changed Link FOV to Physical Camera, and enabled the ability to show and hide everything on the Projection Section.
HDRP: Changed normal used in path tracing to create a local light list from the geometric to the smooth shading one.
HDRP: Changed some light unit slider value ranges to better reflect the lighting scenario.
HDRP: Changed the Channel Mixer Volume Component UI to show all the channels.
HDRP: Changed the convergence time of SSGI to 16 frames and the preset value.
HDRP: Changed the HDRP Render Graph to use the new RendererList API for rendering and (optional) pass culling.
HDRP: Changed the menu path for Check Scene Content from Edit > Render Pipeline > HD Render Pipeline > Check Scene Content for Ray Tracing to Edit > Rendering > Check Scene Content for HDRP Ray Tracing.
HDRP: Changed the menu path for Edit > Render Pipeline > HD Render Pipeline > Upgrade from Builtin pipeline > Upgrade Project Materials to High Definition Materials to Edit > Rendering > Materials > Convert All Built-in Materials to HDRP.
HDRP: Changed the menu path for Export HDRP Sky to Image from Edit > Render Pipeline > HD Render Pipeline > Export Sky to Image to Edit > Rendering > Export HDRP Sky to Image.
HDRP: Changed the menu path for Render Selected HDRP Camera to log Exr from Edit > Render Pipeline > HD Render Pipeline > Render Selected Camera to log Exr to Edit > Rendering > Render Selected HDRP Camera to log Exr.
HDRP: Changed the menu path for the HDRP Wizard from Window > Render Pipeline > HD Render Pipeline Wizard to Window > Rendering > HDRP Wizard.
HDRP: Changed the name of FOV Axis to Field of View Axis.
HDRP: Changed the NVIDIA install button to the standard FixMeButton.
HDRP: Changed the property Sorting Priority for the Materials with Transparent Surface type so that it is clamped on the UI from -50 to 50.
HDRP: Changed the resolution of the sky used for camera misses in Path Tracing to match the resolution of the render buffer. (1304114)
HDRP: Changed the storage format of volumetric clouds presets for easier editing.
HDRP: Changed the tooltip for color shadows and semi-transparent shadows. (1307704)
HDRP: Changed where HDRP Global Settings are saved to their own asset (HDRenderPipelineGlobalSettings) and HDRenderPipeline's default asset refers to this new asset.
HDRP: Copied and referenced the default LookDev volume profile in the Asset folder instead of the package folder.
HDRP: Decreased the minimal Fog Distance value in the Density Volume to 0.05.
HDRP: Density Volumes can now take a 3D RenderTexture as mask, the mask can use RGBA format for RGB fog.
HDRP: Disabled specular occlusion for what we consider medium and larger scale ao > 1.25 with a 25cm falloff interval.
HDRP: Disabled TAA jitter while using Frame Debugger.
HDRP: Disabled TAA sharpening on the alpha channel.
HDRP: Display a warning help box when decal atlas is out of size.
HDRP: Displayed an info box and disabled MSAA asset entry when ray tracing is enabled.
HDRP: Fixed a null ref exception when running playmode tests with the Rendering Debugger window opened.
HDRP: Fixed upscaling issue that is exaggerated by DLSS. (1347250)
HDRP: Hybrid duplicated reflection probes set to br ignored during light baking.
HDRP: Improved how the HDRP Wizard handles the Render Pipeline settings. The section Global contains data from the HDRP Settings section and the Render Pipeline Asset property in Project Settings > Graphics. The section Current Quality contains data from the Render Pipeline Asset property in the Quality level that is currently in use.
HDRP: Improved labels for cloud scroll direction and cloud type.
HDRP: Improved lighting models for AxF shader area lights.
HDRP: Improved physically-based Depth of Field with better near defocus blur quality.
HDRP: Improved screen space global illumination.
HDRP: Improved shadow cascade GUI drawing with pixel perfect, hover, and focus functionalities.
HDRP: Improved the area cookie behavior for higher smoothness values to reduce artifacts.
HDRP: Improved the Camera Inspector, new sections and better grouping of fields.
HDRP: Improved the fly through ghosting artifacts in the volumetric clouds.
HDRP: Improved the performance and visual quality of the clamping approach for RTR and RTGI.
HDRP: Improved the RTGI denoising.
HDRP: Improved volumetric clouds (added new noise for erosion, reduced ghosting while flying through, altitude distortion, and ghosting when changing from local to distant clouds, fixed issue in wind distortion along the Z axis).
HDRP: Increased the minimum density of the volumetric clouds.
HDRP: It is now considered a miss when a ray hits the sky in the ray marching part of mixed ray tracing.
HDRP: Made debug panel mip bias functions internal, not public.
HDRP: Made LitTessellation and LayeredLitTessellation fallback on Lit and LayeredLit respectively, in DXR.
HDRP: Made various improvements to the volumetric clouds.
HDRP: Make some volumetric clouds properties additional to reduce the number default parameters. (1357926)
HDRP: Modified the history validation pass so that Unity only performs it once for each frame and not for every effect.
HDRP: Move Rendering Debugger "Windows from Windows->General-> Rendering Debugger windows" to "Windows from Windows->Analysis-> Rendering Debugger windows".
HDRP: Moved Edit/Render Pipeline/HD Render Pipeline/Upgrade from Builtin pipeline/Upgrade Scene Terrains to High Definition Terrains to Edit/Rendering/Materials/Convert Scene Terrains to HDRP Terrains.
HDRP: Moved Edit/Render Pipeline/HD Render Pipeline/Upgrade from Builtin pipeline/Upgrade Selected Materials to High Definition Materials to Edit/Rendering/Materials/Convert Selected Built-in Materials to HDRP.
HDRP: Moved invariants outside of loop to speed up CPU in the light loop code.
HDRP: Moved MaterialHeaderScopes to Core.
HDRP: Moved menu item "C# Custom Pass" to Assets > Create > Rendering > HDRP C# Custom Pass.
HDRP: Moved menu item "C# Post Process Volume" to Assets > Create > Rendering > HDRP C# Post Process Volume.
HDRP: Moved menu item "Custom FullScreen Pass" to Assets > Create > Shader > HDRP Custom FullScreen Pass.
HDRP: Moved menu item "Custom Renderers Pass" to Assets > Create > Shader > HDRP Custom Renderers Pass.
HDRP: Moved menu item "Decal Shader Graph" to Assets > Create > Shader Graph > HDRP > Decal Shader Graph.
HDRP: Moved menu item "Diffusion Profile" to Assets > Create > Rendering > HDRP Diffusion Profile.
HDRP: Moved menu item "Eye Shader Graph" to Assets > Create > Shader Graph > HDRP > Eye Shader Graph.
HDRP: Moved menu item "Eye Shader Graph" to Assets > Create > Shader Graph > HDRP > Hair Shader Graph.
HDRP: Moved menu item "Fabric Shader Graph" to Assets > Create > Shader Graph > HDRP > Decal Fabric Shader Graph.
HDRP: Moved menu item "High Definition Render Pipeline Asset" to Assets > Create > Rendering > HDRP Asset.
HDRP: Moved menu item "Lit Shader Graph" to Assets > Create > Shader Graph > HDRP > Lit.
HDRP: Moved menu item "Post Process Pass" to Assets > Create > Shader > HDRP Post Process.
HDRP: Moved menu item "StackLit Shader Graph" to Assets > Create > Shader Graph > HDRP > StackLit Shader GraphShader Graph.
HDRP: Moved menu item "Unlit Shader Graph" to Assets > Create > Shader Graph > HDRP > Unlit Shader Graph.
HDRP: Moved the Decal Gizmo Color initialization to preferences.
HDRP: Moved the HDRP render graph debug panel content to the Rendering debug panel.
HDRP: Moved the
supportRuntimeDebugDisplayoption from HDRPAsset to HDRPGlobalSettings.
HDRP: Reduced the maximum distance per ray step of volumetric clouds.
HDRP: Refactored platform abstraction code for shader optimization.
HDRP: Removed backplate from rendering of lighting cubemaps.
HDRP: Removed Bilinear and Lanczos upscale filter.
HDRP: Removed redundant checkboxes (Show Inactive Objects and Isolate Selection) from the Emissive Materials tab of the Light Explorer.
HDRP: Removed the MaterialPass option from probe volume Evaluation modes.
HDRP: Removed the option for reflection probes to render SSAO, SSGI, SSR, ray tracing effects, or volumetric reprojection.
HDRP: Renamed the "Link Light Layer" property to "Custom Shadow Layer".
HDRP: Renamed the Cloud Offset to Cloud Map Offset in the volumetric clouds volume component. (1358528)
HDRP: Renamed the Decal Projector to HDRP Decal Projector.
HDRP: Replaced the context menu with a search window when you add a custom pass.
HDRP: Restored the old version of the RendererList structs/API for compatibility.
HDRP: Split up the HDProjectSettings with the new HDUserSettings in UserProject. Now the Wizard working variable should not intefere with the versioning tool. (1330640)
HDRP: Surface ReflectionTypeLoadExceptions in HDUtils.GetRenderPipelineMaterialList(). Without surfacing these exceptions, developers cannot act on any underlying reflection errors in the HDRP assembly.
HDRP: The default black texture to use for mixed reality is now opaque. It's alpha value is now 1 whereas previously it was 0.
HDRP: The depth of field at half or quarter resolution is now computed consistently with the full resolution option. (1335687)
HDRP: The Film Grain effect does not affect the alpha channel now.
HDRP: Updated the HDRP config package so that it is embeded instead of copied locally. The
Packagesfolder is versioned by Collaborate. (1276518)
HDRP: Updated the recursive rendering documentation.
HDRP: Updated the UI for the Frame Settings section: default values in the HDRP Settings section and the Custom Frame Settings property are always editable.
HDRP: Updated Virtual Texturing Resolver to now perform RTHandle resize logic in HDRP instead of in core Unity.
HDRP: Used the new API for updating Reflection Probe state (fixes garbage allocation). (1290521)
HDRP: Visual Environment ambient mode is now Dynamic by default.
iOS: Changed default texture compression format from PVRTC to ASTC.
License: Disabled package entitlement feature.
Mobile: Changed minimum iOS/tvOS version to 12.
Package: (Recorder) Prevent invalid GPU callback data from being written to a frame: this change skips the problematic frame and logs an error message.
Package: Added Sequences (com.unity.sequences) to the Cinematic Studio feature set.
Package: Added the Code Coverage package to the Engineering feature set.
Package: Added the Localization package as pre-release.
Package: Changed the package display name from "Unity Recorder" to "Recorder" in the package manager.
Package: Fixed a wrong label for the WebM codec in the Recorder package.
Package: Made Unity Recorder 3.0.0-pre.1 a Release Candidate package.
Package: Released Localization package 1.0.3
Package: Released version 1.7.1 of the Visual Scripting package
Package: Removed legacy Recorders: MP4, EXR, PNG, WEBM and GIF Animation from the Recorder package.
Package: Update Sequences to 1.0.3.
Package: Updated Cinemachine package to 2.7.3.
Package: Updated Code Coverage package to 1.0.0.
Package: Updated Code Coverage package to v1.0.0-pre.3
Package: Updated Code Coverage package to v1.0.1. This version includes improvements and fixes.
Package: Updated com.unity.cinemachine to 2.7.2.
Package: Updated [email protected].
Package: Updated com.unity.live-capture 1.0.1-pre.465 package to com.unity.live-capture 1.0.1.
Package: Updated com.unity.purchasing to 3.0.0-pre.6.
Package: Updated com.unity.purchasing to 4.0.0. Refer to the package changelog online here:.
Package: Updated com.unity.sequences to 1.0.0.
Package: Updated FBX Exporter package to 4.0.1.
Package: Updated FBX Exporter package to 4.1.0:.
Package: Updated In App Purchasing package to 3.0.1.
Package: Updated package to com.unity.live-capture 1.0.1-pre.465.
Package: Updated Sequences (com.unity.sequences) to version 1.0.0-pre.6.
Package: Updated Sequences to 1.0.2.
Package: Updated the Code Coverage package to v1.0.0-pre.4.
Package: Updated the FBX Exporter package to 4.0.0-pre.4:.
Package: Updated the FBX Exporter package to 4.1.0-pre.2. See FBX Exporter overview.
Package: Updated the Purchasing package to version 3.2.1.
Package: Updated the version of com.unity.cinemachine to 2.8.0-pre.1.
Package: Updated the version of com.unity.purchasing package to 3.1.0
Package: Visual Scripting: Changed NotEquals node in non-scalar mode to be consistent with Equals.
Package:
Com.unity.purchasingupdated to 3.2.2. Please refer to the package changelog online here:.
Package Manager: Changed location of the Git LFS cache enabled by setting the
UPM_ENABLE_GIT_LFS_CACHE's environment variable to always be located under the global cache root, even when the cache root location is customized.
Package Manager: Changed the error and warning box to look like the info box.
Package Manager: Renamed the Import again button to Reimport.
Package Manager: Updated In App Purchasing package to include missing documentation.
Physics: Adjusted anchor position based on anchor/parentAnchor transforms to better fit the expected result.
Profiler: Stability and performance improvements of [email protected]. See more details at.
Profiler: The Unity Profiler now only shows threads that have profiler markers generated since you opened the Profiler.
Scene/Game View: Changed the default shortcut for the Show Overlay menu option to Spacebar.
Scene/Game View: Fixed styling issues with the Overlays feature.
Scene/Game View: Moved Component Tools Overlay to the regular Tools Overlay.
Scene/Game View: Updated the new default shortcut for Toggle overlays to "`".
Scripting: Quaternion ToString() prints five decimal digits by default. (36265)
Scripting: Vector2, Vector3, Vector4, Bounds, Plane, Ray, Ray2D ToString by default prints two decimal digits (up from one). (1205206)
Search: ref:
now only searches results that have a direct dependency on.
Search: Removed the resource Search Provider (res:).
Services: In the In-App Purchasing (IAP) Settings, when IAP package version 2 or less is installed, the “Migrate” button section is no longer available.
Shadergraph: Added borders to inspector items styling, to better differentiate between separate items.
Shadergraph: Adjusted the Blackboard article to clarify multi-select functionality.
Shadergraph: Changed BranchOnInputNode to choose NotConnected branch when generating a preview.
Shadergraph: Changed the "Create Node" action in ShaderGraph stack separator context menu to "Add Block Node" and added it to the main stack context menu.
Shadergraph: Condensed report errors and warnings to a single error for ShaderGraph SubGraphs.
Shadergraph: Improved docs for SampleTexture2D, SampleTexture2DLOD, SampleTexture2DArray, SampleTexture3D, SampleCubemap, SampleReflectedCubemap, TexelSize, NormalFromTexture, ParallaxMapping, ParallaxOcclusionMapping, Triplanar, Sub Graphs, and Custom Function Nodes to reflect changes to texture wire data structures.
Shadergraph: Improved documentation for Swizzle Node.
Shadergraph: Limited max number of inspectable items in the Inspector View to 20 items.
Shadergraph: Modified the the shader permutation variant limit so that only ShaderGraph keywords count towards the limit; SubGraph keywords do not.
Shadergraph: Moved menu item "Blank Shader Graph" to Asset > Create > Shader Graph > Blank Shader Graph.
Shadergraph: Moved menu item "Sub Graph" to Asset > Create > Shader Graph > Sub Graph.
Shadergraph: Moved menu item "VFX Shader Graph" to Asset > Create > Shader Graph > VFX Shader Graph.
Shadergraph: Prevent users from setting enum keywords with duplicate reference names and invalid characters. (1287335)
Shadergraph: Properties and Keywords are no longer separated by type on the blackboard. Categories now allow for any combination of properties and keywords to be grouped together as the user defines.
Shadergraph: Updated Custom Function Node to use new ShaderInclude asset type instead of TextAsset (.hlsl and .cginc softcheck remains).
Shadergraph: Updated/corrected View Direction doc.
Shadergraph: Vector2/Vector3/Vector4 property types will now be properly represented by a matching Vector2/Vector3/Vector4 UI control in the URP + HDRP Material Inspector as opposed to the fallback Vector4 field that was used for any multi-dimensional vector type.
Shaders: Added a shader warning for when reserved constants names with consecutive underscores are used.
Shaders: Increased the global keyword limit to 384.
Shaders: Shader compiler logs are now generated in a project's Logs folder instead of the Library folder.
Shaders: Shader.DisableKeyword, Shader.IsKeywordEnabled and CommandBuffer.DisableKeyword API will no longer create a global keyword if it doesn't exist.
Terrain: Updates the version of Terrain Tools included the Package Manager to 4.0.3. (Previously 4.0.0-pre.2)
Tests: Changed iOS automation code so that it uses
Shell.ExecuteProgramAndGetStdoutfor process handling.
Timeline: Updated the Timeline package version to 1.6.1
Timeline: Updated Timeline package to 1.6.0-pre.3.
Timeline: Updated Timeline package to version 1.6.0-pre.1
UI Toolkit: By default, rendering data of
VisualElementswith an opacity of zero is now generated and remains up-to-date, allowing animation in opacity without causing performance drops.
UI Toolkit: Marked the com.unity.ui package, which is incompatible with 21.2 and above, as deprecated.
UI Toolkit: Optimized some data access for Live Reload feature.
UI Toolkit: Optimized some data access for the Live Reload feature.
UI Toolkit: Removed additional overhead of attaching to panel for Live Reload when the option is turned off to improve performance in loading VisualTreeAssets.
UI Toolkit: URLs in UXML and USS files now support explicit GUID-based asset references. This allows assets referenced by UI assets to be renamed or moved within your project without breaking asset references. The UI Builder saves both UXML and USS files using this format. Note that this URL format is backward-compatible, but the URL query parameters are ignored in older Unity versions.
Universal: Added Depth and DepthNormals passes to particle shaders.: Deprecated GetShadowFade in Shadows.hlsl. Use GetMainLightShadowFade or GetAdditionalLightShadowFade instead.
Universal: DepthNormals passes now sample normal maps if used on the material, otherwise output the geometry normal.
Universal: Enabled subsurface scattering with global illumination for handwritten Universal ST8 shaders.
Universal: Improved shadow cascade GUI drawing with pixel perfect, hover and focus functionalities.
Universal: Material editor now uses the same MaterialHeaderScope as HDRP.
Universal: Modified URP profiling scopes. Remove low impact scopes from the command buffer to improve performance. Fixed the name and invalid scope for the context.submit() scope. Changed the default profiling name of ScriptableRenderPass to Unnamed_ScriptableRenderPass.
Universal: Moved menu item "2D Renderer" to Assets > Create > Rendering > URP 2D Renderer.
Universal: Moved menu item "Forward Renderer" to Assets > Create > Rendering > URP Forward Renderer.
Universal: Moved menu item "Lit Shader Graph" to Asset > Create > Shader Graph > URP > Lit Shader Graph.
Universal: Moved menu item "Pipeline Asset (2D Renderer)" to Assets > Create > Rendering > URP Asset (with 2D Renderer).
Universal: Moved menu item "Pipeline Asset" (Forward Renderer) to Assets > Create > Rendering > URP Asset (with Forward Renderer).
Universal: Moved menu item "Post-process Data" to Assets > Create > Rendering > URP Post-process Data.
Universal: Moved menu item "Renderer Feature" to Assets > Create > Rendering > URP Renderer Feature.
Universal: Moved menu item "Sprite Lit Shader Graph" to Asset > Create > Shader Graph > URP > Sprite Lit Shader Graph.
Universal: Moved menu item "Sprite Unlit Shader Graph" to Asset > Create > Shader Graph > URP > Sprite Unlit Shader Graph.
Universal: Moved menu item "Unlit Shader Graph" to Asset > Create > Shader Graph > URP > Unlit Shader Graph.
Universal: Moved menu item "Upgrade Project Materials to 2D Renderer Materials" to Edit > Rendering > Materials > Convert All Built-in Materials to URP 2D Renderer.
Universal: Moved menu item "Upgrade Project Materials to URP Materials" to Edit > Rendering > Materials > Convert All Built-in Materials to URP.
Universal: Moved menu item "Upgrade Project URP Parametric Lights to Freeform" to Edit > Rendering > Lights > Convert Project URP Parametric Lights to Freeform.
Universal: Moved menu item "Upgrade Scene Materials to 2D Renderer Materials" to Edit > Rendering > Materials > Convert All Built-in Scene Materials to URP 2D Renderer.
Universal: Moved menu item "Upgrade Scene URP Parametric Lights to Freeform" to Edit > Rendering > Lights > Convert Scene URP Parametric Lights to Freeform.
Universal: Moved menu item "Upgrade Selected Materials to URP Materials" to Edit > Rendering > Materials > Convert Selected Built-in Materials to URP.
Universal: Moved menu item "XR System Data" to Assets > Create > Rendering > URP XR System Data.
Universal: Moved the code that evaluates the fog from the vertex shader to the pixel shader. This improves the rendering of fog for big triangles and the fog quality. This can change the look of the fog slightly.
Universal: Opacity as Density blending feature for Terrain Lit Shader is now disabled when the Terrain has more than four Terrain Layers. This is now similar to the Height-blend feature for the Terrain Lit Shader.
Universal: Optimized the Bokeh Depth of Field shader on mobile by using half precision floats.
Universal: Reduced the size of the fragment input struct of the TerrainLitPasses, LitGBufferPass, SimpleLitForwardPass, and SimpleLitGBufferPass lighting shaders.
Universal: Removed unused temporary depth buffers for Depth of Field and Panini Projection.
Universal: Renamed the Forward Renderer asset to the Universal Renderer asset. The Universal Renderer asset contains the Rendering Path property, which you can set to either the Forward Rendering Path or the Deferred Rendering Path.
Universal: Renamed UniversalRenderPipelineCameraEditor to URPCameraEditor.
Universal: Shadow fade now uses border value for calculating shadow fade distance and fall off linearly.
Universal: SSAO Texture is now R8 instead of ARGB32 if supported by the platform.
Universal: The UNITY_Z_0_FAR_FROM_CLIPSPACE macro now remaps the coordinates to the [0, far] range on all platforms consistently. Previously, Unity did not perform the remapping on OpenGL platforms, discarding the range [-near, 0].
URP: Changed 2D Lights to inherit from Light2DBase.
URP: Changed material upgrader to upgrade AnimationClips in projects that have curves bound to renamed material properties.
URP: Changed Pixel Snapping and Upscale Render Texture in the PixelPerfectCamera to a dropdown.
URP: Changed process to stripping shader variants per renderer feature instead of combined renderer features.
URP: Changed the default name when a new urp asset is created.
URP: Changed the opaque pass depth to be copied instead of scheduling a depth prepass when MSAA is enabled and a depth texture is required.
URP: Improved PixelPerfectCamera UI/UX.
URP: Made 2D shadow casting more efficient.
URP: Modified the behavior of setting a camera's Background Type to "Dont Care" on mobile. "Dont Care" now fills the render target with arbitrary data at the beginning of the frame, which might be faster in some situations. Note that there are no guarantees for the exact content of the render target, so projects should only use "Dont care" if they are guaranteed to render to, or otherwise write every pixel every frame.
URP: Moved all 2D APIs out of the experimental namespace.
URP: Refactored some of the array resizing code around decal projector rendering to use new APIs in render core.
URP: UniversalRendererData and ForwardRendererData GUIDs have been reversed so that users coming from 2019LTS, 2020LTS and 2021.1 have a smooth upgrade path, you may encounter issues coming from 2021.2 Alpha/Beta versions and are recommended to start with a fresh library if initial upgrade fails.
URP: URP Asset Inspector - Advanced settings have been reordered under
Show Additional Properties.
Version Control: ### Changed
- Updated license to better conform with expected customer usage.
- Updated documentation file to meet standards.
- Updated third-party usage.
- No longer requires downloading of the full Plastic client. Basic features work without additional installation. Features that require the full Plastic client will allow download and install as needed.
- Usability improvements around checking in code.
- Improved update workspace tab UX.
- Plastic SCM context menu is now available even if the Plastic SCM window is closed.
Version Control: Migration tests
Improved usage analytics around Editor and Plugin version
Workspace Migration Adjustments.
Version Control: Simplified and decluttered UI.
VFX Graph: Allowed the remaking of existing links.
VFX Graph: Moved menu item "Point Cache Bake Tool" to Window > VFX > Utilities > Point Cache Bake Tool.
VFX Graph: Moved menu item "Rebuild And Save All VFX Graphs" to Edit > VFX > Rebuild And Save All Visual Effect Graphs.
VFX Graph: Moved menu item "Visual Effect Defaults" to Assets > Create > VFX > VFX Defaults.
VFX Graph: Moved menu item "Visual Effect Graph" to Assets > Create > VFX > VFX Graph.
VFX Graph: Moved menu item "Visual Effect Graph" to Window > VFX > VFX Graph.
VFX Graph: Moved menu item "Visual Effect Subgraph Block" to Assets > Create > VFX > VFX Subgraph Block.
VFX Graph: Moved menu item "Visual Effect Subgraph Operator" to Assets > Create > VFX > VFX Subgraph Operator.
VFX Graph: Property Binder : Handle Remove Component removing linked hidden scriptable objectfields.
VFX Graph: Property Binder : Prevent multiple VFXPropertyBinder within the same game object.
VFX Graph: Sphere and Cube outputs are now experimental..
WebGL: Eliminated the Python dependency from the Brotli compressor.
XR: The Oculus XR Plugin package has been updated to 1.9.0.
XR: Updated OpenXR Package to 1.2.8.
XR: Updated the Oculus XR Plugin package to 1.10.0.
XR: Updated the Oculus XR Plugin package to 1.7.0.
XR: Updated the version of Oculus XR Plugin package to 1.9.1.
XR: Updated verified Windows Mixed Reality package to version 5.2.0.
XR: Updated Windows MR XR SDK Plug-in to 5.1.0.
XR: Updated XR Plug-in Management to 3.2.17.: Call Tilemap.tilemapChanged callback when Tilemap component is reset or ResizeBounds is called. (1304936): Fix Sprite Preview in inspector becomes unrecognizable when Sprite size is big (1299189)
2D: Fixed 2D Animation manual documentation.
2D: Fixed 2D Animation package description.
2D: Fixed 2D PSDImporter doesn't apply settings from Sprite Editor Window when changes made in Inspector. (1339799)
2D: Fixed 2D PSDImporter package description.
2D: Fixed an issue where Name and Texture fields were overlapping with each other in the Secondary Textures module of the Sprite Editor. (1284356)
2D: Fixed Bone and Sprite influence lists to display correctly. (1349041)
2D: Fixed duplication of Tilemap Selection Box when the Grid and the Tilemap are offset in transform. (1293341)
2D: Fixed exception thrown when manually adding vertices in the Skinning Editor to a Sprite without mesh. (1340105)
2D: Fixed exception when adding a new Rule when no Rule is selected.
2D: Fixed extrusion of CompositeCollider2D when an offset distance has been set. (1328999)
2D: Fixed GridSelection on a Tile Palette losing its target when the Tile Palette is saved. (1327582)
2D: Fixed initial rendering animated tiles when a CompleteObjectUndo is registered for a Tilemap while in Play mode.
2D: Fixed issue when the size of a GridSelection is set to negative values. (1318891)
2D: Fixed issue where Tilemap does not preserve transform changes or color when inserting or deleting cells. (1315084)
2D: Fixed issue with setting a Spritesheet with padding between Sprites on a Tile Palette having a positional offset when there should not be one.
2D: Fixed issue with sprite mask debug color when sprite renderers are batched. (1328538)
2D: Fixed mouse position calculation after SceneView overlay changes.
2D: Fixed MouseDrag including previous mouse positions from initial drag.
2D: Fixed MouseDrag not including final mouse position after drag.
2D: Fixed NullReferenceException from being thrown when doing a Grid Select on a Grid which is not enabled. (1295122)
2D: Fixed offset placement of Tile placed when dragging in a single Sprite or Tiles onto the Tile Palette window.
2D: Fixed on deselecting game object from the Inspector window leads to deselecting Sprite Shape Renderer. (1317728)
2D: Fixed Paint tool triggering a Tile Palette edit when Paint tool is active and is removed from the Tile Palette default tools.
2D: Fixed performance regression in PSDImporter Editor. (1349148)
2D: Fixed potential Sprite reference lost when upgrading from 2021.1. (1358979)
2D: Fixed SpriteRect and Name File Id does not match in meta file. (1319819)
2D: Make tooltips appear closer to the label for Tilemap Info in the Tilemap Editor rather than in the center. (1294929)
2D: Mark com.unity.2d.tilemap.extras as discoverable
2D: Prevent Tile Palette Prefabs from showing as a Active Target for the Tile Palette window when selected.
2D: Prevented users from selecting a disabled GameObject as an active target for the Tile Palette. (1327021)
2D: Removes GC.Alloc when Tilemap.HasSyncTileCallback is called which is internally called for each SetTile/s.
2D: Sprite Atlas importer does not show name on top (1300861)
2D: Sprite.texture is null when it's loaded from SpriteAtlas in an AssetBundle and the Play mode is entered from the Prefab mode. (1345723)
2D: Swapped behavior of rotating clockwise and counter-clockwise.
2D: Unable to exclude Objects for Packing property from Sprite Atlas preset (1294393)
AI: Fixed a crash when exiting play mode while a NavMesh asynchronous update call is being scheduled. (1297742)
AI: Fixed crashes from building from meshes larger than the allowed size threshold. (1298356)
AI: Fixed issue where the NavMeshModifierBox did not override the area type with existing higher index. (1078153)
AI: Fixed the gizmos of navigation OffMeshLinks when the distance from start to end is small. (805223)
AI: Improved NavMeshAgent creation failure log, to help select the source object (1274983)
Android: Added a new AndroidDevice.hardwareType property, which is set to AndroidHardwareType.ChromeOS if running on a Chrome OS device. This is helpful if an app needs to run Chrome OS-specific code.
Android: Added a warning if making an IL2CPP Android build without Arm64 binaries (1318322)
Android: Added fullscreen flag to manifest to better handle static splash screen. (1310347)
Android: Android: Fix lightmap quality warning text in PlayerSettings. (1337631)
Android: Bumped Android Logcat package version to 1.2.2.
Android: Clamp Android minimum bundle version to greater than 0 (1307476)
Android: Disable cut/copy/paste popup that was appearing on the hidden Android inputfield (1317688)
Android: Fix high memory usage for textures when uploading textures at runtime using Vulkan (1300900)
Android: Fix Java local reference leaking in UnityWebRequest, VideoPlayer. (1297185)
Android: Fix rendering errors when trying to use Particle Systems with instancing on devices that don't support it (1312433)
Android: Fixed Android build failures due to unsupported manifest features when targeting API 23 or below. (1340517)
Android: Fixed artifacts when exceeding geometry working set memory limit on Mali GPUs when using Vulkan GraphicsJobs.
Android: Fixed Build&Run when apk name contains duoble quote. (1323395)
Android: Fixed compatibility with OpenGL ES shaders in asset bundles built with Unity 2018.x or older. (1329702)
Android: Fixed computeBufferStartIndex of ComputeBuffer.GetData being ignored when using Vulkan. (1299902)
Android: Fixed ComputeGrabScreenPos and ComputeScreenPos when using Vulkan "Apply display rotation during rendering". (1340975)
Android: Fixed crash during shutdown on Adreno devices when using Vulkan. (1330396)
Android: Fixed crash when using R16 UNorm and similar formats with Vulkan on devices that don't support it. (1314282)
Android: Fixed incorrect resolution scaling on PowerVR devices when BlitType Auto is used (1287131)
Android: Fixed Patch not working on some newer Android devices due to permission issue. (1343844)
Android: Fixed runtime decompression of ASTC HDR cubemaps on devices that don't support ASTC HDR. (1323739)
Android: Fixed screen safe area values at startup. (1327752)
Android: Fixed shader compile error when signed bitfieldExtract is generated for ES 3.0 shader target. (1327731)
Android: Fixed shaders with bitfield operations compilation errors on Adreno3XX GPUs.
Android: Il2cpp resources will be extracted during player launch only when needed, for ex., changes in scripts. Previously they would be extracted each time you make a new build from Unity.
Android: Preserve ComputeBuffer data when doing partial updates using ComputeBuffer.SetData (1300424)
Android: Resolved an Android build failure when the Target SDK was set to below 24. (1340438)
Android: Resolved an issue that prevented features such as tessellation and geometry shaders from being marked as supported on Android devices whose driver supports OpenGL ES 3.1 with AEP but not 3.2.
Android: Update Android Logcat package to version 1.2.1
Android: Updated Kotlin version to fix potential compatibility problems in Android Studio. (1325245)
Animation: 1D BlendTree's threshold values were draggable when not the hot control. (1217253)
Animation: Added a tooltip for the auto live link button in the animator window. (1283065)
Animation: Added checks to prevent and capture crash for the GetRootBlendTreeChildWeights function. (1282475)
Animation: Added option to set the single layer optimization for AnimationLayerMixerPlayable that is enable by default in previous version. (1159548)
Animation: Fixed a bug where the .controller file would grow in size even after undoing states. (1194086)
Animation: Fixed a bug where the parameters list being previewed would not display in the inspector window. (1190190)
Animation: Fixed AddAssetToSameFile assert thrown on adding SMB on unpersisted AnimatorState or AnimatorStatemachine. (1233556)
Animation: Fixed an animation performance test failing on specific device (iOS/Android) (1307702)
Animation: Fixed an issue when trying to record elements from array where the index elements were 2 or 4. (1242410)
Animation: Fixed an issue where an animation playable events would still fire while it was paused. (1227098) stabilize feet would not get saved upon entering playmode. (1245722)
Animation: Fixed an issue where the transition to a base layer state machine would be invisible (1287749)
Animation: Fixed an issue where the transition would automatically disappear if made from a lower layer state machine to an upper layer one. (1188984)
Animation: Fixed an issue where the Vector property of the material component would not have a blue tint to highlight it was being animated when in preview mode. (1333416)
Animation: Fixed an issue where warnings would appear while typing the first numbers of the time in blend tree before confirming the value. (1250904)
Animation: Fixed animation curve editor swapping unintentionally when editing curves in two different inspectors. (1308938)
Animation: Fixed animation events to fire correctly when overriding the loop in a AnimationClipPlayable. (1292994)
Animation: Fixed animation transition preview playback marker to update correctly when window is floating and animation is paused. (1285405)
Animation: Fixed Animator MatchTarget to work correctly with longer time. (1052600)
Animation: Fixed BlendTree graph where nodes switch positon when play is pressed. (1306710) edge highlighting logic with livelink in mecanim with edge cases involving Any State and Entry nodes. (1171704)
Animation: Fixed for disappearing Animator State Machine information. (1307535)
Animation: Fixed GetLayerWeight function in Animator to always return 1 if getting the base layer weight. (1315029)
Animation: Fixed human pose offset in Animation C# Job when root node is scaled. (1266529)
Animation: Fixed human pose with missing bones shifting when used in an Animation C# Job. (1214897)
Animation: Fixed humanoid SetLookAtWeight method for weights larger than 0.5. (1307253)
Animation: Fixed manipulation of the Current Blend Value (the red line)in BlendTree Inspector.
Animation: Fixed nan appearing in AABB when root motion is enabled in a StateMachineBehaviour by initializing MotionXReference structure upon allocation. (1279206)
Animation: Fixed ScaleConstraint on child with parent having a nulled scaled axis. (1243185)
Animation: Fixed slow performance depend on the first selected. (1236353)
Animation: Fixed static analysis warning. (1232341)
Animation: Fixed use of PropertyStreamHandle with Addressable AnimatorController. (1341031) invalid error messages and display correct inspector when viewing a state with invalid StateMachineBehaviours. (1319708) error being logged when accessing an archive file that was modified while it was still opened. (1319389)
Asset Bundles: Fixed issue where Caching.IsVersionCached returns false when loading a previously cached bundle. (1186310)
Asset Bundles: Fixed issue where loading an asset from a bundle asynchronously while loading a texture synchronously causes a deadlock on the main thread.
Asset Import: Adding a ScriptedImporter attribute to a non-ScriptedImporter class no longer crashes the editor. (1308671)
Asset Import: Apply/Revert buttons in the inspector are correctly disabled after changing a value that is being overridden by the Importer script or an AssetPostprocessor. (1287345)
Asset Import: Assembly Ref / Definition files now have padding. (1311970)
Asset Import: Changing player settings Graphics APIs while editor is in Android platform no longer reimports all textures, video clips or fonts. (1329621)
Asset Import: Editing the animation clip directly from the sub-asset on first import is no longer broken. (1304418)
Asset Import: Fixed crash when importing FOV animations from 3DsMax. (1324054)
Asset Import: Fixed crash/corruption when importing animations.
Asset Import: Fixed missing normal property values from materials imported from 3DsMax 2021's Physical materials. (1313450)
Asset Import: Fixed Texture Import Platform settings getting reset when multi editing.
Asset Import: GameObjects & Prefabs can no longer be duplicated using Ctrl + D. (1304106)
Asset Import: GatherDependenciesFromSourceFile declared in parent classes is now properly called from derived classes. (1203843)
Asset Import: ModelImporter now only renames sibling nodes with duplicate names. (1233702)
Asset Import: New flag to allow rigs with different topologies to be swapped. (974120)
Asset Import: Only call frame rate errors when animations are imported. (1222562)
Asset Import: Rename of Inspector labels to make them more consistent.
Asset Import: SearchAndRemap now functions as expected in packages. (1218857)
Asset Import: Switching Texture Importer tabs does not dirty the importer. (1321256)
Asset Import: The Default Clip selection no longer gets stuck. (1279563)
Asset Import: Updated Log Warning to include name / object reference (1304432)
Asset Pipeline: All domain reloads are now done inside asset db. This fixes problem with reloading of asset objects when doing manual refresh. (1341910)
Asset Pipeline: Asset loading is safe in this callback. (1267939)
Asset Pipeline: Enabled PluginSettingsWorks.WSASettings integration test (1086909)
Asset Pipeline: Fix for crash that could happen after safe mode exit
Asset Pipeline: Fixed a crash that could occur when opening a project with a meta file conflict. (1310334)
Asset Pipeline: Fixed a very rare bug causing directory monitor not pick up all the changes that happened before a Refresh.
Asset Pipeline: Fixed an assert when fetching previews for assets in AssetBundles. (1311115)
Asset Pipeline: Fixed an issue where a scene could become corrupt if renamed to match the name of a recently deleted scene. (1263621)
Asset Pipeline: Fixed an issue where AssetDatabase.SaveAssetIfDirty() wouldn't save the asset if a sub-object was dirty, but the main object wasn't. (1341834)
Asset Pipeline: Fixed an issue where renaming an asset in the Project Browser could cause the selection highlight to disappear. (1351301)
Asset Pipeline: Fixed issue where an invalid GUID was being reported, but the file in which it resided was not. (1275878)
Asset Pipeline: Fixed issue with asset reference getting lost, if asset is modified and domain reload is done in the same refresh. (1357812)
Asset Pipeline: Fixed issue with incorrect progress bar text during startup. (1339167)
Asset Pipeline: Fixed issue with missing domain reload when entering play mode and LockReloadAssemblies is set. (1367222)
Asset Pipeline: Fixed issue with some FBX models being imported with a scale of 0 when 'Remove Constant Scale Curves' is enabled. (1348264)
Asset Pipeline: Fixed missing automatic scale down or import workers in order to not use excess system resources. (1343401)
Asset Pipeline: Fixed problem where artifact dependency could get ignored. (1318602)
Asset Pipeline: Fixed the progress bar being full during the import of assets. (1298760)
Asset Pipeline: Fixed to script type dependency hash generation. The issues could cause unnecessary imports and in some cases missing reimports. (1295635)
Asset Pipeline: Fixed various issues relating to assets not being correctly unloaded during AssetDatabase.Refresh(). (1186177, 1213291, 1255803, 1299716)
Asset Pipeline: Improved performance of flushing the preload operation queue from the main thread. This can occur when accessing an operation's result on the main thread before it is completed.
Asset Pipeline: InitializeOnLoad method shouldn't be used for asset operations, because InitializeOnLoad is called before asset importing is completed. (1279619)74994)
Asset Pipeline: PostProcessAllAssets callback now supports all asset db operations. (1144276)
Asset Pipeline: Preview of material is now correctly regenerated when shader changes (1298200)
Asset Pipeline: Previews are now correct for a prefeab when assets referenced by the prefab (like texture) changes (1284853)
Asset Pipeline: Updated reload tests to cover async domain reload.
Asset Pipeline: Using the AssetDatabase.CreateAsset() API to create an asset from a TextAsset object where the file type specified is not a native Unity format such as .ASSET will now report an error about incorrect usage of CreateAsset. (1241343)
Asset Pipeline: When asset object is reloaded, it is now reset before loaded with new values. This fixes problem with fields with default values not being set to default value, if field is removed from assets. (1337405)
Audio: (OSX) Sound effects in Audio Mixer were not always selected when clicked. (1124032)
Audio: AudioClip reference was lost when loading a new Scene even if AudioSource was set to DontDestroyOnLoad. (1314527)
Audio: Fixed AudioClip reference being lost when loading a new Scene even if AudioSource is set to DontDestroyOnLoad.
Audio: Fixed deadlock caused by interaction between output suspend/resume logic and DSPGraph output hooks.
Audio: Fixed DSPGraph playback not pausing when player is paused.
Audio: Fixed editor crash when undoing after reordering snapshots in the audio mixer.
Fixed exception when deleting snapshots. (1324578)
Audio: Fixed microphone API not working when automatic output device suspension was active. (1318560)
Audio: SoundManager optimizations for lowering main thread performance degradations caused by having a large amount of loaded audio clips in a scene. (1146312)
Audio: Topological changes such as adding/removing/moving effects in the audio mixer resulted in glitches and, depending on mixer configuration, loud bursts. (666910)
Bug Reporter: Fixed a bug where the crash reporting symbol uploader process would crash on parsing certain dSYM files.
Bug Reporter: Improved failing filename error message to make it reflect the source of the problem better. (1298484)
Bug Reporter: Multiple Qt library copies are no longer included.
Bug Reporter: Reset Bug Reporter style to match Windows styling (1296042)
Build Pipeline: Added an API to gather the lighting and fog modes used by the active scene. (1293228)
Build Pipeline: Added build target Dedicated Server. System: Fixed a problem with the detection of Microsoft.VCLibs SDK extension for UWP builds.
Burst: Added PreserveAttribute to prevent the internal log from being stripped in il2cpp builds.
Burst: Added PreserveAttribute to prevent the internal log from being stripped in il2cpp builds.
Burst: Broken link restored for known issues with debugging and profiling.
Burst: Broken link restored for known issues with debugging and profiling.
Burst: clang segmentation fault on iOS when member function debug information was emitted, it is disabled for this platform now.
Burst: Clang segmentation fault on iOS when member function debug information was emitted, it is disabled for this platform now.
Burst: Corrected 'Enable safety checks tooltip`.
Burst: Corrected 'Enable safety checks tooltip`.
Burst: Crash when extracting sequence point information for error reporting/debug information generation.
Burst: Direct Call extension methods that only differ on argument types are now supported (previously Burst's
AssemblyLoaderwould complain about multiple matches).
Burst: Dots runtime function pointer transform has been simplified, making it less brittle and fixing some bad IL generation.
Burst: Dots runtime function pointer transform has been simplified, making it less brittle and fixing some bad IL generation.
Burst: Fixed a bug in LLVM that it would incorrectly convert some memset -> memcpy if both pointers derived from the same memory address, and where one indexed into the 0th element of the pointer.
Burst: Fixed a bug that occurred when an explicitly laid out struct was used by a dup instruction, which caused an internal compiler error.
Burst: Fixed a bug that occurred when an explicitly laid out struct was used by a
dupinstruction, which caused an internal compiler error.
Burst: Fixed a bug where eager-compilation could pick up out-of-date global Burst menu options for compiling.
Burst: Fixed a bug where eager-compilation could pick up out-of-date global Burst menu options for compiling.
Burst: Fixed a bug where explicitly casting from an int to IntPtr would not sign extend the value.
Burst: Fixed a bug where explicitly casting from an int to
IntPtrwould not sign extend the value.
Burst: Fixed a bug where having any [DllImport] in a class that used the Direct Call mechanism could result in an illegal CompileFunctionPointer call being produced by our post processor.
Burst: Fixed a bug where having any
[DllImport]in a class that used the Direct Call mechanism could result in an illegal
CompileFunctionPointercall being produced by our post processor.
Burst: Fixed a bug where if a user had defined multiple implicit or explicit casts, the compiler could resolve to the wrong cast. a bug where loading from a vector within a struct, that was got from a
NativeArrayusing an indexer, would cause the compiler to crash.
Burst: Fixed a bug where the Burst post-processing for direct call would cause duplicate function pointers to be compiled, wasting compile time in the editor and caused an Editor launch stall.
Burst: Fixed a bug where the Burst post-processing for direct call would cause duplicate function pointers to be compiled, wasting compile time in the editor and caused an Editor launch stall.
Burst: Fixed a bug where the multi-CPU dispatcher (used for player builds targetting multiple CPU architectures) could end up generating invalid instructions.
Burst: Fixed a bug where the progress bar would report double the amount of pending compile jobs if a user changed the Burst options while background compilation was going on.
Burst: Fixed a bug where the progress bar would report double the amount of pending compile jobs if a user changed the Burst options while background compilation was going on.
Burst: Fixed a bug whereby sometimes some LLVM intrinsics could be incorrectly marked as unused causing invalid codegen with calls to math.acos.
Burst: Fixed a bug with using multiple
IsXXXSupportedintrinsics in the same boolean condition would fail.
Burst: Fixed a minor debug information bug where built-in types with methods (like
System.Int32) would generate incorrect debug information.
Burst: Fixed a possible DivideByZeroException due to race condition in TermInfoDriver initialization code.
Burst: Fixed a regression where managed static fields, in static constructors that would also be compiled with Burst, could cause a compile time failure for mixing managed and unmanaged state. alignment issues associated with xxHash3 on ArmV7 (case 1288992).
Burst: Fixed an issue where Burst would erroneously error on BurstCompile.CompileFunctionPointer calls when building for the DOTS Runtime.
Burst: Fixed an issue where Burst would erroneously error on
BurstCompile.CompileFunctionPointercalls when building for the DOTS Runtime.
Burst: Fixed an issue where if a user used a math function (like
cos,
sin, etc) then LLVM would preserve both the scalar and vector implementations even if they were trivially dead, causing us to inject otherwise dead functions into the resulting binary.
Burst: Fixed Burst's handling of stack-recovery, in the editor, on Apple Silicon hardware. (1345235)
Burst: Fixed compilation errors when targeting Arm CPUs and using some of the Intel intrinsics
Burst: Fixed compilation errors when targeting Arm CPUs and using some of the Intel intrinsics.
Burst: Fixed compilation errors when targeting Intel CPUs and using some of the Arm Neon intrinsics
Burst: Fixed compilation errors when targeting Intel CPUs and using some of the Arm Neon intrinsics.
Burst: Fixed crashes on 32 bit windows when calling function pointers from managed code and using IL2CPP.
Burst: Fixed crashes on 32 bit windows when calling function pointers from managed code and using IL2CPP.
Burst: Fixed DOTS Runtime JobProducer Bursting code to support JobProducers with multiple generic arguments, complex job wrapper and generic jobs.
Burst: Fixed managed implementation of sub_ss intrinsic
Burst: Fixed managed implementation of sub_ss intrinsic.
Burst: Fixed managed implementations of blend_epi32 and mm256_blend_epi32 intrinsics on Mono
Burst: Fixed managed implementations of blend_epi32 and mm256_blend_epi32 intrinsics on Mono.
Burst: Fixed multi-CPU dispatcher (used for player builds targetting multiple CPU architectures) generating invalid instructions.
Burst: Fixed namespace issue triggering a warning in the editor.
Burst: Fixed some intrinsics not checking target CPU against required CPU, so it was possible to use some intrinsics without an IsXXXSupported check.
Burst: Fixed some intrinsics not checking target CPU against required CPU, so it was possible to use some intrinsics without an IsXXXSupported check.
Burst: Fixed the 1.5 restriction that Direct Call methods can only be called from the main thread, now they work when called from any thread.
Burst: Fixes DOTS Runtime JobProducer Bursting code to support JobProducers with multiple generic arguments, complex job wrapper and generic jobs.
Burst: Function calls using in modifiers on blittable structs where being treated as non blittable.
Burst: Gracefully handle failing to find a particular assembly in the ILPP to prevent an ICE.
Burst: IL Function Pointer Invoke Transformation now uses correct runtime library for dots runtime.
Burst: IL Function Pointer Invoke Transformation now uses correct runtime library for dots runtime.
Burst: IL Function Pointer Invoke Transformation updated to handle transforms that affect instructions that are the destination of a branch.
Burst: IL Function Pointer Invoke Transformation updated to handle transforms that affect instructions that are the destination of a branch.
Burst: Internal Compiler Error if a call was discarded (via BurstDiscard for example), but the callsites required an ABI transform e.g. struct return.
Burst: Internal Compiler Error if a call was discarded (via BurstDiscard for example), but the callsites required an ABI transform e.g. struct return.
Burst: Intrinsics: Neon - fixed vget_low and vget_high producing suboptimal code
Burst: Intrinsics: Neon - fixed vget_low and vget_high producing suboptimal code.
Burst: Made
math.shufflecompile correctly when non-constant
ShuffleComponent's are used.
Burst: Multiple bugfixes (please look at for a detailed list).
Burst: PDB debug information for instance methods that also used struct return were incorrect.
Burst: PDB debug information for instance methods that also used struct return were incorrect.
Burst: Private [BurstCompile] methods no longer throw MethodAccessException
Burst: Private
[BurstCompile]methods no longer throw
MethodAccessException.
Burst: Revert to internal linkage for Android X86 (32bit) to ensure ABI compliance.
Burst: String interpolation issues when using Dots / Tiny runtime.
Burst: String interpolation issues when using Dots / Tiny runtime.
Burst: Strings can now be passed between methods.
Burst: The Direct Call injected delegate now has a unique suffix to avoid type-name clashes.
Burst: When generating Line Table only debug information, an unreachable could occur due to a missing check.
Burst: When generating Line Table only debug information, an unreachable could occur due to a missing check..
Core: Fixed issue where Profiler/Memory Profiler cannot be connected to Standalone build when Run in Background is disabled. (1355728)
Documentation: Changed the documentation for HorizontalLayout and VerticalLayout. (1260855)
Documentation: Fixed html bug in TestRunnerApi API code snippet (DS-1973).
Fix typo bug in PreBuildSetup code example (DS-1974).
Fix incorrect syntax in command line reference (DS-1971).
Documentation: Fixed incorrect documentation for SaveCurrentModifiedScenesIfUserWantsTo. (1170364)
Documentation: Fixed missing function signatures from RayTracingAccelerationStructure.AddInstance C# API in 2021.2 documentation.
DX12: DX12 Standalone Player crashes at startup when using 32-bit player support. (1315964)
DX12: Fix int shader uniforms in .raytrace shaders being displayed as Floats in the Frame Debugger. (1305552)
DX12: Fixed flickering issue on mesh particles. (1357667)
DX12: Fixed wrong error message saying that vertex format SNorm16 is not supported when building a Ray Tracing Acceleration Structure. The format is supported.
DX12: Significant performance cost of using SRP batcher on DX12 reduced. (1286694)
Editor: (Dynamic Hints) Fixed: NullReferenceException when a prefab with a missing script is hovered in the ProjectBrowser
Editor: A warning is now displayed when modifying the enable analytics preference, informing the user that it will require a restart of the editor. (1307652) playerGraphicsAPI TestSettings parameter.
Editor: Added support for dragging across delayed UI fields to change variables. (1263630)
Editor: Added support for
GameCoreXboxOneand
GameCoreXboxSeriesreduced location path length.
Editor: Added tooltips in the Scene template Inspector. (1324637)
Editor: Added tooltips to the buttons of the Simulator view Control Bar. (1288711)
Editor: Allow hierarchy search to find scripts which share names of internal types. (1252479)
Editor: Allow multiple Unity versions to display in the "Open With..." menu and dialog. Allow the user to choose one as the default. (1202338)
Editor: Close add ratio window after selecting from aspect ratio menu. (1284690)
Editor: ColorUsageAttribute is now respected when the inspector window is in debug mode. (1312714)
Editor: Create default index when opening the index manager if it was never created before.
Editor: Custom editors that live in a Unity package will now be used only is a user defined custom editor is not found. (1300193)
Editor: Default PropertyDrawer.OnGUI no longer renders multiple overlapping labels. (1335958)
Editor: Deleting search query from project browser won't break the search window (1336787)
Editor: Disabled preset for ThemeStyleSheet. (1298540)
Editor: Display the menu item name when its execution time is longer than the user wait threshold (i.e. 3 seconds) (1313062)
Editor: Displayed a warning when the min and max values are equal for the Slider. (1328583): Favorite star is always visible for favorited items (1336789)
Editor: Fix arrow key functionality in dialogs in the Mac Editor. (1279832)
Editor: Fix cursor hide in Linux playmode. (1350956) touch input in the gameview is processed. (1258785)
Editor: Fix styling of selected search query if hovering (1336784)
Editor: Fix that Avatar Stage editing closes on clicking anywhere in the Scene view or Hierarchy when using 2 Inspector windows. (1330120)
Editor: Fixed "Cannot get non-existing progress id" error appearing in the Console when entering Play mode. (1312446)
Editor: Fixed a bug where test filter would match project or player path (DSTP-412).
Editor: Fixed a crash that occurred when using the Memory Profiler to capture memory use for very large scenes. (1316870)
Editor: Fixed a memory leak while using SerializedObjects in the AssetImporter inspectors. (1232758)
Editor: Fixed a regression in where users could no longer assign a Render Texture to the light cookie widget in the UI. (1355504)
Editor: Fixed add an extra null check for monitor enumeration. (1320164)
Editor: Fixed an edge case where removing and re-adding a sub asset would cause the local file id of the object to change unnecessarily. (1323357)
Editor: Fixed an error is thrown after re-building a library of previous Editor version project when the Profiler of 2020.2 project is opened. (1273439)
Editor: Fixed an issue in macOS where popup buttons would show their popup far from the button if the button was near the bottom of the screen. (1323332)
Editor: Fixed an issue to avoid difference in Width and Height for EditorGUI.RectIntField fields compared to other fields in the Transform section. (1297283)
Editor: Fixed an issue to avoid MinMaxSlider disappears for UI Slider. (1323384)
Editor: Fixed an issue to avoid typing or pasting unlimited characters in Project and Console search fields. (1331001)
Editor: Fixed an issue to avoid warning log when selected sub-asset with an empty name. (1333540)
Editor: Fixed an issue to display checkmark next to "Everything" in drop-down for "Culling Mask" property value (1299181)
Editor: Fixed an issue to display Color32 Picker Context Menu at right position on right click. (1334328)
Editor: Fixed an issue to display Normalmap Encoding PlayerSetting only in supported platforms. (1330505)
Editor: Fixed an issue to display proper LayerMask properties value on selection. (1308984) to set top bit flag of an uint enum with inspector. (1298289)
Editor: Fixed an issue to stop clearing Asset's name property when resetting it via the Inspector.
Editor: Fixed an issue to stop sharing Player Settings properties between Player Settings window and Serialized Preset. (1263069)
Editor: Fixed an issue where an empty column is expanded when detaching/attaching UISystemPreviewWindow in the Profiler. (12412 Default Text preset is not applied when creating a new Text object. (1328458)
Editor: Fixed an issue where null reference exceptions can be thrown when opening a URP project. (1310784)
Editor: Fixed an issue where out argument out of range exceptions are thrown when deleting Japanese characters in the input field. (1201105)
Editor: Fixed an issue where Shift-Delete does not delete the property for Object field. (1286390)
Editor: Fixed an issue where the Assembly definition asset does not save after an apply action on import setting pop up. (1309567)
Editor: Fixed an issue where the mouse cursor over the text field's cancel button is displayed as text instead of arrow and the cursor flickers when mouse is hovered over the cancel button. (1314173, 1314177)
Editor: Fixed an issue where warning appears when Scrollbar Navigation is set to Vertical and Direction is set to "Top To Bottom" (1245473)
Editor: Fixed an issue where warnings are thrown in the console when the layout is set to default while in play mode. (1317240) assets not getting moved when there's a folder of the same name in the selection. (1318098)
Editor: Fixed color picker keeps updating color preview when the EyeDropper is used and Esc key is pressed. (1291991)
Editor: Fixed console window fails to repaint unless hovered over if it had been maximized before. (1300081)
Editor: Fixed crash when adding a component to an object fails and prompts a modal dialog. (1348654)
Editor: Fixed Ctrl-click on macOS editor not bringing up "Properties..." context menu on inspector object reference fields properly. (1316779)
Editor: Fixed cursor flickering from double arrow to single arrow over splitter on Mac and Windows. (1295344)
Editor: Fixed cursor locking on Windows when the cursor is on a non-primary display. (1282412)
Editor: Fixed debug assert message in MenuControllerLinux.cpp's OnSizeAllocate() call to GetGtkWindowSize(). (1319050)
Editor: Fixed dragging horizontally along the last sibling in the Hierachy and other TreeViews to specify an alternative parent and sibling for the dragged items. (1294910)
Editor: Fixed empty reason on passed tests results xml (DSTR-63).
Editor: Fixed error is thrown on performing undo operation on a gameobject after adding 'New Script' component. (1312440)
Editor: Fixed failure to load window layout when Editor tries to create new asset from SettingsProvider callback at startup. (1322299)
Editor: Fixed File->Open Recent Scene menu entries not working correctly after upgrading project from versions earlier than 2021.2.0a5. (1338322)
Editor: Fixed floating windowing jumping desktop spaces when using cmd + tab to refocus the editor on mac. (1298279)
Editor: Fixed for Canvas Group Interactable flag being applied to the GameObject even when the Canvas Group component is disabled. (1324097)
Editor: Fixed gameview not responding to some input when the mouse is over another window in the macOS editor. (1358134)
Editor: Fixed gradient swatches were not refreshed after undoing preset change. (1261595)
Editor: Fixed GUIToScreenpoint being inconsistent between play mode and standalone. (1305557)
Editor: Fixed hierarchy window top Scene header foldout not visible when scrolled. (1298679)
Editor: Fixed incorrect bounds when LineRenderer GameObject was not enabled and point editing mode was activated. (1288693)
Editor: Fixed infinite layout error loop when Editor UI is broken. (1327876)
Editor: Fixed instancing being ignored in the Shadow Pass when using the Mobile Diffuse shader. (1318675)
Editor: Fixed issue when
.suffix was applied to BuildTargets without extension.
Editor: Fixed issue where the Intensity parameter of a Default Light Preset is not applied after creating Directional Light Game object. (1199933)
Editor: Fixed issue with CRTL Drag not working when a single item is selected in project browser one-column layout. (1222445)
Editor: Fixed issue with Reference Icon overlapping with Preset Manager text on decreasing the width of the Project Settings window. (1282739)
Editor: Fixed issue with Stack trace input Field being Misaligned on resizing Player setting Preset. (1276715)
Editor: Fixed keycode for new input system on Linux to reflect hardware keycode/physical key location. (1343619)
Editor: Fixed missing comma in the manifest file used by the guardian tool.
Editor: Fixed NullReferenceException error is thrown when pressing up/down arrow key in the Project's search bar while in Play Mode. (1318065)
Editor: Fixed NullReferenceException when trying to open Object Picker for ScriptableObject variable. (1293117)
Editor: Fixed Profile Analyzer - Mac keyboard commands not updating correct chart. (1327944)
Editor: Fixed ReorderableList having wrong label/field width ratio.
Editor: Fixed Repeat and Retry attribute for UnityTest in PlayMode (DSTR-237).
Editor: Fixed Scene's Hierarchy visibility and pickability settings being reset after building. (1271518)
Editor: Fixed selection issues with Shift + Arrow Up/Down in the Hierarchy. (1320614)
Editor: Fixed settings move erratically when the setting you are looking for is located in another platform's tab. (1293497)
Editor: Fixed slow enter playmode time for a specific scene file that contained sequential File ID Hint values. (1308128)
Editor: Fixed so that undocked windows can exit full screen/unmaximize. (1293516)
Editor: Fixed some styling issues with the main editor toolbar (1296757)
Editor: Fixed Terrain dependency cloning. issue where editor doesn't show unsaved changes pop up if editor is closed using Unity -> Quit menu item. (1320565)
Editor: Fixed the issue where old window was not loosing focus after clicking on a mini pop-up of new window and further new window also gets focus. (1219099)
Editor: Fixed the issue with missing tooltip for Editor tools button. (1296952)
Editor: Fixed the issue with the Blue highlight being misaligned for the Cooking options dropdown in Mesh Collider. (1276638)
Editor: Fixed the LOD Group Inspector frames being too dark when using the Light Skin. (1311960)
Editor: Fixed the path to the scene template icon when querying icon from template path. (1325888)
Editor: Fixed the Recent Scenes menu not being updated after saving via Save As and moved scenes not being correctly tracked.
Editor: Fixed the Scene View not updating when the LineRenderer Show Wireframe option was changed.
Editor: Fixed tooltips being misaligned. (1325676)
Editor: Fixed top of Game View is black when "Use display in HDR mode" is enabled and "Color Space" is set to "Linear". (1285015)
Editor: Fixed undo on the Advanced Object Selector using the Search Picker not reverting the object field to its original value. (1336998)
Editor: Fixed Unity does not load the last scene after a crash. (1308699)
Editor: Generate Release Notes URL according to unity version (1301927)
Editor: IMGUI buttons don't work in Device Simulator when using the new Input System. (1333953)
Editor: Improved model import performance by a tiny amount.
Editor: Improved performance of copy/paste when duplicating large numbers of objects. (1208321)
Editor: Initializing the static Progress class from a thread no longer throws exceptions. (1337421)
Editor: Limited the length of the error messages in the UserRetryDialogs to not more than 200 characters per line. (1167593)
Editor: Menu Bar doesn't flicker anymore when dragging across monitors. (1219094)
Editor: Multiple improvements around automatic test-run of tests
Editor: Nested enumerator execution order fix (DSTR-227).
Editor: No more exception thrown in the console when inputting unsupported text in the Project Browser search bar. (1336292)
Editor: Open Prefab' button now uses less inspector space. (1270965)
Editor: Paste as Child should paste GO relative to parent instead of keeping world transform
Editor: Pausing playmode in the macOS Editor will no longer keep keys released in pause mode in the pressed position when playmode is unpaused. (1322149)
Editor: Prefab object selection performance issue resolved. (1352527)
Editor: Prevent crash when running editor with Mac system debug menu enabled through defaults. (1301807)
Editor: Prevent popup windows from closing in the Linux Editor when child popups are not yet focused (1309702)
Editor: Refactoring to make placing windows in the Mac editor more robust and ensure windows are opened on one screen. (1297362)
Editor: Release mouse if it is dragging when a dialog is opened in the windows editor. (1271832)
Editor: Removed XDK Xbox One platform after Unity 2020.3.
Editor: Reorderable list null item is now displayed correctly. (1339759)
Editor: Resolve variables before reading style values. (1297227)
Editor: set gdk cursor invisible when forcing software cursor in editor playmode (1212108)
Editor: Show disable index in the index manager. (1307781)
Editor: String, Integer, Float, Character and BoundsInt type SerializedProperties now have Copy/Paste context menu options.
Editor: support Hungarian (and other) unicode characters in editor (1184456)
Editor: The arrow cursor in the Linux Editor is no longer slightly offset. (1256724)
Editor: The background task window is no longer repositioned and resized when opened from the status bar. (1337646))
Editor: The progress bar in the status bar no longer gets stuck in an empty state. The progress bar in the status bar no longer shows instantaneous progresses. (1341616)
Editor: The Unity Documentation shortcut is no longer installed to the Windows start menu if documentation is not installed. (921689)
Editor: This fix makes sure to unify the il2cpp default order for managed stripping levels in the Player Settings drop-down menu to the one of Mono (Minimal > Low > Medium > High).
Editor: Transform rotations from asset bundles are now correctly shown in the inspector. (958333)
Editor: UI not running any tests if run select on nested namespaces (DSTR-256).: Updated the style of some buttons and button groups when hovered, pressed and focused (1314662)
Editor: While Editor is entering Play Mode clicking menu doesn't return wrong entry anymore. (1263313)
Editor: Wrong ShaderGUI for crosspipeline shaders. (1339817)
Game Core: Respect Vsync values set in editor. missing indirect lighting when using Enlighten Realtime GI in HDRP Player (1367133)
GI: Fix Reflection probe (gizmo and texture inspector) appearance in linear color space mode. (1293558)
GI: Fixed a crash occurring while sculpting terrain and baking. (1266511)
GI: Fixed a crash that happens when GPULM tiling is ON, exporting the training data is ON and Ambient occlusion is OFF. (1341803)
GI: Fixed an issue where sometimes the callstack in Editor.log was incomplete on Windows. (1221524)
GI: Fixed an issue where the GPU lightmapper seems to be stuck in an endless loop before finishing a bake. (1258690)
GI: Fixed automated GI tests in HDRP and URP where multiple editors are used.
GI: Fixed baked lighting on terrain holes and better performance. (1307459)
GI: Fixed changes in lighting settings or lightmap parameters not affecting the appearance of baked reflection probes when baking via "Generate Lighting" in the lighting tab. (1324641)
GI: Fixed crash when closing editor while generating lighting. (1354238)
GI: Fixed crash with CPU OpenCL devices. (1338498)
GI: Fixed fallback to CPU lightmapper when writing LightCookies buffer and using Clear baked Data. (1321887)
GI: Fixed light baking gets stuck in a infinite loop when unloading a light baked scene if you have another scene open. (1337508)
GI: Fixed memory usage and performance regression when baking light probes. (1324307)
GI: Fixed out of bounds access in probeDepthOctahedronExpandedBuffer when generating Probe Volumes 1.0 data via experimental API. (1321881)
GI: Fixed prefab instances losing their lighting when they are unpacked, and the scene is reloaded. (1285269)
GI: Fixed rare crash when entering play mode while running a GI lightbake. (1301678)
GI: Fixed reflection probes not being zeroed out when lighting is cleared.
GI: Fixed scene lighting data not getting updated when the selected Lighting Data Asset for the scene is changed. (1263683)
GI: Fixed some fallbacks from GPULM to CPULM when using baked lighting and cookies. (1320169)
GI: Fixed unused return value during remapping of scene object id's. (1300323)
GI: Fixedcorruption in Probe baking when lightmap UVs are not provided. (1337226)
GI: Ignore OpenCL CPU devices that are incompatible with the GPU lightmapper on macOS due to insufficient local workgroup size. (1293520)
GI: Improved error logging when reporting errors relating to UV unwraps during a lightmap bake. (1327322) emission tooltip take Enlighten realtime GI state into account. (1329323)
GI: Make GPU lightmapper detect Intel IRIS Xe MAX GPU with 4GB memory. (1331794)
GI: Make it possible for the job manager to shut down the editor even if the OpenCL driver stopped working. (1276653)
GI: Make the compression label in Lighting Window - Baked Lightmaps better. (1297198)
GI: Reenabling a disabled light, reflection probe, or light probe group now makes it immediately visible in the Light Explorer. (1320277)
GI: Reimport all lightmap textures when "Lightmap Encoding" project setting is changed. (1195551)
GI: Removed erroneous asserts in scene object identifier remapping code.
GI: Removed terrain trees from being drawn in the shadowmask scene visualization mode as background objects as they do not receive a shadowmask, anyway. (1295410)
GI: Removed the Unity process count check that is used to guard the GI Cache from getting trimmed by other Editors, as the asset import workers are counted as Editors. (1313354)
GI: Support high intensity skyboxes for baking. (1222492)
GI: Use shadow penumbra to sort lights with penumbra by in the GPU lightmapper instead of indirect color. (1319138)
Graphics: * Improved support for ghosting particles.
- Improved sharpening and other minor fixes from NVIDIA. (1345143)
Graphics: Add CameraBuffer VFX type to encapsulate camera buffers that can be Texture2D or Texture2DArray depending on the platform (1213482)
Graphics: Add control to independently clear stencil buffer in CommandBuffer api.
Clearing depth does not implicitly clear stencil anymore.
Graphics: Added check for Vulkan support in Unity player. (1308206)
Graphics: Added exception when creating 3d textures with depth/stencil format on metal, as this is not supported. (1296524)
Graphics: Added GetActiveTerrains method which will fill a user-provided List with the active terrains, allowing users to control the resulting allocation. (1324062)
Graphics: Added Metal Compute Pipeline Validation to the Editor (1283446): Added support for having multiple different Renderers on the same Game Object when using ray tracing. (1305305)
Graphics: Backport sllowing for Depth Sharing in Vulkan.
Graphics: Calculating correct rtHandleScale by considering the possible pixel rounding when DRS is on.
Graphics: Creating a RenderTexture in D3D12 with RenderTexture.antiAliasing value higher than supported by hardware will no longer crash the editor. (1310791)
Graphics: Disable Motion Vectors pass in Output Meshes even when other output has motion vectors enabled (1313043)
Graphics: Disabled "create material" button in terrain inspector when viewed through a preset. (1290453)
Graphics: Disabled ShadowCaster pass in Output Meshes if castShadows is disabled.
Graphics: Fix bug where in some situations the AsyncUploadBuffer was not persisting even when QualitySettings.asyncUploadPersistentBuffer was set to true. (1150408)
Graphics: Fix CPU performance issue when light probes and ray tracing effects are used together.
Graphics: Fix crash when using RenderPass without depth on Metal devices
Graphics: Fix issue with GrayScaleRGBToAlpha for 16bpc textures (1295408)
Graphics: Fix rare deadlock that can occur when a texture fails to load. (1265360)
Graphics: Fix shadow normal bias slider getting clamped at 3. (1213200)
Graphics: Fix Simplify button clipping in the LineRenderer Inspector when the window was narrow. (1308478)
Graphics: Fix to RenderTexture.format not returning correct values in the case of RenderTextureFormat.Depth and RenderTextureFormat.Shadow (1365548)
Graphics: Fixed a bug in SRPs where models appeared white in the preview window. (1297670)
Graphics: Fixed a bug with Cubemap.GetPixel(CubemapFace face, int x, int y) not passing its parameters correctly. (1305539)
Graphics: Fixed a case that render thread calls main thread only API in editor. (1317190)
Graphics: Fixed a crash that would occur when trying to create a VT GPU cache larger than the available GPU memory. (1293468) crash when passing in DepthAuto or ShadowAuto into SystemInfo.GetCompatibleFormat. (1343093)
Graphics: Fixed a critical issue on android devices & lens flares. Accidentally creating a 16 bit texture was causing gpus not supporting them to fail.
Graphics: Fixed a large, visible stretch ratio in a LensFlare Image thumbnail.
Graphics: Fixed a rare crash in shadow rendering. (1350950)
Graphics: Fixed a regression where calling the the Texture2D.Resize method with a Texture format parameter caused the underlying GraphicsFormat to flip color spaces on each call. (1312670) float error on some mobile platforms.
Graphics: Fixed alignment in Volume Components.
Graphics: Fixed ALL/NONE to maintain the state on the Volume Component Editors.
Graphics: Fixed an incorrect error check in the BC7 decompressor. (1339809)
Graphics: Fixed an issue with ReadPixels() over 3 API's, where the first slice would always be returned instead of the specified depth slice. The bug was being caused by the active cubemap face only being used, the fix checks whether it should use the active depth slice or cubemap face. (979487)
Graphics: Fixed an uninitialized value problem found by Vulkan. (1309741)
Graphics: Fixed and issue where scene view filtering would now work properly for SRP's. (1180254)
Graphics: Fixed another recently added internal bug where when the shader debug level in Switch player editor settings is changed, the shaders were not corectly rebuilt.
Graphics: Fixed API to draw color temperature for Lights.
Graphics: Fixed assertion failure when releasing rendererlists in certain scenes. (1342215)
Graphics: Fixed assertion on compression of L1 coefficients for Probe Volume.
Graphics: Fixed black pixel issue in AMD FidelityFX RCAS implementation
Graphics: Fixed bug where ComputeShader.IsSupported for OpenGL (ES) would only return false on the first call for kernel that did not compiler at runtime. (1334034)
Graphics: Fixed crash when a compute shader does not compile when using OpenGL. (1324695)
Graphics: Fixed crash when calling AsyncGPUReadback.RequestIntoNativeArray with a temp allocated array. (1336583)
Graphics: Fixed crash when calling GetPixelData with invalid arguments. (1322485)
Graphics: Fixed crash when DX12 Hardware Dynamic Resolution Scaling is enabled on XR. (1323531)
Graphics: Fixed crash when executing CommandBuffer.DrawProcedural and some other functions that refer to an already deleted ComputeBuffer or GraphicsBuffer. (1323447)
Graphics: Fixed crash when launching tutorials on Linux with AMD/Intel cards. (1323204)
Graphics: Fixed crash when switching resolutions rapidly on the unity editor when hardware DRS is enabled on HDRP. (1353948)
Graphics: Fixed crash wrong format desc of GraphicsFormat.VideoAuto on metal. (1296529)
Graphics: Fixed cropped thumbnail for Image with non-uniform scale and rotation
Graphics: Fixed disappearing mesh when "Keep Quads" is enabled in import settings. (1327826)
Graphics: Fixed double underscores in Hybrid Renderer shader constant names.
Graphics: Fixed DrawProcedural reporting incorrect triangle counts to FrameStats.
Graphics: Fixed empty RenderTexture.ResolveAA debug marker appearing in Frame Debugger and other frame capture tools on mobile platforms. (1330944)
Graphics: Fixed error when change Lens Flare Element Count followed by undo. (1346894)
Graphics: Fixed explicit half precision not working even when Unified Shader Precision Model is enabled.
Graphics: Fixed false-positive error message during ReadbackImage on GLCore. (1297065)
Graphics: Fixed frame debugger crash when using the RenderPass API with MSAA enabled. (1317665)
Graphics: Fixed gizmo rendering in SRP when wireframe mode is selected (1251022)
Graphics: Fixed GPU crash on Intel integrated cards when opening the editor with a scene that had VFX. (1332956)
Graphics: Fixed HDRP Camera Binder errors related to depthBuffer and colorBuffer properties. (1353845)
Graphics: Fixed HDRP Runtime test failure in Vulkan caused by incorrect shader code generation. (1323529)
Graphics: Fixed IES Importer related to new API on core.
Graphics: Fixed in crash for software DRS for DLSS in dx12 when changing the resolution mid frame. (1352848)
Graphics: Fixed incorrect GeometryJob Fence initialisation causing graphical corruption in UI canvas rendering.
Graphics: Fixed incorrect warning in when creating a texture from script with a compressed format that is not supported on the Editor platform. (1317998)
Graphics: Fixed issue displaying a warning of different probe reference volume profiles even when they are equivalent.
Graphics: Fixed issues caused by automatically added EventSystem component, required to support Rendering Debugger Runtime UI input. (1361901)
Graphics: Fixed L2 for Probe Volumes.
Graphics: Fixed layered rendering (and validation errors) with 3D Textures when using Vulkan. (1323740)
Graphics: Fixed layered rendering to mips of 3D textures when using Vulkan. (1329180)
Graphics: Fixed Lens Flare 'radialScreenAttenuationCurve invisible'.
Graphics: Fixed Lens Flare position for celestial at very far camera distances. It now locks correctly into the celestial position regardless of camera distance. (1363291)
Graphics: Fixed Lens Flare rotation for Curve Distribution.
Graphics: Fixed Lens Flare Thumbnails.
Graphics: Fixed library function SurfaceGradientFromTriplanarProjection to match the mapping convention used in SampleUVMappingNormalInternal.hlsl and fix its description.
Graphics: Fixed Light Layers with duplicate names being hidden in Light or Renderer component. (1335982)
Graphics: Fixed Light Probe evaluation in ray tracing shaders resulting in wrong ambient colors. Fixed Light Probe Proxy Volume setup not binding SH2 L2 band uniforms in ray tracing shaders. (1330711)
Graphics: Fixed LightAnchor too much error message, became a HelpBox on the Inspector.
Graphics: Fixed memory leak when changing SRP pipeline settings, and having the player in pause mode.
Graphics: Fixed Metal DebugGroups during Xcode GPU Frame Capture getting incorrectly nested when using RenderPass API. (1330942)
Graphics: Fixed Metal multisample depth resolve not working. (1330714)
Graphics: Fixed missing increment/decrement controls from DebugUIIntField & DebugUIUIntField widget prefabs.
Graphics: Fixed missing support for coarse/fine derivatives in shader code.
Graphics: Fixed normal bias field of reference volume being wrong until the profile UI was displayed.
Graphics: Fixed performance regression when changing Mesh vertices or indices. (1326091)
Graphics: Fixed potentially conflicting runtime Rendering Debugger UI command by adding an option to disable runtime UI altogether. (1345783)
Graphics: Fixed problem on domain reload of Volume Parameter Ranges and UI values.
Graphics: Fixed ProBuilder mesh's texture disappearing after enabling Path Tracing. This happened when the vertex color channel was not set. (1348821)
Graphics: Fixed random crash when reloading VFX under special circumstances. (1291710)
Graphics: Fixed readback/blit from backbuffer in Editor when running GLES. (1301446)
Graphics: Fixed regression where RenderTextureDescriptor.depthBufferBits or RenderTexture.depth could return GraphicsFormat.None when setting the properties to 24 or 32 bit. (1340405)
Graphics: Fixed RenderPass API MSAA and clear action issues when writing to the backbuffer on Android (1315433)
Graphics: Fixed Right Align of additional properties on Volume Components Editors.
Graphics: Fixed rotation issue now all flare rotate on positive direction. (1348570)
Graphics: Fixed ScriptableRenderContext.ExecuteCommandBuffer crashing when called with a disposed command buffer (1306222)
Graphics: Fixed selection of sRGB format for rendertextures inspector. (1295276)
Graphics: Fixed situation where Hybrid Renderer could throw errors because of invalid reflection data.
Graphics: Fixed skybox cubemap corruption in Vulkan. (1195394)
Graphics: Fixed spacing between property fields on the Volume Component Editors. SystemInfo.supportsRenderTargetArrayIndexFromVertexShader when using Vulkan. (1269732)
Graphics: Fixed tessellation factors access in domain shaders on Metal/Vulkan. (1337590)
Graphics: Fixed Texture resource state that can be incorrect when the destination texture of Graphics.Blit() is also set as _MainTex of the blit material. (1323521)
Graphics: Fixed Texture2D.GetPixel(int x, int y, int miplevel) internally passing the miplevel parameter incorrectly. (1284757)
Graphics: Fixed Texture3D.CreateExternalTexture to work correctly with Vulkan. (1322987)
Graphics: Fixed TextureGenerator.GenerateTexture throws null pointer exception when running with enablePostProcessor to true. (1283888)
Graphics: Fixed the display name of a Volume Parameter when is defined the attribute InspectorName.
Graphics: Fixed the documentation of CommandBuffer.EndSample. (1264605)
Graphics: Fixed the location of the "Shader Graph" asset create menu to be next to the "Shader" menu. (1337080)
Graphics: Fixed the Mac Standalone Player memory leak that came with the change to presenting with CVDisplayLink. (1365570)
Graphics: Fixed the multicamera tests on Linux. (1318477)
Graphics: Fixed the random position of the Rendering Submenu on Assets > Create. (1341763)
Graphics: Fixed the selection of the Additional properties from ALL/NONE when the option "Show additional properties" is disabled.
Graphics: Fixed Undo from script refreshing thumbnail.
Graphics: Fixed unstable async readback test. (1288678)
Graphics: fixed Unwrap crash when automatic margin calculation results in very small margin (1308365)
Graphics: Fixed various issues wrt uploading Virtual texturing tiles when using non-native texture formats. (1337269)
Graphics: Fixed Volume Gizmo size when rescaling parent GameObject.
Graphics: Fixed Vulkan API AccessTextureByID which was failing due to an incorrect internal implementation.
Graphics: Fixing bug were a very strong emissive could leak if placed behind a canvas surface in the scene. (1295722)
Graphics: Flagged kFormatB10G11R11_UFloatPack32 as a HDR format (1310527)
Graphics: Force reload of VFX graph compute shaders when reloading assembly. (1107178)
Graphics: Frames were being unnecessarily dropped before presenting on Metal OSX when using CVDisplayLink; this is now fixed. (1363963)
Graphics: Game View Stats Saved by Batching is always 0 when using SRPBatcher. (1329391): Graphics: Fixing DLSS vulkan black pixels / frame corruption of first frame. (1335735)
Graphics: GUIView.GrabPixels() on Metal will now fill the RenderTexture with valid content (1223120)
Graphics: Help boxes with fix buttons do not crop the label.
Graphics: Make metal query max tessellation factor from the driver instead of clamping to 16 always. (1289859)
Graphics: Metal shaders will compile correctly when referencing tessellation factors in the domain shader. (1139698)
Graphics: Minor UX improvements on Quality Settings Panel.
Graphics: NVIDIA package no longer gets enabled when a project is updated to a new version of unity. This was the result of a bad configuration. (1342012)
Graphics: On Metal, enforce depth clearing when "Don't care" load action is used, to avoid potential subtle issues later on. (1330613)
Graphics: Order of DLSS quality enumerations changed for better and more intuitive user experience. This change does not break API. (1335732)
Graphics: Partially fix limitation of sending only one event per frame : the direct link supports multiple event sent within the same frame.
Graphics: Pause VFX when frame debugger is enabled. (1195088)
Graphics: Properly handle terrain pixel error calculations for orthographic cameras. (1244606)
Graphics: Provided an appropriate API to update builtin reflection probes internal data. (1207660)
Graphics: Put objects with negative scale into separate static batch. This makes normal maps display correctly on those objects. (1205209)
Graphics: Reduced main thread hitching caused by Shader loading.
Graphics: Remove URP and HDRP templates. They are now dynamic templates
Graphics: Removed GraphicsFormat warning for RenderTextures when changing color space to gamma. (1284779)
Graphics: Removed the error message when encountering incompatible pipeline stages on DX12.
Graphics: Removed the unneeded data copy of the initialised memory to video memory when creating a texture from script. (1337186)
Graphics: Removed unnecessary api files for NVIDIA Module.
Graphics: RenderTextures are no longer forced to use Clamp border sampling if a format with depth is used. (1292651)
Graphics: Resolved exact fixed time step flickering while using strip (and other unexpected behavior). (1289829)
Graphics: Scissor test was automatically disabled when changing render-targets. It is not the case anymore and is consistent with other platforms.
Graphics: Skip wind calculations for Speed Tree 8 when wind vector is zero. (1335487)
Graphics: Support undo of Global Settings assignation (1342987)
Graphics: The warning was removed because URP / HDRP now no longer need to have an asset assigned to both Graphics and Quality to work. (1335986)
Graphics: Updated Postprocessing v2 package to 3.1.1.
Graphics: Updated SpeedTree importer editor to correctly regenerate materials with custom render pipelines. Only shows the "Receive Shadows" toggle if that functionality is supported by the current SupportedRenderingFeatures. (1338973)
Graphics: Using CopyTexture on textures with different MSAA sample counts throws an error. (1308132)
Graphics: Virtual Texturing fallback texture sampling code correctly honors the enableGlobalMipBias when virtual texturing is disabled.
Graphics: Visual Effects in prefabs always show as modified. (1285787)
Graphics: Visual effects will continue to cast shadows even when they are not visible in camera. (1279851)
Graphics: VisualEffect could cause an unexpected assert if the graph samples a skinned mesh renderer with cloth (1307808)
Graphics: When adding Overrides to the Volume Profile, only show Volume Components from the current Pipeline.
Graphics: When creating PVRTC texture which is not POT and square throw an exception, as it is not supported and might result in crashes later on. (1329461)
HDRP: Added a new property to control the ghosting reduction for volumetric clouds. (1357702)
HDRP: Allow negative wind speed parameter.
HDRP: Assets going through the migration system are now dirtied.
HDRP: Cleanup Shader UI.
HDRP: Fix crash on VolumeComponentWithQualityEditor when the current Pipeline is not HDRP
HDRP: Fixed a bug with Reflection Probe baking would result in an incorrect baking reusing other's Reflection Probe baking.
HDRP: Fixed a bug with Reflection Probe baking would result in an incorrect baking reusing other's Reflection Probe baking. limit case when the camera is exactly at the lower cloud level. (1316988)
HDRP: Fixed a locale issue with the diffusion profile property values in ShaderGraph on PC where comma is the decimal separator.
HDRP: Fixed a memory leak related to not disposing of the RTAS at the end HDRP's lifecycle.
HDRP: Fixed a NaN generating in Area light code.
HDRP: Fixed a null ref exception when adding a new environment to the Look Dev library.
HDRP: Fixed a null ref exception when no opaque objects are rendered.
HDRP: Fixed a nullref in volume system after deleting a volume object. (1348374)
HDRP: Fixed a nullref when binding a RTHandle allocated from a RenderTextureIdentifier with CoreUtils.SetRenderTarget.
HDRP: Fixed a regression that broke punctual and directional raytraced shadows temporal denoiser. (1360132)
HDRP: Fixed a warning to Rendering Debugger Runtime UI when debug shaders are stripped.
HDRP: Fixed a warning when enabling tile/cluster debug.
HDRP: Fixed ability to override AlphaToMask FrameSetting while camera in deferred lit shader mode.
HDRP: Fixed ability to override AlphaToMask FrameSetting while camera in deferred lit shader mode.
HDRP: Fixed access to main directional light from script.
HDRP: Fixed Additional Velocity for Alembic not taking correctly into account vertex animation.
HDRP: Fixed aliasing artifacts that are related to numerical imprecisions of the light rays in the volumetric clouds. (1340731)
HDRP: Fixed ambient occlusion strenght incorrectly using GTAOMultiBounce
HDRP: Fixed an error when deleting the 3D Texture mask of a local volumetric fog volume (1339330) issue in the planar reflection probe convolution.
HDRP: Fixed an issue that clamped the volumetric clouds offset value. (1357318)
HDRP: Fixed an issue that made camera motion vectors unavailable in custom passes.
HDRP: Fixed an issue that made Custom Pass buffers inaccessible in ShaderGraph.
HDRP: Fixed an issue where auto baking of ambient and reflection probe done for builtin renderer would cause wrong baking in HDRP.
HDRP: Fixed an issue where disabled reflection probes were still sent into the the ray tracing light cluster.
HDRP: Fixed an issue where first frame of SSAO could exhibit ghosting artefacts.
HDRP: Fixed an issue where runtime debug window UI would leak game objects.
HDRP: Fixed an issue where selection in a debug panel would reset when cycling through enum items.
HDRP: Fixed an issue where sometime a docked lookdev could be rendered at zero size and break.
HDRP: Fixed an issue with debug overriding emissive material color for deferred path. (1313123)
HDRP: Fixed an issue with Decal normal blending producing NaNs.
HDRP: Fixed an issue with half res ssgi upscale.
HDRP: Fixed an issue with normal management for recursive rendering (1324082) an issue with the capture callback (now includes post processing results).
HDRP: Fixed an issue with the mipmap generation internal format after rendering format change.
HDRP: Fixed an issue with volumetric clouds on vulkan. (1354802)
HDRP: Fixed artifact appearing when diffuse and specular normal differ too much for eye shader with area lights.
HDRP: Fixed artifacts in volumetric cloud shadows.
HDRP: Fixed assert failure when enabling the probe volume system for the first time.
HDRP: Fixed AxF debug output in certain configurations (1333780)
HDRP: Fixed bad feedback loop occuring when auto exposure adaptation time was too small.
HDRP: Fixed banding in the volumetric clouds. (1353672)
HDRP: Fixed black pixel issue in AMD FidelityFX RCAS implementation.
HDRP: Fixed blocky looking bloom when dynamic resolution scaling was used.
HDRP: Fixed box light attenuation.
HDRP: Fixed case where the SceneView don't refresh when using LightExplorer with a running and Paused game. (1354129)
HDRP: Fixed cases in which object and camera motion vectors would cancel out, but didn't.
HDRP: Fixed Clouds on Metal or platforms that don't support RW in same shader of R11G11B10 textures.
HDRP: Fixed computation of geometric normal in path tracing (1293029)
HDRP: Fixed conflicting runtime debug menu command with an option to disable runtime debug window hotkey.
HDRP: Fixed contact shadow debug views not displaying correctly upon resizing of view.
HDRP: Fixed contact shadows tile coordinates calculations.
HDRP: Fixed controls for clouds fade in. (1353548)
HDRP: Fixed corruption in player with lightmap uv when Optimize Mesh Data is enabled (1357902)
HDRP: Fixed CPU performance of decal projectors, by a factor of %100 (walltime) on HDRP PS4, by burstifying decal projectors CPU processing.
HDRP: Fixed cropping issue with the compositor camera bridge (1340549)
HDRP: Fixed custom pass custom buffer not bound after being created inside a custom pass.
HDRP: Fixed custom pass delete operation. (1354871) CustomPassUtils scaling issues when used with RTHandles allocated from a RenderTexture.
HDRP: Fixed CustomPassUtils.Copy function not working on depth buffers.
HDRP: Fixed decal draw order for ShaderGraph decal materials.
HDRP: Fixed Decal's pivot edit mode 2D slider gizmo not supporting multi-edition.
HDRP: Fixed Decal's UV edit mode with negative UV.
HDRP: Fixed decals in material debug display.
HDRP: Fixed diffusion profile being reset to default on SpeedTree8 materials with subsurface scattering enabled during import.
HDRP: Fixed diffusion profile breaking after upgrading HDRP (1337892)
HDRP: Fixed diffusion profile displayed in the inspector.
HDRP: Fixed disabled menu item for volume additional properties.
HDRP: Fixed distortion when resizing the graphics compositor window in builds. (1328968)
HDRP: Fixed Dof, would sometimes get corrupted when DLSS was on caused by TAA logic accidentally being on for DOF. (1357722)
HDRP: Fixed double camera preview.
HDRP: Fixed double contribution from the clear coat when having SSR or RTR on the Lit and StackLit shaders. (1352424)
HDRP: Fixed edge bleeding when rendering volumetric clouds.
HDRP: Fixed EmissiveLighting Debug Light mode not managing correctly emissive for unlit.
HDRP: Fixed enabling a lensflare in playmode.
HDRP: Fixed error in the RTHandle scale of Depth Of Field when TAA is enabled.
HDRP: Fixed error when disabling opaque objects on a camera with MSAA.
HDRP: Fixed error with motion blur and small render targets.
HDRP: Fixed exposure issues with volumetric clouds on planar reflection.
HDRP: Fixed exposure not being properly handled in ray tracing performance (RTGI and RTR). (1346383) allowing to change dynamic resolution upscale filter via script.
HDRP: Fixed for discrepancies in intensity and saturation between screen space refraction and probe refraction.
HDRP: Fixed for wrong cached area light initialization.
HDRP: Fixed Force RGBA16 when scene filtering is active. (1228736)
HDRP: Fixed GBuffer clear option in FrameSettings not working.
HDRP: Fixed gbuffer depth debug mode for materials not rendered during the prepass.
HDRP: Fixed GC allocations from XR occlusion mesh when using multipass.
HDRP: Fixed ghosting issues if the exposure changed too much (RTGI).
HDRP: Fixed gizmo rendering when wireframe mode is selected.
HDRP: Fixed HDAdditionalLightData's CopyTo and HDAdditionalCameraData's CopyTo missing copy.
HDRP: Fixed HDRP material being constantly dirty.
HDRP: Fixed HDRP material upgrade failing when there is a texture inside the builtin resources assigned in the material. (1339865)
HDRP: Fixed HDRP's ShaderGraphVersion migration management which was broken.
HDRP: Fixed impossibility to release the cursor in the template.
HDRP: Fixed incorrect debug wireframe overlay on tessellated geometry (using littessellation), caused by the picking pass using an incorrect camera matrix.
HDRP: Fixed incorrect light list indexing when TAA is enabled. (1352444)
HDRP: Fixed incorrect RTHandle scale in DoF when TAA is enabled.
HDRP: Fixed infinite propagation of nans for RTGI and SSGI. (1349738)
HDRP: Fixed Intensity Multiplier not affecting realtime global illumination.
HDRP: Fixed invalid cast exception on HDProbe.
HDRP: Fixed invalid pass index 1 in DrawProcedural error.
HDRP: Fixed issue in Probe Reference Volume authoring component triggering an asset reload on all operations.
HDRP: Fixed issue in wizard when resource folder don't exist.
HDRP: Fixed issue of accessing default frame setting stored in current HDRPAsset instead fo the default HDRPAsset.
HDRP: Fixed issue that caused a rebake of Probe Volume Data to see effect of changed normal bias. when debug full screen 'Transparent Screen Space Reflection' do not take in consideration debug exposure
HDRP: Fixed issue when switching between non-persistent cameras when path tarcing is enabled. (1337843)
HDRP: Fixed issue with a compute dispatch being with 0 threads on extremely small resolutions.
HDRP: Fixed issue with an assert getting triggered with OnDemand shadows.
HDRP: Fixed issue with automatic exposure settings not updating scene view.
HDRP: Fixed issue with change in lens model (perfect or imperfect) wouldn't be taken into account unless the HDRP asset was rebuilt.
HDRP: Fixed issue with constant buffer being stomped on when async tasks run concurrently to shadows.
HDRP: Fixed issue with Depth of Field CoC debug view.
HDRP: Fixed issue with depth slope scale depth bias when a material uses depth offset.
HDRP: Fixed issue with fading in SSR applying fade factor twice, resulting in darkening of the image in the transition areas.
HDRP: Fixed issue with faulty shadow transition when view is close to an object under some aspect ratio conditions
HDRP: Fixed issue with gbuffer debug view when virtual texturing is enabled.
HDRP: Fixed issue with hierarchy object filtering.
HDRP: Fixed issue with history buffer allocation for AOVs when the request does not come in first frame.
HDRP: Fixed issue with NaNs in Volumetric Clouds on some platforms.
HDRP: Fixed issue with on-demand directional shadow maps looking broken when a reflection probe is updated at the same time.
HDRP: Fixed issue with physically-based DoF computation and transparent materials with depth-writes ON.
HDRP: Fixed issue with RAS build fail when LOD was missing a renderer.
HDRP: Fixed issue with shadow mask and area lights.
HDRP: Fixed issue with the LayerMaskParameter class storing an erroneous mask value. (1345515)
HDRP: Fixed issue with velocity rejection when using physically-based DoF.
HDRP: Fixed issue with vertex color defaulting to 0.0 when not defined, in ray/path tracing. (1348821)
HDRP: Fixed label style in pbr sky editor.
HDRP: Fixed LayerMask editor for volume parameters.
HDRP: Fixed lens flare not rendering correctly with TAAU or DLSS.
HDRP: Fixed lens flare occlusion issues with TAA.
HDRP: Fixed lens flare occlusion issues with transparent depth. It had the wrong depth bound. (1365098)
HDRP: Fixed light anchor min distance value + properties not working with prefabs. (1345509)
HDRP: Fixed light gizmo showing shadow near plane when shadows are disabled.
HDRP: Fixed light layer issue when performing editing on multiple lights.
HDRP: Fixed LightCluster debug view for ray tracing.
HDRP: Fixed lights shadow frustum near and far planes.
HDRP: Fixed LookDev environment library assignement after leaving playmode.
HDRP: Fixed material inspector that allowed setting intensity to an infinite value.
HDRP: Fixed material keywords with fbx importer.
HDRP: Fixed memory leak with XR combined occlusion meshes.
HDRP: Fixed migration step overriden by data copy when creating a HDRenderPipelineGlobalSettings from a HDRPAsset.
HDRP: Fixed misc TAA issue: Slightly improved TAA flickering, Reduced ringing of TAA sharpening, tweak TAA High quality central color filtering.
HDRP: Fixed misleading text and improving the eye scene material samples. (1368665)
HDRP: Fixed missing API documentation for LTC area light code.
HDRP: Fixed missing BeginCameraRendering call for custom render mode of a Camera.
HDRP: Fixed missing context menu for “Post Anti-Aliasing” in Camera. (1357283)
HDRP: Fixed missing DisallowMultipleComponent annotations in HDAdditionalReflectionData and HDAdditionalLightData. (1365879)
HDRP: Fixed missing global wind parameters in the visual environment.
HDRP: Fixed Missing lighting quality settings for SSGI. (1312067)
HDRP: Fixed missing option to use POM on emissive for tessellated shaders.
HDRP: Fixed missing Update in Wizard's DXR Documentation.
HDRP: Fixed model import by adding additional data if needed.
HDRP: Fixed motion vector for custom meshes loaded from compute buffer in shader graph (like Hair).
HDRP: Fixed multi cameras using cloud layers shadows.
HDRP: Fixed multicamera rendering for Dynamic Resolution Scaling using dx12 hardware mode. Using a planar reflection probe (another render camera) should be safe.
HDRP: Fixed multiple any hit occuring on transparent objects. (1294927)
HDRP: Fixed multiple HDRP Frame Settings panel issues: missing "Refraction" Frame Setting. Fixing ordering of Rough Distortion, it should now be under the Distortion setting.
HDRP: Fixed normals provided in object space or world space, when using double sided materials.
HDRP: Fixed null reference exception in Raytracing SSS volume component.
HDRP: Fixed nullref in layered lit shader editor.
HDRP: Fixed nullref when enabling fullscreen passthrough in HDRP Camera.
HDRP: Fixed object outline flickering with TAA.
HDRP: Fixed objects disappearing from Lookdev window when entering playmode. (1309368)
HDRP: Fixed off by 1 error when calculating the depth pyramid texture size when DRS is on.
HDRP: Fixed overdraw in custom pass utils blur and Copy functions (1333648)
HDRP: Fixed override camera rendering custom pass API aspect ratio issue when rendering to a render texture.
HDRP: Fixed parameter ranges in HDRP Asset settings.
HDRP: Fixed path traced subsurface scattering for transmissive surfaces. (1329403)
HDRP: Fixed path traced transparent unlit material (1335500)
HDRP: Fixed path tracing accumulation not being reset when changing to a different frame of an animation.
HDRP: Fixed PCSS filtering issues with cached shadow maps.
HDRP: Fixed performance issue with ShaderGraph and Alpha Test.
HDRP: Fixed Pixel Displacement that could be set on tessellation shader while it's not supported.
HDRP: Fixed pixelated appearance of Contrast Adaptive Sharpen upscaler and several other issues when Hardware DRS is on.
HDRP: Fixed possible QNANS during first frame of SSGI, caused by uninitialized first frame data.
HDRP: Fixed potential NaN on apply distortion pass.
HDRP: Fixed Probe volume debug exposure compensation to match the Lighting debug one.
HDRP: Fixed pyramid color being incorrect when hardware dynamic resolution is enabled.
HDRP: Fixed ray traced reflections that were too dark for unlit materials. Reflections are now more consistent with the material emissiveness.
HDRP: Fixed Realtime lightmap not working correctly in player with various lit shader.
HDRP: Fixed recursive rendering transmittance over the sky. (1323945)
HDRP: Fixed reflection probes being injected into the ray tracing light cluster even if not baked (1329083)
HDRP: Fixed register spilling on FXC in light list shaders. Render Pipeline Debugger.
HDRP: Fixed Render Graph Debug UI not refreshing correctly in the Rendering Debugger.
HDRP: Fixed rendering of objects just after the TAA pass (before post process injection point).
HDRP: Fixed ResourceReloader that was not call anymore at pipeline construction.
HDRP: Fixed Rough Distortion frame setting not greyed out when Distortion is disabled in HDRP Asset.
HDRP: Fixed rounding issue when accessing the color buffer in the DoF shader.
HDRP: Fixed sceneview debug mode rendering. (1211436)
HDRP: Fixed screen being over-exposed when changing very different skies.
HDRP: Fixed screen-space shadows with XR single-pass and camera relative rendering. (1348260)
HDRP: Fixed Shader advanced options for lit shaders.
HDRP: Fixed shadow matte not working with ambient occlusion when MSAA is enabled
HDRP: Fixed shadow sampling artifact when using the spot light shadow option 'custom spot angle'.
HDRP: Fixed shadowmask editable when not supported.
HDRP: Fixed side effect on styles during compositor rendering.
HDRP: Fixed silhouette issue with emissive decals.
HDRP: Fixed skybox for ortho cameras.
HDRP: Fixed some aliasing ussues with the volumetric clouds.
HDRP: Fixed some depth comparison instabilities with volumetric clouds.
HDRP: Fixed some labels being clipped in the Render Graph Viewer
HDRP: Fixed some of the extreme ghosting in DLSS by using a bit mask to bias the color of particles. VFX tagged as Exclude from TAA will be on this pass.
HDRP: Fixed some reference to old frame settings names in HDRP Wizard.
HDRP: Fixed some render texture leaks.
HDRP: Fixed some resolution aliasing for physically based depth of field. (1340551)
HDRP: Fixed sorting for mesh decals.
HDRP: Fixed spacing on LayerListMaterialUIBlock.
HDRP: Fixed specular anti aliasing for layeredlit shader.
HDRP: Fixed specular occlusion sharpness and over darkening at grazing angles.
HDRP: Fixed spot light radius not changed when editing the inner or outer angle of a multi selection. (1345264)
HDRP: Fixed SSGI frame setting not greyed out while SSGI is disabled in HDRP Asset.
HDRP: Fixed SSR Accumulation with Offset with Viewport Rect Offset on Camera.
HDRP: Fixed SSR Precision for 4K Screen.
HDRP: Fixed SSS on console platforms.
HDRP: Fixed sub-shadow rendering for cached shadow maps.
HDRP: Fixed support for instanced motion vector rendering.
HDRP: Fixed support for light/shadow dimmers (volumetric or not) in path tracing.
HDRP: Fixed support for ray binning for ray tracing in XR (case ). (1346374) camera near plane not being taken into account when rendering the clouds. (1353548)
HDRP: Fixed the clouds missing in the ambient probe and in the static and dynamic sky.
HDRP: Fixed the double sided option moving when toggling it in the material UI (1328877)
HDRP: Fixed the earth curvature not being properly taken into account when evaluating the sun attenuation. (1357927)
HDRP: Fixed the emissive being overriden by ray traced sub-surface scattering.
HDRP: Fixed the fallback sun for volumetric clouds having a non null intensity. (1353955)
HDRP: Fixed the fallback to custom went changing a quality settings not workings properly (1338657)
HDRP: Fixed the FreeCamera and SimpleCameraController mouse rotation unusable at low framerate. (1352679)
HDRP: Fixed the incorrect value written to the VT feedback buffer when VT is not used.
HDRP: Fixed the LensFlare flicker with TAA on SceneView. (1356734)
HDRP: Fixed the missing parameter to control the sun light dimmer. (1364152)
HDRP: Fixed the performance of the volumetric clouds in non-local mode when large occluders are on screen.
HDRP: Fixed the possibility to hide custom pass from the create menu with the HideInInspector attribute.
HDRP: Fixed the ray traced sub subsurface scattering debug mode not displaying only the RTSSS Data (1332904)
HDRP: Fixed the RTAO debug view being broken.
HDRP: Fixed the shader graph files that was still dirty after the first save. (1342039)
HDRP: Fixed the sun leaking from behind fully opaque clouds.
HDRP: Fixed the transparent cutoff not working properly in semi-transparent and color shadows. (1340234)
HDRP: Fixed the various history buffers being discarded when the fog was enabled/disabled. (1316072)
HDRP: Fixed the volume not being assigned on some scene templates.
HDRP: Fixed the volumetric clouds cloud map not being centered over the world origin.
HDRP: Fixed the volumetric clouds having no control over the vertical wind. (1354920)
HDRP: Fixed ThreadMapDetail to saturate AO & smoothness strength inputs to prevent out-of-bounds values set by users. (1357740)
HDRP: Fixed tiled artifacts in refraction at borders between two reflection probes.
HDRP: Fixed timing issues with accumulation motion blur
HDRP: Fixed undo of some properties on light editor.
HDRP: Fixed undo on light anchor.
HDRP: Fixed undo-redo on layered lit editor.
HDRP: Fixed Undo/Redo instability of light temperature.
HDRP: Fixed unexpected rendering of 2D cookies when switching from Spot to Point light type (1333947)
HDRP: Fixed unexpectedly strong contribution from directional lights in path-traced volumetric scattering. (1304688)
HDRP: Fixed update order in Graphics Compositor causing jumpy camera updates. (1345566)
HDRP: Fixed update upon light movement for directional light rotation.
HDRP: Fixed usage of Panini Projection with floating point HDRP and Post Processing color buffers.
HDRP: Fixed various issues with non-temporal SSAO and rendergraph.
HDRP: Fixed various SSGI issues. (1327919, 1339297, 1340851)
HDRP: Fixed Vertex Color Mode documentation for layered lit shader.
HDRP: Fixed VFX flag "Exclude From TAA" not working for some particle types.
HDRP: Fixed VfX lit particle AOV output color space.
HDRP: Fixed viewport size when TAA is executed after dynamic res upscale. (1348541)
HDRP: Fixed volume interpolation issue with ScalableSettingLevelParameter.
HDRP: Fixed Volumetric Clouds not updated when using RenderTexture as input for cloud maps.
HDRP: Fixed volumetric fog being visually chopped or missing when using hardware Dynamic Resolution Scaling.
HDRP: Fixed volumetric fog being visually chopped or missing when using hardware Dynamic Resolution Scaling.
HDRP: Fixed volumetric fog in planar reflections.
HDRP: Fixed warning "Releasing render texture that is set to be RenderTexture.active!" on pipeline disposal / hdrp live editing.
HDRP: Fixed warning fixed on ShadowLoop include (HDRISky and Unlit+ShadowMatte).
HDRP: Fixed Warnings about "SceneIdMap" missing script in eye material sample scene
HDRP: Fixed white flash when camera is reset and SSR Accumulation mode is on.
HDRP: Fixed white flash with SSR when resetting camera history. (1335263)
HDRP: Fixed white flashes on camera cuts on volumetric fog.
HDRP: Fixed white flashes when history is reset due to changes on type of upsampler.
HDRP: Fixed wizard checking FrameSettings not in HDRP Default Settings.
HDRP: Fixed wobbling/tearing-like artifacts with SSAO.
HDRP: Fixed WouldFitInAtlas that would previously return wrong results if any one face of a point light would fit (it used to return true even though the light in entirety wouldn't fit).
HDRP: Fixed wrong color buffer being bound to pre refraction custom passes.
HDRP: Fixed wrong LUT initialization in Wireframe mode.
HDRP: Fixed wrong ordering in FrameSettings (Normalize Reflection Probes).
HDRP: Fixed XR depth copy (1286908)
HDRP: Fixed XR depth copy when using MSAA.
HDRP: Generating a GUIContent with an Icon instead of making MaterialHeaderScopes drawing a Rect every time.
HDRP: HDRP Wizard can still be opened from Windows > Rendering, if the project is not using a Render Pipeline.
HDRP: Indentation of the HDRenderPipelineAsset inspector UI for quality.
HDRP: MaterialReimporter.ReimportAllMaterials and MaterialReimporter.ReimportAllHDShaderGraphs now batch the asset database changes to improve performance.
HDRP: Mitigate ghosting / overbluring artifacts when TAA and physically-based DoF are enabled by adjusting the internal range of blend factor values. (1340541)
HDRP: Only display HDRP Camera Preview if HDRP is the active pipeline. (1350767)
HDRP: Prevent any unwanted light sync when not in HDRP. (1217575)
HDRP: Prevented user from spamming and corrupting installation of nvidia package.
HDRP: Reduced the number shader variants for the volumetric clouds.
HDRP: Reduced the volumetric clouds pattern repetition frequency. (1358717)
HDRP: Removed DLSS keyword in settings search when NVIDIA package is not installed. (1358409)
HDRP: Removed unsupported fields from Presets of Light, Camera, and Reflection Probes. (1335979)
HDRP: Significantly improved performance of APV probe debug.
HDRP: Support undo of HDRP Global Settings asset assignation.
HDRP: The default LookDev volume profile is now copied and referenced in the Asset folder instead of the package folder.
HDRP: The HDRP Wizard is only opened when a SRP in use is of type HDRenderPipeline.
HDRP: VFX : Debug material view incorrect depth test. (1293291)
HDRP: VFX : Debug material view were rendering pink for albedo. (1290752)
HDRP: VFX: Fixed LPPV with lit particles in deferred. (1293608)
HDRP: Viewport and scaling of Custom post process when TAAU or DLSS are enabled. (1352407)
HDRP: When the HDProjectSettings was being loaded on some cases the load of the ScriptableObject was calling the method
Resetfrom the HDProjectSettings, simply rename the method to avoid an error log from the loading.
IL2CPP: Added support for array values as custom attribute arguments. (1174903)
IL2CPP: Allow blittable, generic value types to be marshaled as delegate parameters. (1348863)
IL2CPP: Allow the debugger to grow the frame capacity on-demand. (1360149)
IL2CPP: Fix parsing of --custom-step command line argument to UnityLinker (1351726)
IL2CPP: Fixed "Unexpected generic parameter." exception when a generic method had a function pointer parameter (1364482)
IL2CPP: Fixed compiler error calling Enum.GetHashCode with System.Enum arguments. (1354855)
IL2CPP: Fixed issue with non-ASCII characters in installation path or project path that could cause builds to fail. (1322529)
IL2CPP: Fixed issue with UnityLinker that would cause Windows Runtime assemblies to be incorrectly loaded and parsed, causing builds to fail. (1315830)
IL2CPP: Fixed UnityLinker bug that caused callvirt instruction to be removed without removing the corresponding constrained instruction. (1297609)
IL2CPP: UnityLinker will now respect --unity-root-strategy if defined on the command line (1351728)
IMGUI: Fixed an issue where custom editor without CanEditMultipleObjects attribute is not displaying the script fields. (1279145)
IMGUI: Fixed an issue where the layers dropdown menu window is not closed while dragging the editor window. (1308381)
IMGUI: Fixed Gridview image of Texture not updating in ProjectBrowser when color space is updated in PlayerSettings. (1198127)
IMGUI: Fixed Project Window going blank when the objects are dragged between folders in Favourites. (1206532)
IMGUI: Hide the MaterialEditor in inspector when a MeshRenderer is hidden. (1289980)
Input System: Fixed input events being lost on Android.
Optimized input processing performance. (1337296)
iOS: Added missing API file updates.
iOS: Disabled audio output on pre-iOS14 simulators, as audio is broken here completely and causes a crash on startup. (1325806)
iOS: EditorUserBuildSettings.symlinkLibraries will affect sources in packages which are referenced by file tag in Packages/manifest.json (1301157)
iOS: Fixed an issue where display length is not updated when external display is disconnected. Users now can access only active displays (previously there was a possibility of caching known but inactive display indices). (1330759)
iOS: Fixed crash when performing two Microphone/WebCam permission requests at the same time. (1330126)
iOS: Fixed incorrect "Plugins colliding with each other" errors when using certain framework combinations. (1287862)
iOS: Fixed SystemInfo returning incorrect values for max compute buffer inputs on Metal (1299759)
iOS: Fixed tweaking WebCamTexture sampler setup resulting in GPU error on older devices. (1309523)
iOS: Provide Compass.headingAccuracy data. (1338663)
iOS: Updated the list of available frameworks in Plugin Inspector. (1194821)
iOS: When selected XCode project build destination is not allowed (root project folder, Assets folder, etc.) warning is displayed.
Kernel: Atomic 64-bit Load/Store on Win32/UWP x86 fixed (Reads and writes to 64-bit values are not guaranteed to be atomic on 32-bit Windows).
Kernel: Fixed issue where running player with auto connect on mobile device with disabled WiFi, disabling player connection. Then any attempt to connect to device manually will fail. (1311781)
Kernel: Fixed potential crash while extracting detangled name on OSX. (1308423)
Kernel: Stop using recently deprecated timer native functions on Mac/iOS/tvOS and replace with current official recommendation.
License: Added Licensing Client version into Editor Analytics.
License: Fixed memory leak in Licensing Module.
License: Fixed serverDirectory in license server configuration file.
Linux: Added handling for Norwegian Bokmal and Nynorsk in SystemInfo for macOS and Linux, and to SystemInfo in Runtime/Misc used by WebGL and MetroPlayer.
Linux: Fix "Not Responding" dialog window opens up in the Player when the splash screen's logo duration is set to 4.65 or higher. (1249666)
Linux: Fix crash under Wayland when opening Keyboard window from New Input System debug window. (1319311)
Linux: Fix recognition of game controllers with same usd product id. (1300415)
Linux: Fixed automated batch mode test from failing due to Unity exiting with error code 133 before all it's threads have shut down. (1303886)
Linux: Fixed gizmo dropdown not closing when clicking on the scene or game view. (1289590)
Linux: Fixed inspector prefab overrides window from drawing beyond the usable workarea (1119679)
Linux: Fixed main editor window from scrolling when using larger Gnome font sizes. (1311302)
Linux: Fixed main menu disappearing after certain layout change events. (1362449)
Linux: Fixed mismatch BeginSample/EndSample profiler errors when using a modal file save dialog while profiling. (1322750)
Linux: Fixed mismatch profiler BeginSample/EndSample errors when interacting with a modal dialog while the profiler is active in Editor mode. (1306180)
Linux: Fixed mouse clicks incrementing scroll x/y. (1308873)
Linux: Fixed plugin header files not being copied to the editor installer. (1345891)
Linux: Fixed shifted key events in the old input system. (1316748)
Linux: Input when using WASD does not get stuck when using the shift key in Play Mode. (1333044)
Linux: Keep the window that had focus last in focus after a modal dialog closes. (1319180)
Linux: Keyboard API now correctly displays non-US keyboard displaynames when using Keyboard.current.<key>.displayName (1266943)
Linux: Made updates to min/max size blocking if it results in a resized window to help ensure a window is ready after it's size has been changed. (1319323)
Linux: Make GUIViews obey the min/max constraints of the window they are in. (1327222)
Linux: NullReferenceException no longer appears when clicking the "Setting icon" under the "Profiler Module" dropdown. (1290647)
Linux: Removed broken gamepad auto mapping from SDL 2.0.14. (1322165)
Linux: Updated from SDL 2.0.12 to 2.0.14
macOS: Added support for Apple silicon Editor plugins. Fixed error where x64 plugins report a naming conflict with Apple silicon plugins. (1332566)
macOS: Dock is no longer ignored when exiting fullscreen and moving the window (1354879)
macOS: Fix Screen.resolutions refreshRate property. (1284854)
macOS: Fixed a bug related to 8 monitors connected at one time. (1272030)
macOS: Fixed crash if user unplugs a secondary display while running player. (1325384)
macOS: Fixed IMGUI mouse position when using the New Input System. (1298110)
macOS: Fixed incorrect workArea size when changing the dock at runtime. (1354356)
macOS: Fixed incorrect workArea size when changing the screen scaling at runtime. (1354329)
macOS: Fixed Input.inputString doesn't convert input to the suggestions from IME. (1305843)
macOS: Fixed Input.inputString not updating when cmd key is held down (1296862)
macOS: Fixed install path regression that resulted in iOS support not being able to be installed. (1337753)
macOS: Fixed memory leak in HDR Display related code
macOS: Fixed scene view lagging when the tile palette window was focused. (1316068)
macOS: Fixed Screen.currentResolution when Mac Retina Support Player Setting is off. (1286140)
macOS: Force to use GPU Lightmapper instead of CPU Lightmapper on Apple silicon (1341489)
macOS: If running under Rosetta 2, <Rosetta> will appear in the title bar next to the graphics mode. (1329708)
macOS: We now log basic system information when launching the Editor to the log file.
macOS: We now prompt for to save changes even when the window is minimized. (1320569)
macOS: When selected XCode project build destination is not allowed (root project folder, Assets folder, etc.) warning is displayed.
Mobile: No more MSAA and Pixel Light count warnings when building URP on Mobile platforms (1300605)
Mobile: Show provider icon and info text correctly.
Mono: Add missing facade dlls for Unity profiles (1367105)
Mono: Assembly.Load now loads distinct assemblies correctly. (1073523)
Mono: Fix missing .NET Standard 2.1 assemblies (System.Memory, System.Buffers...) (1367105)
Mono: Fixed "Loading assembly failed. File does not contain a valid CIL image" errors. (1336618)
Mono: Fixed FileSystemEventArgs.Name to be a relative path. (1344552)
Mono: Fixed issue in the mono web request stack where it would incorrectly wait on the Unity synchronization context for an asynchronous call to complete leading to the request being aborted on timeout. (1338465)
Mono: Fixed issue where the timeout of a HttpClient handler was not being used for requests. (1365107)
Mono: Fixed regression where a MissingMethodException would be thrown when IsComObject was called. (1346334)
Mono: Prevent unnecessary files from being copied into the Managed directory when managed code stripping is enabled. (1302474)
Multiplayer: Marked uNET HLAPI as deprecated.
N/A (internal): Fixed exception caused by EmbeddedLinuxPreProcessor, by only executing checks when EmbeddedLinux platform is selected.
Fixes the removal of unneeded EmbeddedLinux support files (e.g. unstripped debug symbols), to minimize the resulting archive size.
N/A (internal): Fixed native test instability CanLoadMesh. (1330725)
Package: (ml-agents) Removed unnecessary allocations and garbage collections during runtime.
Package: (Recorder) (macOS) Fixed an image stride issue for ProRes formats 4444 and 4444XQ.
Package: (Recorder) Do not perform the color space conversion from linear to sRGB for RenderTextures that are already sRGB.
Package: (Recorder) Ensured that the color space conversion from sRGB to linear is performed when required for EXR files
Package: (Recorder) Ensured that the Image Recorder encodes in sRGB when requested, even if the scripted render pipeline provides linear data.
Package: (Recorder) Fixed a memory leak in the AOV Recorder.
Package: (Recorder) Fixed an exception that occurred when sending a RenderTexture to a Recorder before creating this RenderTexture.
Package: (Recorder) Fixed an exception that occurred when the user performed the undo action after deleting a Recorder.
Package: (Recorder) Fixed audio recording issue when the frame interval is not starting at 0.
Package: (Recorder) Fixed invalid values in the alpha channel when performing texture sampling for different rendering and output resolutions.
Package: (Recorder) Fixed issues with the Recorder samples about synchronizing multiple recordings and resetting the Game view resolution.
Package: (Recorder) Fixed the Tagged Camera capture process to follow any camera changes that might occur.
Package: (Recorder) Fixed vertically flipped outputs on OpenGL hardware.
Package: (Recorder) Perform the appropriate color space conversion for Texture Sampling sources when required.
Package: Added Live Capture [1.0.1]. Use the Live Capture package to connect to the Unity Virtual Camera and Unity Face Capture companion apps to capture and record camera motion and face performances.
Package: Com.unity.purchasing updated to 4.0.3. Please refer to the package changelog online here:.
Package: Fixed bug that causes searcher window to be offset too far when accounting for host window boundaries.].
Package: Released com.unity.mathematics 1.2.4
Package: Updated [email protected].
Package: Updated Tutorial Authoring Tools to 1.0.0.
Package: Updated Tutorial Framework to 2.0.0.
Package: Visual Scripting :
- Fixed long values not preserved in literal nodes.
- Fixed root icons in breadcrumbs in the graph editor window.
- Fixed graph nodes icons
- Fixed project settings will not show when looking for graphs
- Fixed exception when user double clicks on a graph
- Raised warnings at edit time when a MouseEvent node is used when targeting handheld devices instead of build time.] Generated folder is removed when removing the VisualScripting package.] Preferences being searchable [BOLT-1218]
Package: [VisualScripting] Preferences spacing has been adjusted to avoid overlapsScripting] Warnings overflow in the console when deleting and adding a boolean variable in the blackboard
Package: [VisualScripting] Warnings when entering play mode when the "Script Changes While Playing" is set to Recompile And Continue Playing
Package Manager: Added an info icon will warn users when the package version they are using is not recommended for their Unity version.
Package Manager: Added missing tooltip to the refresh icon.
Package Manager: Deletion of root package folders from Project Browser is now prohibited. Users should use Package Manager Window to ensure proper removal of packages from their projects. (1285197)
Package Manager: Dependencies' state is automatically refreshed in the list when installed or removed by default. (1360937)
Package Manager: Documentation links are now secure or, if un-secure and not generated by Unity, a warning will show up letting users know they have to enable un-secure requests in "Player Setting". (1356909)
Package Manager: Enabled Update button for packages installed from Git, will check if any updates are available and update the Git package.
Package Manager: Errors are now automatically refreshed once packages are updated to fix the issue. (1342141)
Package Manager: Fix the button text being clipped and properties going out of panel under the foldout of Scoped Registries in Project Settings window.
Package Manager: Fixed a bug where a non-discoverable package is displayed as released on the user's Editor version. (1335740)
Package Manager: Fixed a bug where the Package Manager UI did not refresh after manually editing the package.json of an embedded package.
Package Manager: Fixed an issue where either no submodules or the wrong submodules could be cloned when using a Git-based dependency with both a path and a revision.
Package Manager: Fixed an issue where if the user has many asset store packages loaded in My Assets view, selecting the last package and scrolls up list show items with empty package name.
Package Manager: Fixed an issue where manually embedded packages from the project cache are not added in Perforce. (1314073)
Package Manager: Fixed an issue which could sometimes cause package resolution errors due to
EMFILEerrors in projects with a large number of packaged assets.
Package Manager: Fixed an issue which could sometimes lead to missing files in successfully resolved packages in projects with a large number of packaged assets.
Package Manager: Fixed confusing error message when 'View changelog' not available (1282094)
Package Manager: Fixed random stack overflow issue when installing and uninstalling packages. (1327700)
Package Manager: Fixed sample display name error info box when package is not in development and package manager set to lowest width.
Package Manager: Fixed size of add bouton.
Package Manager: Fixed tests compilation warnings.
Package Manager: Fixed the issue when sorting by "Update date" and filtering by "Downloaded", the order of the assets is inconsistent in My Assets tab. (1343200)
Package Manager: Fixed the issue where modifying sort option breaks package list ordering in the My Assets tab. (1343198)
Package Manager: Fixed the issue where sync code is not unregistered when the Package Manager window is closed. (1368318)
Package Manager: Fixed the issue where the ads packages license link takes you to a wrong url. (1350621)
Package Manager: Fixed the issue where
Open in Unityfrom the asset store website does not always work the first time. (1355418)
Package Manager: Fixed the UX issue where after a user installs a version of a package that does not match the version used after dependency resolution, it looks like the install did not happen. (1271576)
Package Manager: Hide reset feature button if the installed version is a patch of the lifecycle version. (1360446)
Package Manager: If user install same version as feature set required locally, we don't show it as customized. (1342339)
Package Manager: Implemented undo/redo in Scoped Registries Management. (1285075): Show package update icon and update button when there is a latest update available. Remove update icon and update button when there is a recommended update available that is not the latest update.
Package Manager: Show proper error message when user try to download an asset he doesn't have a paid for (free asset, wrong account) (1299159)
Package Manager: The expanded/collapsed state of Packages folder in Project Browser will now persist on Editor restart. (1307883)
Package Manager: There are no more missing packages in a Feature's dependencies list. (1344819)
Particles: Add texel size and mask interaction shader properties to particle system renderer. (1296392)
Particles: Added missing tooltips to + and - buttons. (1332732)
Particles: Apply Start Delay to Rate over Distance emission. (1314672)
Particles: Disable error dialog for incorrect Vertex Stream setup, because it is not possible to provide accurate information. (1304215)
Particles: Ensure ParticleSystem.Particle.angularVelocity gets applied properly when set via script. (1322645)
Particles: Fix incorrect evaluation of the end of stepped curves in Particle Systems. (1314389)
Particles: Fixed culling issue if a Particle System mesh changes. (1329097)
Particles: Fixed Mesh (+) button overlapping in Renderer Material field. (1332484)
Particles: Fixed misplaced object selector buttons under Trigger, Collision, and External Force modules. (1331893)
Particles: Fixed orientation of GPU instanced mesh particle shadows. (1036174)
Particles: Fixed shortcuts when particle overlay is collapsed. (1340675)
Particles: Keep Particle System curve editor labels within the GUI box, to avoid any clipping. (1323617)
Particles: Prevent errors and invalid particle simulations, when using an Arc of 0 and Ping-Pong mode in the Shape module. (1278594)
Particles: Run the Trigger Module on newly spawned particles in case they need removing, or a callback firing. (1328141)
Particles: The create button("+") is now hidden in the Particles Window when editing a Prefab Asset. (1287185)
Particles: Updated emitter velocity as soon as the property is changed, not on the next frame. (1342626)
Physics: Added better error message when trying to set an invalid mesh on a MeshCollider (1282750)
Physics: Anchored prismatic limit handles in screen space to avoid them occluding small Articulation Body links.
Physics: Changed the PhysX PVD connection code so that it can connect to a running PVD instance, and does set the visualization flags correctly.
Physics: Ensure that a Rigidbody2D never alters current Transform Z position at the point we perform interpolation write-back to the Transform position. (1362840)
Physics: Fixed a crash when accessing RaycastHit.lightmapCoord of a hit agains a Mesh that does not have texture channel 1 (1361884)
Physics: Fixed a crash when adding more than 65534 Colliders to a Rigidbody or an Articulation Body. (1325575)
Physics: Fixed an issue where, when using "Collision2D.GetContacts(List<ContactPoint2D>)", the list size was incorrectly set the same as the list capacity rather than the number of results returned. (1352777)
Physics: Fixed Articulation Bodies not changing collision detection mode via Script. (1330429)
Physics: Fixed Character Controller returning incorrect closest points when the center is offset. (1339454)
Physics: Fixed crash when trying to read texture coordinates from a RaycastHit when the mesh was unreadable. (1006742)
Physics: Fixed Physics Debugger not showing up in Prefab Mode. (1158611)
Physics: Fixed Rigidbody falling asleep when fixed joint is removed (963368)
Physics: Fixed Rigidbody.angularVelocity setter that constrained the velocity wrongly; it's now doing it in mass-space to match what should happen actually. (1083573): A component added to a Prefab in Prefab Mode which is found to conflict with a pre-existing component on an instance is now suppressed so the Inspector displays only the pre-existing component instead of both. (1148404)
Prefabs: Added error message to prevent entering playmode when the scene contains missing Prefabs instances. Added option to Unpack Prefabs when deleting them.
Prefabs: An open ComparisonViewPopup is now closed if undo operations cause selection changes which affect the content of the Overrides Window's tree view and render the popup's visibility redundant. (1325826)
Prefabs: Calling UnloadUnusedAssetsImmediate in OnWillCreateAsset/OnWillSaveAsset during prefab save no longer crashes. (1303644)
Prefabs: Checked that the GameObject pointed by a component, during scene loading, is actually a valid GameObject. (1333374)
Prefabs: Components that are suppressed on a Prefab instance by added components, now reappear when the suppressor is destroyed. (1295687)
Prefabs: Corrected API documentation for SceneManager.activeSceneChanged. (1038093)
Prefabs: Fixed an issue where OnValidate() could be called on an inactive Prefab. (1271820)
Prefabs: Fixed case 1085603: Undoing the 'Create/Replace prefab' action does not revert the Prefab to its previous state'. (1085603)
Prefabs: Fixed Crash when Exiting Play mode with Multiple scenes loaded (1298007)
Prefabs: Fixed FindAllInstancesOfPrefab to filter persistent instances.
Prefabs: Fixed multiple selection of added GameObjects not being applied to Prefabs (or reverted) via the Hierarchy context menu. (1313939)
Prefabs: Fixed prefabs don't visually show their HideFlags.NotEditable state in the Hierarchy window. (1324446)
Prefabs: Fixed Recovery GameObject is created when opening scene with missing Prefab as a child of other GameObject (1299744)
Prefabs: Fixed that changes to Prefab instances in the Scene are not reflected when dragging Materials/Scripts onto the Asset in the Project Window. (1311622)
Prefabs: Fixed that creating a Prefab by dragging did not handle missing scripts gracefully. (1259961)
Prefabs: Fixed that Overrides window does not refresh state after changing GameObject name in the comparison popup. (1300152)
Prefabs: Fixed undo not working when Renaming a Missing Prefab in the Hierarchy. (1165052)
Prefabs: Improved error logging when importing Prefab files. The import errors are now collected and logged as one message instead of individual errors and most importantly the Prefab asset path is made clear to the user. Also clicking the message will frame the Prefab in the Project Browser. Also missing Prefabs will now be logged as an error. (1298338)
Prefabs: PrefabUtility.SavePrefabAsset is now robust against supplied objects being reloaded during AssetModificationProcessor.OnWillSaveAssets. (1276013)
Prefabs: Removed wrong error message. (1287903)
Prefabs: Set as first/last sibling cannot be undone to reorder the Prefab hierarchy. (1296698)
Prefabs: Updated documentation for OnPostprocessPrefab to reflect better the behavior. (1304102)
Prefabs: Updated documentation that scripts using OnValidate can only modify data on itself. (1333052)
Profiler: Enabled GPU profiling in the Editor (1315474)
Profiler: Fixed a material leak in the UISystemProfiler UI (1280162)
Profiler: Fixed an issue where creating a Memory snapshot file writer with a null or empty string would crash the Editor. (1296217)
Profiler: Fixed an issue where we could no longer connect to tethered Android devices after they have been disconnected, without manually calling adb commands from the CLI. (1268987)
Profiler: Fixed Application.targetFrameRate frames visualization in the Editor when profiling Play mode. (1355826)
Profiler: Fixed crash due to Profiler code during Editor Shutdown
Profiler: Fixed gaps in script-driven profiler counters in the Editor when profiling Play mode. (1343692)
Profiler: Fixed HDRP/URP GPU statistics in Editor showing zeroes (1299569) Recorder reporting invalid data after being disabled and re-enabled. (1321813)
Profiler: Fixed selection of RestoreManagedReferences sample and its children in Profiler Window (1330206)
Profiler: Fixed standalone profiler shows editor update check dialogue on start. (1334264)
Profiler: Fixed systrace captures producing corrupted data when started in middle of frame, mismatch begin and end samples. (1270929)
Profiler: Fixed the Thread Selection Dropdown in Hierarchy view showing multiple threads as selected if they all share the same name.
Profiler: Player names are no longer aggresively cut off. (1345540)
Profiler: Profiler charts now correctly render user-defined Profiler Counters in the Network category captured when UNet is disabled. (1360925)
Profiler: Profiler will now clear when entering playmode with the game view maximised. (1307042)
Profiler: Removed Total Time from GI Profiler Module chart as it duplicated time in the stacked chart (1305568)
Profiler: Text is now selectable in a custom Profiler module's default Details View. (1336943)
Profiler: The Profiler window can no longer be shrunk small enough to hide some of the toolbar options. (1354357)
Profiler: Truncated project name in the profiler broadcast to fit within the message buffer. (1322888)
Profiler: When disconnecting from and autoconnected device the profiler will now default back to playmode instead of the previous connection. (1268887)
Scene Manager: Added debouncing to the search field in the Hierarchy and Scene view for better search input experience in large scenes. Is also now consistent with Project Browser searching behavior. (1315731)
Scene Manager: Fix rename overlay in Hierarchy to support hierarchy changes while renaming. (1296235)
Scene Manager: Fix that prevent dragging gameobject in a scene that is not loaded (1291614)
Scene Manager: Fixed "PlayerLoop internal function has been called recursively" error is thrown when calling NewScene() from the Update function. (1258244)
Scene Manager: Fixed focus issue when using a secondary hidden Hierarchy. When creating a new GameObject a visible Hierarchy tab is focused, prioritizing the last interacted Hierarchy if visible. Focusing a hidden Hierarchy tab is therefore now prevented in this situation. (1190664)
Scene Manager: Fixed GameObjects in Hierarchy window not hidden when using HideFlags.HideInHierarchy until next Hierarchy rebuild. (1167675)
Scene Manager: Fixed SceneManager.GetSceneByName() returning null when buildsettings path to the Scene is given as a parameter. (1155473)
Scene Manager: Improved performance of updates on large arrays. (1272309)
Scene Manager: Paste as Child' now supports pasting gameObjects to SubScenes. (1316660)
Scene Manager: Updated dirtiness logic for undoing after save. (1221800)
Scene/Game View: Added a verification of the current mode (edit/play) to only display FPS when in playmode as the measured value does not mean anything in edit mode and can confuse some users. (1285123)
Scene/Game View: Added an option to disable the hide of the gizmos. (1313974)
Scene/Game View: Added check before recursion that the gizmo is not already active. (1326407)
Scene/Game View: Bugfix for Instability in picking tests failure. (1329183)
Scene/Game View: Fix for built-in tool buttons toggling highlighted state when clicked consecutively. (1344813)
Scene/Game View: Fix for translation tools offsetting object when cursor moved off-screen (1360113)
Scene/Game View: Fixed Camera Overlay lost when entering play mode. (1340455)
Scene/Game View: Fixed case where clicking on a popup button while the popup was open would not close the existing window. (1335070)
Scene/Game View: Fixed clicking on a non-prefab child of a prefab instance incorrectly selecting prefab root. (1305433)
Scene/Game View: Fixed custom tool button incorrectly showing a built-in tool in certain cases where EditorToolContext is active. (1339433)
Scene/Game View: Fixed fading between 2D and 3D modes with overlays. (1339978)
Scene/Game View: Fixed for IMGUI content not being displaced by overlay toolbars. (1339990)
Scene/Game View: Fixed for the default Overlays' preset not matching the intended design. (1339964)
Scene/Game View: Fixed Gameview focus inconsistent when re-docking. (1272795)
Scene/Game View: Fixed Grid Axis field in the Grid Settings Overlay not accepting input. (1345036)
Scene/Game View: Fixed Hierarchy context menu showing empty submenu entries. (1319505)
Scene/Game View: Fixed issue where some Overlay windows would throw exception after domain reload. (1345651)
Scene/Game View: Fixed issue with scene view overlay in some packages (1339984)
Scene/Game View: Fixed possible exception when removing a collider component while the collider editor tool is active. (1259502)
Scene/Game View: Fixed Scene Hierarchy context menu not showing separators in submenus. (1247305)
Scene/Game View: Fixed Scene View camera zooming too far when using middle mouse wheel scroll with a very small object framed. (1300336)
Scene/Game View: Fixed shortcut conflict between Scene View "Nudge Grid" and 2D "Flip X Y" axis. (1309850)
Scene/Game View: Fixed some Scene View Overlays incorrectly registering as applicable to any Editor Window. (1337371)
Scene/Game View: Fixing MissingReferenceException in the Scene overlay when deleting a selected Particle System. (1295035)
Scene/Game View: Fixing overlay placement when scene view is resizing (1341038)
Scene/Game View: Fixing selection outline not drawn for SRP objects in scene view (1294951)
Scene/Game View: Fixing selection outline not fading when behind other objects (1119607)
Scene/Game View: Fixing Slider1D Handle
Scene/Game View: Global tool buttons are disabled when not useable. (1310614)
Scene/Game View: Improved performance when opening
Scene Viewwindow. (1343564)
Scene/Game View: Increased maximum Scene Camera fly speed to 1e+5. (1287220)
Scene/Game View: Overlay positioning optimization by using transform position and usageHints (1339973)
Scene/Game View: Scene/Game View: Fixed an exception when loading an Overlay preset with invalid window type. (1337897)
Scene/Game View: The Scene view camera was snapping back or forward when zooming in while holding the Alt key.
This was due to a small error in the formula.
There is no more jump in the zoom with this fix. (1293718)
Scripting: Added support for custom attribute types inheriting from RuntimeInitializeOnLoadMethodAttribute. (1334599)
Scripting: Added type and method information to exceptions raised by the assembly reference checker. (1307194): Avoid GC allocation in UnitySynchronizationContext. (1276456)
Scripting: Build fails when there is a comma in path. (1325976)
Scripting: Change Roslyn Analyzers to be run part of normal Compilation Step.
This will also result in Analyzer errors to be treated as an Compile Error.
Scripting: Disabled native test causing instabilities. (1322841)
Scripting: Enabled IPv6 sockets creation with the ICMP protocol. (1309061)
Scripting: Exclude readonly files from script execution order menu to reduce visual noise. (1281215)
Scripting: Fix a stack overflow occurring when a custom
ILogHandleris throwing an exception in LogFormat/LogException methods (1241896)
Scripting: Fix crash on Stackoverflow, when instantiating infinitely in OnEnable (1263270)
Scripting: Fix not being able to select external script editors that don't have the '.app' extension on OSX. (1250634)
Scripting: Fix possible exception after changing Roslyn Analyzers to be part of the initial compilation (1313026)
Scripting: Fixed a bug where adding a script to the list of custom-ordered scripts in "Project Settings" -> "Script Ordering" would pick up a totally unrelated script. (1327742)
Scripting: Fixed a crash happening when trying to enter an unsupported character (such as an emoji) in a field in a game built with IL2Cpp backend. (1314163)
Scripting: Fixed a memory corruption crash related to the incremental GC incorrectly freeing objects.
Scripting: Fixed a memory leak happening when removing listeners from a UnityEvent that is never raised afterwards. (1303095)
Scripting: Fixed ApiUpdater incorrectly removing usings in some scenarios
Scripting: Fixed assemblies in non local/embedded packages triggering updater consent (1171778)
Scripting: Fixed condition on accessing a game object from a callback while it was being constructed that was leaving the original GameObject managed wrapper in a detached state. (1295939)
Scripting: Fixed crash in ScriptUpdater.exe when editor is installed in a path containing commas. (1341703)
Scripting: Fixed editor crash when an animation tries to pass an invalid scriptable object as a method parameter. (1252134)
Scripting: Fixed invalid memory write in MonoBehaviour when it destroys its own instance within the OnDisable call. (1286736)
Scripting: Fixed nested aliases handling in ApiUpdater (1251569)
Scripting: Fixed NVidia native libraries being included in player build when the Module is disabled. (1332465)
Scripting: Fixed potential error due to ApiUpdater not handling extension methods correctly.
Scripting: Fixed rare deadlock case in GC. (1229776)
Scripting: Fixed undoing changes to a prefab instance not triggering object change events. (1308479)
Scripting: Fixed XmlSerializer not working with managed code stripping when using the mono backend. (1331829)
Scripting: Fixes HttpUtility.UrlEncode runtime exceptions (1296177)
Scripting: Ignore Visual Scripting assemblies when building for mobile devices, so no MouseEvent handler warning is raised.
Scripting: Improved diagnostic messages for unsupported cases of RuntimeInitializeOnLoadMethod usage. (1246081)
Scripting: Interpolated verbatim strings can start with both $@"..." and @$"..." as specified in C# 8.0. (1304285)
Scripting: NativeArray<T>.ReadOnly now implements IEnumerable<T>. (1319358)
Scripting: Performance improvements in ApiUpdater.
Scripting: Provide Visual Studio 2019 installers rather than Visual Studio 2017
Scripting: Self-referencing assemblies will give error during assembly validation.
Scripting: UnityWebRequest no longer re-uses previous connections when custom certificate handler is changed. (1337019)
Scripting: When an editor-only precompiled assembly reference is ignored by the script compiler because the asmdef referencing it is not editor-only, a warning now appears in the console. (1220959)
Scripting: When inspecting a readonly asmdef file, the inspector fields are disabled and a small text indicates that the asmdef file is read-only. (1255405)
Search: Added a tree view to list and browse save searches.
Search: Added support to search references using an UnityEngine.Object instance ID, i.e.
ref:-2458, -2458 can be a component instance ID. (1331890)
Search: Fix editor stall when the asset worker try to resolve a message log with an UnityEngine.Object in a non-main thread. (1316768)
Search: Fixed degenerative indexing issue with VFX assets.
Search: Fixed errors are thrown on enabling 'UI Toolkit Live Reload' in 'Search asset store' window. (1299600)
Search: Fixed floating point search expression parsing for non US locales.
Search: Fixed indexed prefab objects cannot be resolved using the search result global object id. (1328618)
Search: Fixed prefab subtypes not available in default project index. (1332948)
Search: Fixed search index incremental update merge issue.
Search: Fixed search index should discard long serialized property string (i.e. embedded JSON string). (1362623)
Search: Fixed search item label and description in compact mode.
Search: Fixed search tab showing HTML <a> tags for the item count. (1301148)
Search: Fixed search view error reporting in status bar when no group is selected.
Search: Fixed SearchService.Request when used with a non-asynchronous queries.
See.
Search: Fixed too many objects being indexed and out of memory exception
Reported on the forums:.
Serialization: Added pruning when an object with [SerializeReference] contains missing types. (1338952)
Serialization: Binary2Text adding support for SerializeReference/ReferenceRegistry. (1284031)
Serialization: Fix SerializeReference object missing in certain situation.
Serialization: fix uninitialized member field reported by static analysis (1305542)
Serialization: Fixed an issue where a SerializedObject is created with empty Objects. (1279513)
Serialization: Fixed clone/load/save performance regression. (1338713)
Serialization: Fixed for ability to find assets derived from a Generic type in the object selector window. (1288312)
Serialization: Fixed for name conversion not working in specific code path. (1352617)
Serialization: Fixed regression where prefab override would not show up on SerializeReference instance and you could no longer remove contextually a single Override. (1348031)
Serialization: Fixed SerializedProperty to properly set a value across all selected objects when the value already present on the first object matches the specified value. (1228004)
Serialization: Missing types from managed referenced object are properly stripped when creating a player build.
Services: Analytics no longer auto-activates on new project link.
Services: Fixed Android app_build is 0 for reports in Cloud Diagnostics. (1217055)
Services: Fixed Cloud Diagnostics native crash reporting on tvOS. (1332243)
Services: Fixed Cloud Diagnostics sometimes failing to provide line numbers for Windows Native crashes. (1329172)
Services: Fixed iOS builds may fail in Xcode with error about USYM_UPLOAD_AUTH_TOKEN. (1329176)
Services: Fixed the "Latest Version" section of the In-App Purchasing Settings when com.unity.purchasing version of 2 or less is installed. It now always offers the verified version, but adds migration warning messages about moving to newer versions which do not use the IAP Asset Store plugin.
Services: Fixed upload of symbols to Cloud Diagnostics for Windows builds. (1329220))
Shadergraph: - Clean up console error reporting from node shader compilation so errors are reported in the graph rather than the Editor console (1296291)
Shadergraph: Added padding to the blackboard window to prevent overlapping of resize region and scrollbars interfering with user interaction.
Shadergraph: Blackboard now properly handles selection persistence of items between undo and redos.
Shadergraph: Disconnected nodes with errors in ShaderGraph no longer cause the imports to fail. (1349311)
Shadergraph: Fixed "Disconnect All" option being grayed out on stack blocks. (1313201)
Shadergraph: Fixed a bug when a node was both vertex and fragment exclusive but could still be used causing a shader compiler error. (1316128)
Shadergraph: Fixed a bug where changing a Target setting would switch the inspector view to the Node Settings tab if any nodes were selected.
Shadergraph: Fixed a bug where old preview property values would be used for node previews after an undo operation.
Shadergraph: Fixed a selection bug with block nodes after changing tabs (1312222)
Shadergraph: Fixed a serialization bug wrt PVT property flags when using subgraphs. This fixes SRP batcher compatibility.
Shadergraph: Fixed a Shader Graph issue where property auto generated reference names were not consistent across all property types. (1336937)
Shadergraph: Fixed a ShaderGraph issue where a material inspector could contain an extra set of render queue, GPU instancing, and double-sided GI controls.
Shadergraph: Fixed a ShaderGraph issue where a warning about an uninitialized value was being displayed on newly created graphs. (1331377)
Shadergraph: Fixed a ShaderGraph issue where Float properties in Integer mode would not be cast properly in graph previews. (1330302)
Shadergraph: Fixed a ShaderGraph issue where hovering over a context block but not its node stack would not bring up the incorrect add menu.. (1351733)
Shadergraph: Fixed a ShaderGraph issue where keyword properties could get stuck highlighted when deleted. (1333738)
Shadergraph: Fixed a ShaderGraph issue where ObjectField focus and Node selections would both capture deletion commands. (1313943)
Shadergraph: Fixed a ShaderGraph issue where resize handles on blackboard and graph inspector were too small. (1329247)
Shadergraph: Fixed a ShaderGraph issue where selecting a keyword property in the blackboard would invalidate all previews, causing them to recompile. (1347666)
Shadergraph: Fixed a ShaderGraph issue where the right click menu doesn't work when a stack block node is selected. (1320212)
Shadergraph: Fixed a warning in ShaderGraph about BuiltIn Shader Library assembly having no scripts.
Shadergraph: Fixed an issue where a requirement was placed on a fixed-function emission property. (1319637)
Shadergraph: Fixed an issue where an informational message could cause some UI controls on the graph inspector to be pushed outside the window. (1343124)
Shadergraph: Fixed an issue where an integer property would be exposed in the material inspector as a float. (1330302)
Shadergraph: Fixed an issue where fog node density was incorrectly calculated.
Shadergraph: Fixed an issue where generated property reference names could conflict with Shader Graph reserved keywords. (1328762)
Shadergraph: Fixed an issue where ShaderGraph "view shader" commands were opening in individual windows, and blocking Unity from closing.
Shadergraph: Fixed an issue where the Rectangle Node could lose detail at a distance. New control offers additional method that preserves detail better (1156801)
Shadergraph: Fixed an issue where the ShaderGraph transform node would generate incorrect results when transforming a direction from view space to object space. (1333781)
Shadergraph: Fixed an issue where users can't create multiple Boolean or Enum keywords on the blackboard. (1329021)
Shadergraph: Fixed an issue with how the transform node handled direction transforms from absolute world space in camera relative SRPs. (1323726)
Shadergraph: Fixed an unhelpful error message when custom function nodes didn't have a valid file. (1323493)
Shadergraph: Fixed bug where an exception was thrown on undo operation after adding properties to a category (1348910)
Shadergraph: Fixed bug where it was not possible to switch to Graph Settings tab in Inspector if multiple nodes and an edge was selected. (1357648)
Shadergraph: Fixed compilation problems on preview shader when using hybrid renderer v2 and property desc override Hybrid Per Instance.
Shadergraph: Fixed default shadergraph precision so it matches what is displayed in the graph settings UI (single). (1325934)
Shadergraph: Fixed divide by zero warnings when using the Sample Gradient Node.
Shadergraph: Fixed how shadergraph's prompt for "unsaved changes" was handled to fix double messages and incorrect window sizes. (1319623)
Shadergraph: Fixed incorrect warning while using VFXTarget.
Shadergraph: Fixed indent level in shader graph target foldout. (1339025)
Shadergraph: Fixed inspector property header styling.
Shadergraph: Fixed issue where vertex generation was incorrect when only custom blocks were present. (1320695) Parallax Occlusion Mapping node to handle non-uniformly scaled UVs such as HDRP/Lit POM. (1347008)
Shadergraph: Fixed ParallaxMapping node compile issue on GLES2
Shadergraph: Fixed ParallaxOcclusionMapping node to clamp very large step counts that could crash GPUs (max set to 256). (1329025)
Shadergraph: Fixed reordering when renaming enum keywords. (1328761)
Shadergraph: Fixed ShaderGraph BuiltIn target not having collapsible foldouts in the material inspector. (1339256)
Shadergraph: Fixed ShaderGraph BuiltIn target not to apply emission in the ForwardAdd pass to match surface shader results. (1345574)
Shadergraph: Fixed ShaderGraph exception when trying to set a texture to "main texture". (1350573)
Shadergraph: Fixed ShaderGraph HDRP master preview disappearing for a few seconds when graph is modified.
Shadergraph: Fixed ShaderGraph isNaN node, which was always returning false on Vulkan and Metal platforms.
Shadergraph: Fixed ShaderGraph sub-graph stage limitations to be per slot instead of per sub-graph node. (1337137)
Shadergraph: Fixed some shader graph compiler errors not being logged (1304162)
Shadergraph: Fixed the appearance (wrong text color, and not wrapped) of a warning in Node Settings. (1356725)
Shadergraph: Fixed the BuiltIn Target to perform shader variant stripping. (1345580)
Shadergraph: Fixed the Custom Editor GUI field in the Graph settings that was ignored.
Shadergraph: Fixed the default dimension (1) for vector material slots so that it is consistent with other nodes. (1328756)
Shadergraph: Fixed the incorrect value written to the VT feedback buffer when VT is not used.
Shadergraph: Fixed the InputNodes tests that were never correct. These were incorrect tests, no nodes needed tochange.
Shadergraph: Fixed the node searcher results to prefer names over synonyms. (1366058)
Shadergraph: Fixed the ordering of inputs on a SubGraph node to match the properties on the blackboard of the subgraph itself. (1354463)
Shadergraph: Fixed treatment of node precision in subgraphs, now allows subgraphs to switch precisions based on the subgraph node (1304050)
Shadergraph: Fixed unhandled exception when loading a subgraph with duplicate slots. (1366200)
Shadergraph: Fixed virtual texture layer reference names allowing invalid characters (1304146)
Shadergraph: Node included HLSL files are now tracked more robustly, so they work after file moves and renames (1301915)
Shadergraph: ShaderGraph SubGraphs now report node warnings in the same way ShaderGraphs do. (1350282)
Shadergraph: Updated the searcher package to 4.9.0 and bump ShaderGraph dependency to remove some unwanted DLLs from editor.
Shadergraph: Updated the ShaderGraph searcher package dependency to be in sync with the latest searcher package version i.e. 4.8.0.
Shaders: Added a LevelOfDetail read-only property to ShaderData.Subshader. (1352774)
Shaders: Added compute shader compilation logging during project builds or on "Compile and show code" usage. Added raytracing shader (.raytracing file) compile-time logging on import. (1321684)
Shaders: Added missing PDB data output for DXC DX12 debug shaders for use in external analysis tools. (1333848)
Shaders: Added tests for custom BFI and UBFE implementations in HLSLcc (1305543)
Shaders: Fixed ddx_fine and ddx_coarse on Vulkan and capable GLCore targets. (1323892)
Shaders: Fixed DXC validator error on Windows that would result from using '#pragma require WaveMultiPrefix'. (1333695)
Shaders: Fixed editor crash if shader error db fails to open. (1327429)
Shaders: Fixed inaccurate shader_feature usage tracking with shader stage specific keywords. (1255901)
Shaders: Fixed memory leaks in the shader compiler process when compiling ray tracing shaders. (1352198)
Shaders: Fixed PassIdentifier being erroneously invalid in some cases. (1348023)
Shaders: Fixed the "end of shader compiler log" printing to the editor log at compiler crashes if the compiler log is very small.
Shaders: It is now possible to determine which keywords are disabled in a given ShaderKeywordSet by using ShaderUtil.GetPassKeywords (1338833)
Shaders: Prevent editor crash on unlucky timing when a shader compiler process is being killed.
Shaders: ShaderUtil now has an API to retrieve an array of local keywords valid for a particular shader stage of a Pass and an API to check whether a given keyword is valid for a Pass or a shader stage of a Pass. (1347380)
Shaders: To fix Linux Editor SPIR-V validation error when returning OpArrayLength in Compute shader, by generating OpArrayLength with no signedness. (1302657)
Stadia: Building project with Stadia was broken. Fixed now. (1356829)
Stadia: Running a project on Stadia kit got broken. Fixed now. (1356798)
Terrain: Added compatibility tags used to distinguish terrain supported shaders. (1283086)
Terrain: Fixed Brush Preview rendering when a different hot control is active (1276282)
Terrain: Fixed Brush Preview rendering when mouse is outside of SceneView bounds (1281231)
Terrain: Fixed incorrect conditional statement when validating terrain material shaders. (1307951)
Terrain: Fixed Terrain Tools package discoverability in Package Manager.
Terrain: Fixed texture atlas bleeding that occurred on details. Added half pixel offset for grass texture UVs, and collapsed uvs for detail meshes without textures. (1268510)
Terrain: Moved TerrainCallbacks from UnityEngine.TerrainAPI to UnityEngine namespace.)
Terrain:
Nature/Tree Soft Occlusion LeavesShader correctly supports the render paths scene view mode. (1303714)
Tests: Fixed test failure EditorGUI.CanCloseNextWindowDuringOnDestroy on mac and linux platform. (992239)
Timeline: Fixed an issue where ScriptableObjects are not recognized as valid INotificationReceivers
Timeline: Make audio tracks respect audio listener pause state. (1313186)
Timeline: skip unstable test FrameLock_FollowsGameTimeClock for UWP (1342169)
TLS: MacOS TLS/SSL verification now checks also against (un)trusted certificates that were manually added to the keychain. (1306147)
uGUI: Fixed some UGUI editor fields not showing prefab override indicators. (1290387)
uGUI: Fixed UI Components being selected when their child is selected. (1067993)
uGUI: Fixed undesired values for the Scrollbar by clamping and rounding it. (1312723)
uGUI: UGUI: This PR fixes a CPU performance issue, were the UI re-renders from scratch every time we select a delayed floating point component (on the IMGUI stack).
The reason is quite simple, a floating point comparison between the string representation and the double value is incorrect, causing to always be false.
This in turn causes the UI to always be tagged as dirty, and unleashes draw calls / flushing and many bad performance issues on the Editor loop. (1353802)
UI: Fixed an issue where a Unity GUI element would be stuck in the pressed state on resume. (1236185)
UI: Fixed duplicate USS files in the StyleSheets pane when editing-in-place (1299314)
UI: Fixed error in console when toggling isOn in inspector during playmode. (1307257): Fixed issue where wrong aspect was used when a explicit aspect is assigned.: 2021.2 Backport to UI Toolkit package of fix for 1351667: [UI Builder] Text Shadow Color resets to clear after reopening the project. (1351667)
UI Toolkit: Added importedWithErrors and importedWithWarnings APIs to USS assets and UXML assets to differentiate between a failed import and an import of an empty file. (1320914)
UI Toolkit: Added missing asset conversion information for Text Core generated assets. (1348577)
UI Toolkit: Added missing styling definition for dropdown in runtime stylesheet. (1314322)
UI Toolkit: Added support for RenderTexture for background-image in the UI Builder inspector. (1320359)
UI Toolkit: Added tooltip in StyleSheets pane when hovering a class selector that is trimmed (UI Builder) (1313198)
UI Toolkit: Allowed UIElements Debugger to pick elements from Game View outside of play mode: Child's reference is not renamed in parent UXML after renaming the child UXML file. (1319903)
UI Toolkit: Cleaned up the Theme menu. (1318600)
UI Toolkit: Elements hover and focused states are not properly reset when attaching to a new hierarchy (1287198)
UI Toolkit: Ensured that resizing by dragging Resize handles always produces dimensions with rounded values. (1322550)
UI Toolkit: Ensures that only modified files are saved to disk (1355591)
UI Toolkit: Ensures the UI Builder no longer populate VisualElementAsset.stylesheetPaths when adding stylesheets. (1299464): Fix VisualElement doesn't render instantly after setting display property to flex through C# (1359661)
UI Toolkit: Fixed a bug where users were able to drag slider outside its container when a text field was present
UI Toolkit: Fixed a few problems with the showMixedValue property for BaseCompositeField:
- Subfield's callback registration needs to be applied to the bind event, not the attach/detach panel event (see constructor code) --> this fixes a problem with field recycling that did not respond after being recycled
- Remove boxing and closure issues on the current implementation of the callback in question
- We should not use ve.userData to store the SerializedProperty (userData should be reserved for the user, not the field), use SetProperty() instead
- showMixedValue does not need to be set on BaseField.SetValueWithoutNotify; it is set later by binding styles code.
UI Toolkit: Fixed a logic error when deciding whether styles should be updated when the pseudo states change. (1348866)
UI Toolkit: Fixed adding templates in an empty document (UI Builder). (1322904)
UI Toolkit: Fixed an Error in the asset management of the StyleSheet that would show up in a build
UI Toolkit: Fixed an issue causing ListView's reordering to stop working after docking its parent window to a new pane. (1345142)
UI Toolkit: Fixed animated reorderable ListView that was having broken item heights when they were dragged too fast. (1361734)
UI Toolkit: Fixed API inconsistency of the ScrollView class not exposing the "mode" as a property. (1328093)
UI Toolkit: Fixed bad bounding box calculation causing clicking on rotated elements sometimes resulting in no click. (1345300) bug where runtime cursor should not be reset unless it was overridden. (1292577)
UI Toolkit: Fixed ClickEvent unpredictably not being sent on Android and iOS when the touch sequence spans over multiple update frames. (1359485)
UI Toolkit: Fixed clicking in empty space in Hierarchy pane not deselecting a USS Selector and vice-versa for the StyleSheets pane. (UI Builder)
UI Toolkit: Fixed clipping issue with nested scrollviews. (1335094)
UI Toolkit: Fixed clipping with large rects when under a group transform. (1296815)
UI Toolkit: Fixed contentHash not being updated in UI Builder. (1336924) corrupted atlas for Inter. (1330758)
UI Toolkit: Fixed cursor not updating when changing it using a variable which has hotspot data. (1340471)
UI Toolkit: Fixed custom editor not showing as disabled for read-only inspectors (1299346)
UI Toolkit: Fixed custom element UXML factory not picked up in pre-compiled DLL. (1316913)
UI Toolkit: Fixed default clicking scroll amount in ScrollView. (1306562)
UI Toolkit: Fixed directional navigation bug where some elements could be skipped during horizontal navigation. Improved choice of next element when multiple candidates are valid. (1298017) EventSystem using InputSystem package sometimes sending large amounts of PointerMoveEvents during a single frame. (1295751)
UI Toolkit: Fixed exception in ListView when pressing page up key after hitting navigation keys. (1324806)
UI Toolkit: Fixed exception on Text Settings coming from uninitialized Line Breaking Rules when text wrap is enabled. (1305483)
UI Toolkit: Fixed exception thrown when repainting a panel that uses a destroyed texture. (1364578)
UI Toolkit: Fixed exception thrown when selecting the currently open UXML asset in the Project Window. (UI Builder)
UI Toolkit: Fixed ExecutiveCommand commandName being stripped when a sent event require a new layout pass. (1329303)
UI Toolkit: Fixed float and double attributes not being stored correctly in UXML as CultureInvariant. (UI Builder)
UI Toolkit: Fixed focus outline for the following controls: CurveField, GradientField, EnumField/PopupField (and derivatives), RadioButton (choice), ObjectField (when hovered). (1324381)
UI Toolkit: Fixed for unsetting min and max size in UI Builder. (1322457)
UI Toolkit: Fixed from UI Builder package 1.0.0-preview.13 to the editor.
UI Toolkit: Fixed GraphView shader not compiling on Shader Model < 3.0. (1348285)
UI Toolkit: Fixed handling of a deleted StyleSheet that is being used by the currently open UXML UI Document. (UI Builder)
UI Toolkit: Fixed hover state remaining after a touch event on iOS and Android. UI Toolkit runtime now ignores simulated mouse events if no mouse is present on the device. (1326493)
UI Toolkit: Fixed how the USS styles are applied: we now skip unknown properties instead of breaking when applying a style. (1312500)
UI Toolkit: Fixed inconsistencies in the visuals of text underline/strikethrough. (1349202)
UI Toolkit: Fixed infinite loop in CreateInspectorGUI if Editor contains a child InspectorElement targetting a different SerializedObject. (1336093) the layout of a document was getting corrupted after saved the document. (1348002)
UI Toolkit: Fixed issue where user could not override the text color and font-size of the Unity Default Runtime Theme using the ":root" or "VisualElement" selectors. (1335507)
UI Toolkit: Fixed Label Element is not resized when Display Style is changed from None to Flex (1293761)
UI Toolkit: Fixed left-click not opening EnumField after using ContextMenu on MacOS. (1311011)
UI Toolkit: Fixed ListView item selection through PointerMoveEvent, for example when holding right-click down while clicking. (1287031)
UI Toolkit: Fixed non-uniform scaling issues with UI Toolkit elements. (1336053) thrown when reimporting the uss while an unsaved uxml file is in the Builder. (1298490) NullReferenceException with using TrackPropertyValue on BindingExtensions for ExposedReference and Generic serialized property types (1312147)
UI Toolkit: Fixed older GraphView still relying on old idsFlags field
UI Toolkit: Fixed pointer events not working correctly when multiple UI Documents have different Screen Match values. (1341135)
UI Toolkit: Fixed RadioButtonGroup and DropdownField choice attribute parsing in the UI Builder, when first added.
UI Toolkit: Fixed rebuild logic on inspectors with culled elements. (1324058)
UI Toolkit: Fixed regression on the styling of the ProgressWindow's progress bars, removed usage of images in progress bar. (1297045)
UI Toolkit: Fixed scroll wheel event modifiers not assigned when using EventSystem and StandaloneInputModule combination in UI Toolkit. (1347855)
UI Toolkit: Fixed scrollbar showing for no meaningful reason when the content of a scrollview is almost equal to the size of the scrollview. (1297053)
UI Toolkit: Fixed scrollview offset when Reordering the contents in hierarchy is not visible. (1341758)
UI Toolkit: Fixed selection on pointer up on mobile to allow touch scrolling. (1312139)
UI Toolkit: Fixed settings search not working for UI Builder settings in the Project Settings window. (UI Builder)
UI Toolkit: Fixed SVG triangle clipping issue (1288416)
UI Toolkit: Fixed TextCore visual artifacts caused by ddx/ddy glitches on AMD Radeon hardware. (1317114) the focus handling so elements not displayed in the hierarchy cannot be focused. (1324376)
UI Toolkit: Fixed the hover and pressed color of buttons in the Runtime theme (1316380)
UI Toolkit: Fixed the InvalidCastException thrown when assigning a built-in image asset to a Texture2D in the UI Builder inspector. (UI Builder)
UI Toolkit: Fixed the lack of feedback to users about problems with the UIDocument and PanelSettings inspectors. (1351792)
UI Toolkit: Fixed the missing Unicode arrow on ShaderGraph Transform Node. (1333774)
UI Toolkit: Fixed the refresh of the canvas's content when modifying properties of style selectors. (1323665)
UI Toolkit: Fixed the source path of template referenced in UXML getting cleared after saving (1288918)
UI Toolkit: Fixed Toolbar shrinking when there is another element filling the parent container. (1330415)
UI Toolkit: Fixed tooltip not showing in the UI Builder inspector. (1346433)
UI Toolkit: Fixed ui builder layout broken when restarting unity. (1340218)
UI Toolkit: Fixed UIT-1340195 and UIT-1340189
UI Toolkit: Fixed view data persistence not working inside custom inspectors that use UI Toolkit. (1311181)
UI Toolkit: Fixed VisualElement contains "null" stylesheet after deleted uss file from project (1290271)
UI Toolkit: Fixed window creator on linux caused a null reference and the new editor window was displayed corrupted. (1295140)
UI Toolkit: Fixed wrong mouse position on events when a UI Toolkit element has mouse capture, the mouse is outside the element's editor window and that window doesn't have the active mouse capture from the OS. (1342115)
UI Toolkit: Fixed wrong runtime touch event coordinates on panels with scaling
UI Toolkit: Fixes 1304581: [UIElements] ArgumentException is thrown when the PropertyField is bind to the BuildTarget enum
Popup/Dropdown (Enum-compatible) fields now gracefully handle unselected/invalid values. (1304581)
UI Toolkit: Fixes undo of a change of Sort Order field value for a UI Document. (1337070)
UI Toolkit: Fixes [UI Builder] Saving a document that contains a one character name template fails. (1313509)
UI Toolkit: Implemented the missing support in the UI Builder for Rich Text' spacing USS properties.
UI Toolkit: Improved readability of USS import error/warnings. (1295682)
UI Toolkit: InputSystem fails to store ElementUnderPointer when a VisualElement is moving, creating flickering hover feedback. (1306526)
UI Toolkit: InspectorElement now correctly supports rebinding when used outside of the InspectorWindow (1299036)
UI Toolkit: Match text colors of UITK label and UITK field label with IMGUI label and IMGUI prefix label respectively. It also fixes the text color of buttons. (1310581)
UI Toolkit: Missing theme style sheet on PanelSettings now gets logged to console.
UI Toolkit: Nested UI Document allowed changing the Panel Settings once. (1315242)
UI Toolkit: On Windows, the position obtained by the mouse event callbacks is as expected while the mouse is outside of the view even if the mouse down was applied while a temporary window such as a pulldown menu was already opened. (1324369)
UI Toolkit: Overlay, "Show Layout" now works for Runtime panels in Edit Mode (and not only in Play Mode like before)
UI Toolkit: Panels instantiated by PanelSettings assets now ordered deterministically when their sort order have the exact same value.
UI Toolkit: Prevented clicks from passing through runtime panels if they weren't used. (1314140)
UI Toolkit: Removed an extra step from the RadioButtonGroup focus navigation. (1324373)
UI Toolkit: Scroll bars now use display instead of visibility to avoid scroll bars being visible when parent visibility is set to false. (1297886)
UI Toolkit: Submit event on a ListView focuses in the content to allow keyboard navigation. (1311688)
UI Toolkit: Template references get deleted when the assets are moved. (1337112)
UI Toolkit: TextField text selection area is displayed incorrectly. (1347904)
UI Toolkit: The label of a focused Foldout now has its color changed. (1311200)
UI Toolkit: The UI Builder window's layout will not reset anymore when the window is reloaded (UI Builder).
UI Toolkit: This completes the MVP list of improvements to the UI Toolkit Event Debugger.
UI Toolkit: UQuery: Enumerator support allows for foreach iteration with no or minimal gc allocations
UI Toolkit: Value Change Callbacks for bound fields now happen after the value is applied to the target object. (1321156)
Undo System: Ensure interested systems are updated after undoing RectTransform changes. (1116058)
Undo System: Improved performance when overwriting the redo stack
Undo System: Prevent crashing when attempting to finalize an undo that is already being finalized. (1352394)
Undo System: Reduced register undo log. (1342970)
UNET: Fixed Multiplayer hlapi package is missing a declaration of its dependency on Physics2D. (1324449)
Universal: Fixed a case where shadow fade was clipped too early.
Universal: Fixed an issue in shaderGraph target where the ShaderPass.hlsl was being included after SHADERPASS was defined
Universal: Fixed an issue that caused shader compilation error when building to Android and using GLES2 API. (1343061)
Universal: Fixed an issue that caused shader compilation error when switching to WebGL 1 target. (1343443)
Universal: Fixed an issue that that caused a null error when creating a Sprite Light. (1295244)
Universal: Fixed an issue where 2D lighting was incorrectly calculated when using a perspective camera.
Universal: Fixed an issue where Depth Prepass was not run when SSAO was set to Depth Mode.
Universal: Fixed an issue where motion blur would allocate memory each frame. (1314613)
Universal: Fixed an issue where SmoothnessSource would be upgraded to the wrong value in the material upgrader.
Universal: Fixed an issue where soft particles did not work with orthographic projection. (1294607)
Universal: Fixed an issue where SSAO would sometimes not render with a recently imported renderer.
Universal: Fixed an issue where the inspector of Renderer Data would break after adding RenderObjects renderer feature and then adding another renderer feature.
Universal: Fixed an issue where transparent objects sampled SSAO.
Universal: Fixed an issue where using Camera.targetTexture with Linear Color Space on an Android device that does not support sRGB backbuffer results in a RenderTexture that is too bright. (1307710)
Universal: Fixed an issue with backbuffer MSAA on Vulkan desktop platforms. Inspector Stack list issues.
Universal: Fixed camera stack UI correctly work with prefabs. (1308717)
Universal: Fixed double sided and clear coat multi editing shader.
Universal: Fixed issue causing missing shaders on DirectX 11 feature level 10 GPUs.
Universal: Fixed issue where copy depth depth pass for gizmos was being skipped in game view. (1302504)
Universal: Fixed lit shader property duplication issue. (1315032)
Universal: Fixed materials being constantly dirty.
Universal: Fixed multi editing of Bias property on lights. (1289620)
Universal: Fixed render pass reusage with camera stack on vulkan. (1226940)
Universal: Fixed SafeNormalize returning invalid vector when using half with zero length. (1315956)
Universal: Fixed shadow cascade blend culling factor.
Universal: Fixed shadowCoord error when main light shadow defined in unlit shader graph.
Universal: Fixed undo issues for the additional light property on the UniversalRenderPipeline Asset. (1300367)
Universal: Normalized the view direction in Shader Graph to be consistent across Scriptable Render Pieplines.
Universal: SMAA post-filter only clear stencil buffer instead of depth and stencil buffers.
Universal Windows Platform: Fixed black square appearing with custom cursors in Executable Only build. (1299579)
Universal Windows Platform: Fixed CultureInfo.CurrentCulture and CultureInfo.CurrentUICulture to return languages from the preferred UWP language list in system settings. It now matches .NET Native behavior. (1170029)
Universal Windows Platform: Updated UWP PlayerSettings API documentation. (1325420)
URP: Fixed a case where camera dimension can be zero. (1321168)
URP: Fixed a Universal Targets in ShaderGraph not rendering correctly in game view
URP: Fixed additional camera data help url.
URP: Fixed additional light data help url.
URP: Fixed an issue in PostProcessPass causing OnGUI draws to not show on screen. (1348882) an issue where TerrainLit was rendering color lighter than Lit. (1340751)
URP: Fixed an issue where the 2D Renderer was not rendering depth and stencil in the normal rendering pass.
URP: Fixed an issue where _AfterPostProcessTexture was no longer being assigned in UniversalRenderer.
URP: Fixed an issue with the blend mode in Sprite-Lit-Default shader causing alpha to overwrite the framebuffer. (1331392)
URP: Fixed Camera rendering when capture action and post processing present. (1350313)
URP: Fixed CopyDepthPass incorrectly always enqueued when deferred rendering mode was enabled when it should depends on the pipeline asset settings.
URP: Fixed gizmos no longer allocate memory in game view. (1328852)
URP: Fixed graphical artefact when terrain height map is used with rendering layer mask for lighting.
URP: Fixed indentation of Emission map on material editor.
URP: Fixed issue where it will clear camera color if post processing is happening on XR. (1324451)
URP: Fixed issue with legacy stereo matrices with XR multipass. (1342416)
URP: Fixed memory leak with XR combined occlusion meshes.
URP: Fixed pixel perfect camera rect not being correctly initialized. (1312646)
URP: Fixed post processing to be enabled by default in the renderer when creating URP asset option. (1333461)
URP: Fixed remove of the Additional Camera Data when removing the Camera Component.
URP: Fixed remove of the Additional Light Data when removing the Light Component.
URP: Fixed renderer creation in playmode to have its property reloaded. (1333463)
URP: Fixed renderer post processing option to work with asset selector re-assing. (1319454)
URP: Fixed return values from GetStereoProjectionMatrix() and SetStereoViewMatrix(). (1312813)
URP: Fixed ShaderGraph materials to select render queue in the same way as handwritten shader materials by default, but allows for a user override for custom behavior. (1335795)
URP: Fixed shaderGraph shaders to render into correct depthNormals passes when deferred rendering mode and SSAO are enabled.
URP: Fixed soft shadows shader variants not set to multi_compile_fragment on some shaders (gbuffer pass, speedtree shaders, WavingGrass shader).
URP: Fixed sporadic NaN when using normal maps with XYZ-encoding. (1351020)
URP: Fixed UniversalRenderPipelineAsset now being able to use multiedit.
URP: Fixed unlit shader function name ambiguity.
URP: MaterialReimporter.ReimportAllMaterials and MaterialReimporter.ReimportAllHDShaderGraphs now batch the asset database changes to improve performance.
URP: Removed unsupported fields from Presets of Light and Camera. (1335979)
URP: Support undo of URP Global Settings asset assignation.
URP: URP Global Settings can now be unassigned in the Graphics tab. (1343570)
URP: VFX: Compilation issue with ShaderGraph and planar lit outputs. (1349894)
URP: VFX: Fixed OpenGL soft particles fallback when depth texture isn't available.
URP: VFX: Fixed soft particles when HDR or Opaque texture isn't enabled..
VFX Graph: Added a missing paste option in the context menu for VFX contexts. Also the paste options is now disabled when uneffective.
VFX Graph: An existing link can be remade.
VFX Graph: Blackboard fields can now be duplicated either with a shortcut (Ctrl+D) or with a contextual menu option.
VFX Graph: Compilation error undeclared identifier 'Infinity'. (1328592)
VFX Graph: Compilation issue when normal is used in shadergraph for opacity with unlit output.
VFX Graph: Compilation issue while using new SG integration and SampleTexture/SampleMesh. (1359391)
VFX Graph: Deleting a context node and a block while both are selected throws a null ref exception.
VFX Graph: Don't open an empty VFX Graph Editor when assigning a VFX Asset to a Visual Effect GameObject from the inspector. (1347399)
VFX Graph: Enabled an optimization for motion vectors, storing projected positions for vertices instead of the transform matrix.
VFX Graph: Exception using gizmo on exposed properties. (1340818)
VFX Graph: Exposed Camera property fails to upgrade and is converted to a float type. (1357685)
VFX Graph: Exposed Parameter placement can be moved after sanitize.
VFX Graph: Eye dropper in the color fields kept updating after pressing the Esc key.
VFX Graph: Eye dropper in the color fields kept updating after pressing the Esc key.
VFX Graph: Eye dropper in the color fields kept updating after pressing the Esc key.
VFX Graph: Fix CameraFade for shadow maps (1294073)
VFX Graph: Fix incorrect buffer type for strips.
VFX Graph: Fix unexpected Spawn context execution ordering.
VFX Graph: Fixed collision with depth buffer when using an orthographic camera (1309958)
VFX Graph: Fixed Collision with Depth Buffer when using Orthographic camera. (1309958)
VFX Graph: Fixed compilation failure on OpenGLES. (1348666)
VFX Graph: Fixed crash when loading SDF Baker settings holding a mesh prefab. (1343898)
VFX Graph: Fixed culling of point output. (1225764)
VFX Graph: Fixed Exception on trying to invert a degenerate TRS matrix. (1307068)
VFX Graph: Fixed IsFrontFace shader graph node for VFX.
VFX Graph: Fixed issue with VFX using incorrect buffer type for strip data.
VFX Graph: Fixed rendering artifacts on some mobile devices. (1149057)
VFX Graph: Fixed SDF Baker fail on PS4 & PS5. (1351595)
VFX Graph: Fixed undetermitism in space with LocalToWorld and WorldToLocal operators. (1355820)
VFX Graph: GPU hang on some initialize dispatch during dichotomy (platform specific).
VFX Graph: In the Gradient editor undo will now properly refresh the gradient preview (color swatches).
VFX Graph: Inspector group headers now have a better indentation and alignment.
VFX Graph: Modified state in the VFX tab has now a correct state.
VFX Graph: Motion Vector map sampling for flipbooks were not using correct mips.
VFX Graph: Prevent out of sync serialization of VFX assets that could cause the asset to be dirtied without reason.
VFX Graph: Prevent vector truncation error in HDRP Decal template.
VFX Graph: Prevent VFX Graph compilation each time a property's min/max value is changed.
VFX Graph: Prevent vfx re-compilation in some cases when a value has not changed.
VFX Graph: Properties labels do not overlap anymore.
VFX Graph: Property Binder : Allow copy/past from a game object to another.
VFX Graph: Random crash using subgraph. (1345426)
VFX Graph: Removed some useless compilation triggers (modifying not connected or disabled nodes for instance).
VFX Graph: Rename "Material Offset" to "Sorting Priority" in output render state settings (1365257)
VFX Graph: Sample Mesh Color when value is stored as float.
VFX Graph: Sticky notes can now be deleted through contextual manual menu.
VFX Graph: Tidy up of platform abstraction code for random number generation, requires a dependency on com.unity.render-pipelines.core for those abstractions.
VFX Graph: Unexpected compilation error while modifying ShaderGraph exposed properties. (1361601)
VFX Graph: Unexpected operator and block removal during migration. (1344645)
VFX Graph: VFX Graph operators keep the same width when expanded or collpased so that the button does not change position.
VFX Graph: VFXEventBinderBase throwing a null reference exception in runtime
VFX Graph: VFXEventBinderBase throwing a null reference exception in runtime
VFX Graph: Visual Effect inspector input fields don't lose focus anymore while typing (Random seed).
VFX Graph: When adding a new node/operator in the graph editor and using the search field, the search results are sorted in a smarter way.
VFX Graph: Zoom and warning icons were blurry in the "Play Controls" and "Visual Effect Model" scene overlays.
Video: Fixed an issue where undoing a property in the Video Clip Import Settings also undoes the parent Transcode checkbox. (1314433)
Video: Fixed audio bitrate value for medium quality in MediaEncoder when channel count is more than 2.
Video: Fixed regression in applying standalone platform override settings for video clips. (1360821)
Video: Fixed source Info text of the video asset that was barely visible. (1328269)
Video: Fixed Video Player's 5.1 audio channel layout being incorrect when outputting to Audio Source. (1318983)
Video: Increased VideoClipImporter version following a fix that adds missing platform dependencies in this importer.: Completed requests now won't be incorrectly canceled if the last
InvalidateRegioncall is made before
PopRequests.
WebGL: Added handling for Norwegian Bokmal and Nynorsk in SystemInfo for macOS and Linux, and to SystemInfo in Runtime/Misc used by WebGL and MetroPlayer.
WebGL: Bug fix for URP scene being rendered incorrectly with the new texture subtarget options in the build settings (1343208)
WebGL: Enabled the URP feature SRP Batcher for WebGL 2. (1344614)
WebGL: Fixed a regression with building WebGL on Windows 7. (1340260)
WebGL: Fixed audio restarting when paused and resumed on timeline. (1204018)
WebGL: Fixed FMOD related error messages showing up in console when audio is played on Timeline. (1270635)
WebGL: Fixed handling of touch events. (1349226)
WebGL: Fixed hang on quit of the Unity Editor after a Build And Run of a WebGL project. (1352715)
WebGL: Fixed incorrect loading progress shown in Player when Brotli build compression is used. (1288367)
WebGL: Fixed input coordinates when config.matchWebGLToCanvasSize is false. (1325989)
WebGL: Fixed layout of WebGL template custom properties in the project settings. (1316210)
WebGL: Fixed Permission denied error during build (1345412)
WebGL: Fixed Unity profiler auto-connect for WebGL builds. (1360399)
WebGL: Fixed video playback to be muted when Audio Output Mode is set to Audio Source and the selected Audio Source is Muted. Also fixed another issue where video clips that browser blocked from autoplaying would not start playing after user interacts with the web page. (1241582)
WebGL: Fixed video textures not getting converted to linear color space. (1314263)
WebGL: Fixed WebGL pages on macOS with Safari 11. (1318420)
WebGL: Fixed WebGL project build when Exception support is None. (1343976): Show warnings when an embedded VideoClip is used with WebGL builds. Use Video Player component's URL option instead. (1241263)
WebGL: Support mobile WebGL touch events to Immediate Mode GUI when the New Input System package is used. (1300223)
WebGL: The WebGL 1 Graphics API is now marked as deprecated and will be removed in a future release of Unity once all major browser vendors have released browser versions with WebGL 2 enabled by default. (1345140)
WebGL: WebGL1 shaders can fail to compile if they have large arrays. (1298096)
Windows: Fixed absolute mouse position when mouse acceleration is enabled. (1221634)
Windows: Fixed input latency increasing by 1 frame when switching between exclusive fuilscreen and other fullscreen modes on D3D11 and D3D12 graphics APIs.
Windows: Fixed mouse position being off-by-1 pixel when rendering at lower than native resolution in certain cases.
Windows: Fixed performance issues caused by using a high input polling rate mouse (8000 Hz+). (1336740)
Windows: Fixed resolution resetting to native resolution on primary window when trying to move the secondary window use Display.SetParams.
Windows: Fixed Windows player infrequently deadlocking when changing fullscreen modes on D3D11 and D3D12 graphics APIs.
Windows: Fixed Windows standalone player misdetecting whether it's running at native resolution and as a result reopening in wrong resolution if the native resolution changes between runs.
Windows: Round Resolution.refreshRate. (1318053)
XR: Enabled Vulkan lazy allocation for memory savings on Oculus Quest.
XR: Fix single-pass stereo state after shadow map rendering (1335518)
XR: Fixed an issue where terrain tree shadows would be culled when they are still in view. (1234785)
XR: Fixed differences between single pass instancing on Vulkan and D3D11 when using the MockHMD. (1287075)
XR: Fixed issue that XRSettings.gameViewRenderMode did not work when using a Scriptable Render Pipeline. (1279033)
XR: Fixed issue when enabling multiview when using single pass instanced rendering mode on PC.
XR: Fixed occlusion mesh displaying when stereo is disabled. (1307273)
XR: Fixed subsystem manifest json files not being found when using patch and run on Android. (1349953)
XR: Fixed XR Interaction Toolkit not appearing in the Package Manager window even when pre-release packages are enabled.
XR: Fixed XRDevice.fovZoomFactor not working in URP and HDRP. (1278072)
XR: Fixed XRSettings.occlusionMaskScale not working in SRPs.
XR: Fixed XRSettings.showDeviceView not working in SRPs.
XR: Fixed XRSettings.useOcclusionMesh not working in SRPs.
XR: Updated OpenXR package to 1.2.3.
XR: Updated the verified AR Foundation related packages to 4.1.3. Please see the AR Foundation package changelog for details.
XR: Updated the verified AR Foundation related packages to 4.2.0-pre.12. Please see the AR Foundation package changelog for details.
XR: Updated the verified AR Foundation related packages to 4.2.0-pre.5 and 4.2.0-pre.4. Please see the AR Foundation package changelog for details.
XR: Updated the verified AR Foundation related packages to 4.2.0. Please see the AR Foundation package changelog for details.
XR: Updated XR Interaction Toolkit to 1.0.0-pre.2.
XR: Updated XR Interaction Toolkit to 1.0.0-pre.3.
XR: Updated XR Interaction Toolkit to 1.0.0-pre.4.
XR: Updated XR Interaction Toolkit to 1.0.0-pre.5.
XR: Updated XR Legacy Input Helpers to 2.1.8.
チェンジセット: edbc0738c91b | https://unity3d.com/jp/beta/2021.2a | CC-MAIN-2022-33 | en | refinedweb |
Talk:Banana Pi the Gentoo.
Article title
Why not call the article "Banana Pi" instead? I'd wonder if it ain't Gentoo Way, when it's written down on wiki.gentoo.org --Mrueg (talk) 12:08, 30 July 2015 (UTC)
- I'd would rename it "ze Gentoo Way" as far as I'm concerned. --Monsieurp (talk) 17:24, 30 July 2015 (UTC)
- Currently we have no way of organizing guides that are related to but different than our "official" guides. How to name or store alternative instructions might be something we should put a little thought towards. Should users put them in their user space instead on the main namespace? Should they have a certain naming scheme? --Maffblaster (talk) 17:36, 30 July 2015 (UTC)
- Even better, the Banana Pi article can be an introduction, this article can be the "Guide." --Maffblaster (talk) 19:08, 30 July 2015 (UTC)
SinOjos (talk) - The reason I named it The Gentoo Way, is that at the time, the competing groups involved in litigation over the Banana Pi, were all posting releases, and spamming forums to get traffic. Some were outright attempts to discredit Banana Pi, while some intentionally contained malicious code taking advantage of the situation. Even the more legitimate (litigation is not completed) releases, by the various groups involved, were not done correctly, some were better than others.
Some were terrible, and were modified raspberry pi releases, which were all wrong configuration wise, kernel, and arm 6 not even arm 7, simply terrible performance, incorrect network interface modules and configurations, etc. Not one of the so called developers, had a clue as to how Linux really worked, ideology of the software, or possessed any business acumen.
The majority were students in china, that spoke little or no English, who were not competent at translating English documents, hence not fully understanding what they were doing, a big security risk. The great majority had no prior experience with Linux or business. Every thing from copyrights, to patent infringement, and false advertisement through claims of open source, were used to market the Banana Pi. Some of the components of the Banana Pi are closed source, and patented, yet advertised as open source. It was a case of some with money, and access to component designs and manufacturing facilities getting students involved and taking advantage of the Raspberry Pi momentum. Hence the name Banana Pi, typical Chinese knock off. Don't get me wrong, it is nice hardware, but Not Open Source. Good luck taking full advantage of the GPU!
Since there were many unsuspecting new people to Linux, who were getting roped into using possibly malicious releases of Gentoo. I wanted to clearly differentiate the difference between all the other Gentoo releases for Banana Pi. While also providing the instructions on how to roll your own using a Gentoo Arm stage3 release. I felt it was better that people learn how to do it the right way, rather than use pre-made releases, that may contain malicious code, and or incorrectly configured. Not to mention, that users will learn a lot more about how Linux works by building their own Gentoo system rather than something pre-made.
Since there was an existing Banana Pi page already pointing to one of the aforementioned pre-made releases. I simply wanted to stay out of the war zone. So I did Banana Pi The Gentoo Way. - SinOjos (talk) | https://wiki.gentoo.org/wiki/Talk:Banana_Pi_the_Gentoo_Way | CC-MAIN-2022-33 | en | refinedweb |
XmCascadeButton - The CascadeButton widget class
#include <Xm/CascadeB.h>
CascadeButton links two MenuPanes or a MenuBar to a MenuPane. MenuPane MenuPane; or, it can include only a label or a pixmap when it is in a MenuBar..
A CascadeButtonButton within a Pulldown or Popup MenuPane via via the keyboard interface.
If in a Pulldown or Popup MenuPane's default, which is 2.
CascadeButton inherits behavior and resources from Core, XmPrimitive, and XmLabel classes.
The class pointer is xmCascadeButtonWidgetClass.
The class name is XmCascadeButton.Button inherits behavior and resources from the following superclasses. For a complete description of each resource, refer to the man page for that superclass.
A pointer to the following structure is passed to each callback:
typedef struct { int reason; XEvent * event; } XmAnyCallbackStruct;
XmCascadeButton includes translations from Primitive. XmCascadeButton includes the menu traversal translations from XmLabel. These translations may not directly correspond to a translation table.
Note that altering translations in #override or #augment mode is undefined.
The translations for a CascadeButton in a MenuBar are listed below. These translations may not directly correspond to a translation table.
BSelect Press: MenuBarSelect() BSelect Release: DoSelect() KActivate: KeySelect() KSelect: KeySelect() KHelp: Help() MAny KCancel: CleanupMenuBar()
The translations for a CascadeButton in a PullDown or Popup MenuPane are listed below. In a Popup menu system, BMenu also performs the BSelect actions. These translations may not directly correspond to a translation table.
BSelect Press: StartDrag() BSelect Release: DoSelect() KActivate: KeySelect() KSelect: KeySelect() KHelp: Help() MAny KCancel: CleanupMenuBar()
The XmCascadeButton action routines are described below:
In a toplevel Pulldown MenuPane from a MenuBar, unposts the menu, disarms the MenuBar CascadeButton and the MenuBar, and, when the shell's keyboard focus policy is XmEXPLICT, restores keyboard focus to the widget that had the focus before the MenuBar was entered. In other Pulldown MenuPanes, unposts the menu.
In a Popup MenuPane, unposts the menu and, when the shell's keyboard focus policy is XmEXPLICT, restores keyboard focus to the widget from which the menu was posted.
Posting a submenu calls the XmNcascadingCallback callbacks. This widget has the additional behavior described below:
In other menus, if the pointer moves anywhere except into a submenu associated with the CascadeButton, the CascadeButton is disarmed and its submenu is unposted.
The bindings for virtual keys are vendor specific. For information about bindings for virtual buttons and keys, see VirtualBindings(3X).
Core(3X), XmCascadeButtonHighlight(3X), XmCreateCascadeButton(3X),XmCreateMenuBar(3X), XmCreatePulldownMenu(3X), XmCreatePopupMenu(3X), XmLabel(3X), XmPrimitive(3X), and XmRowColumn(3X). | http://www.vaxination.ca/motif/XmCascadeBA_3X.html | CC-MAIN-2022-33 | en | refinedweb |
XmCreateSimplePulldownMenu - A RowColumn widget convenience creation function
#include <Xm/RowColumn.h> Widget XmCreateSimplePulldownMenu (parent, name, arglist, argcount) Widget parent; String name; ArgList arglist; Cardinal argcount;
XmCreateSimplePulldownMenu creates an instance of a RowColumn widget of type XmMENU_PULLDOWN and returns the associated widget ID.
This routine creates a Pulldown MenuPane and its button. The name of each title is label_n, where n is an integer from 0 to one minus the number of titles in the menu.X).
Returns the RowColumn widget ID.
XmCreatePulldownMenu(3X), XmCreateRowColumn(3X), XmRowColumn(3X), and XmVaCreateSimplePulldownMenu(3X). | http://www.vaxination.ca/motif/XmCreateSiE_3X.html | CC-MAIN-2022-33 | en | refinedweb |
Question :
I’m trying to parse the result of a HEAD request done using the Python Requests library, but can’t seem to access the response content.
According to the docs, I should be able to access the content from requests.Response.text. This works fine for me on GET requests, but returns None on HEAD requests.
GET request (works)
import requests response = requests.get(url) content = response.text
content =
<html>...</html>
HEAD request (no content)
import requests response = requests.head(url) content = response.text
content =
None
EDIT
OK I’ve quickly realized form the answers that the HEAD request is not supposed to return content- only headers. But does that mean that, to access things found IN the
<head> tag of a page, like
<link> and
<meta> tags, that one must GET the whole document?
Answer #1:
By definition, the responses to HEAD requests do not contain a message-body.
Send a GET request if you want to, well, get a response body. Send a HEAD request iff you are only interested in the response status code and headers.
HTTP transfers arbitrary content; the HTTP term header is completely unrelated to an HTML
<head>. However, HTTP can be advised to download only a part of the document. If you know the length of the HTML
<head> code (or an upper boundary therefor), you can include an HTTP Range header in your request that advises the remote server to only return a certain number of bytes. If the remote server supports HTTP ranges, it will then serve the reduced answer.
Answer #2:
A HEAD doesn’t have any content! Try
response.headers – that’s probably where the action is. An HTTP HEAD request doesn’t get the
<head> element of the HTML response you would get from a GET request. I think that’s your mistake.
Answer #3:
HEAD responses have no body. They only return HTTP headers, the same you would get using a GET request. | https://discuss.dizzycoding.com/getting-head-content-with-python-requests/ | CC-MAIN-2022-33 | en | refinedweb |
table of contents
NAME¶
opendir, fdopendir - open a directory
SYNOPSIS¶
#include <sys/types.h> #include <dirent.h>
DIR *opendir(const char *name); DIR *fdopendir(int fd);
fdopendir():
Since glibc 2.10:
_POSIX_C_SOURCE >= 200809L
Before glibc 2.10:
_GNU_SOURCE
DESCRIPTION¶¶
The opendir() and fdopendir() functions return a pointer to the directory stream. On error, NULL is returned, and errno is set to indicate the error.
ERRORS¶
-¶
fdopendir() is available in glibc since version 2.4.
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
opendir() is present on SVr4, 4.3BSD, and specified in POSIX.1-2001. fdopendir() is specified in POSIX.1-2008.
NOTES¶¶
open(2), closedir(3), dirfd(3), readdir(3), rewinddir(3), scandir(3), seekdir(3), telldir(3)
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://dyn.manpages.debian.org/unstable/manpages-dev/fdopendir.3.en.html | CC-MAIN-2022-33 | en | refinedweb |
In this workshop, we will be making a Dice game app with the help of React-Native. You don't require any setup ( yes! no setup required 🤠) to get started with this workshop!
This is what it will look like:
The above is being run on a web emulator. How to use this emulator is included in this workshop.
Prerequisites
The workshop is for anyone familiar with:
You don't need to be a Guru in these topics, a basic understanding of them is more than enough!
Development Environment
We will be using Snack, an online editor provided by Expo to make our app in this workshop!
So before moving forward let's discuss Snack!
Why use Snack?
Snack is the perfect choice for workshops because:
- It requires no setup to get started with development in Snack!
- The device emulators run in the browser itself ( both Android and iOS )!
- You can easily share and export your Snack projects with others.
- You can test your app directly on your phone via the Expo app.
Introduction to the interface
The below image is what Snack's interface looks like.
I have labeled the important parts of the interface. The interface is really intuitive and will not be a big issue for anyone who is not familiar with it.
Should I run the app on my device or on an emulator?
The Snack emulators are really awesome but sometimes they might require you to wait in a queue ( you won't be waiting for too long ) but still if you have an Android or iOS device then use it, as it won't require you to be on a queue.
Another suggestion is to use the web view as it won't require any waiting time and once the app is developed then try it on emulators ✌️
What is this Expo App?
Expo App is available for both Android and iOS and you can test your app made in Snack directly from it in your device ( it also has some other use-cases! ).
You just need to scan the QR code of your Snack project from your Expo app ( In case of iOS users, you can also use your in-built scanner ) to run your project on your device.
Links for Expo:
Setup
I have created a Snack template for you, it contains all the resources for making this app.
You need to open in your browser.
The finished project is available at, you can use it as help in this workshop!
This will open a template Snack project for you. Your setup is complete ( I know it feels awesome as it required almost no setup 🕶️ ).
Some Basics first!
What is React-Native?
React-Native is an open-source mobile application framework created by Facebook, Inc. You can implement your interfaces in React and can use web-like CSS to style these interfaces.
All the components are mapped to their native components so that you get native performance in your apps.
All the CSS properties are camelCase meaning padding-top in react-native will be paddingTop. Some features of the web CSS aren't supported in react-native.
What are react-native components
React-Native provides some default components like View, ScrollView, Text, etc.
You need to import them from 'react-native'
example
import { View, Text } from 'react-native'
You can also use third-party or custom components.
What is StyleSheet.create
It is a way of creating styles for your components in react-native. It takes an object configuration as an argument.
Let's take an example to make it clearer.
import { Text, StyleSheet } from 'react-native' export default function Comp() { return <Text style={styles.headerText}>Hack Club is Awesome!</Text> } const styles = StyleSheet.create({ headerText: { color: 'red', backgroundColor: 'yellow' } })
In the following code <Text > has a style prop which expects an object containing styles for the component.
In react-native style properties are camelCase meaning background-color will be backgroundColor.
So here the text color of the component will be red while the background-color will be yellow! ( I know this color choice is really bad (: ).
To know more about these components and their properties visit React-Native documentation
What is special about App.js
App.js is your root file and is the starting point for your app.
So, whatever component you will return from here will be considered as the root component at the time of rendering.
Now let's get started by making a project in it ( I believe projects are the best way to learn stuff ).
Working with App.js
Your App.js will already have some template code in it. Your code will look something like this.
We have made some initial imports here which include React, some react-native components ( View, ScrollView, etc) and Constants from 'expo-constants'.
Constants provides system information that remains constant throughout the lifetime of your app's install. It can be very helpful in designing apps ( will be more clear when I will discuss the stylesheet part ).
Profilecard and Gamecontainer are our two custom Components which will be used to compose our interface ( we are trying to bring modularity in our code ).
Now let's discuss about <View> and <ScrollView>.
What is the difference between <View> and <ScrollView>
<View> are containers for your layout. We use Flexbox to design the layout inside them.
They can be nested within each other. ( They are like the <div> which are used in web development )
So what is this <ScrollView>?
- <ScrollView> is a scrollable container! All its children are in a scrollable container.
When our device goes in Landscape orientation then our display height becomes really short hence **<ScrollView> will allow the content to be still accessible via scrolling in Landscape mode. **
Let's talk about StyleSheet.create
Okay, so we know that react-native uses object based styling for its components.
There is a very strange thing in the StyleSheet.create function, which is:
paddingTop: Platform.os == 'ios' ? Constants.statusBarHeight: null,
Why are we doing this here?
Okay so in iOS devices our app's content starts from the top of the screen ( meaning the status bar is not ignored! ) so Platform.os tell us which operating system the app is running on ( Platform is made available by react-native )
If it is 'ios' then we give padding from the top equal to the height of the status bar. We use Constants.statusBarHeight for it (The padding problem doesn't exist in Android!).
Making the Profilecard!
Okay, now we are going to make the Profilecard.
Step1 - Open the profilecard.js file
Open profilecard.js file which is in components folder. It will already have some template code ready for you.
You should change all of my information with your information ( like name, place, etc ).
Imp: for making your circular profile picture like mine, please use profilepicturemaker.com.
Now make your component and StyleSheet.create function look like this ( don't delete any other part of the file, just change this component and the stylesheet ):
export default function Comp() { return ( <> <LinearGradient colors={['#EC3750', '#A33140']} style={styles.gradient}> <View style={styles.header}> <Image source={require('../assets/icons/backicon.png')} style={{ width: 9.5, height: 16, marginLeft: 15 }} /> <View style={styles.nameContainer}> <Text style={styles.nameText}>Harsh Bajpai</Text> </View> </View> <View style={styles.profilepicContainer}> <Image source={require('../assets/profilepic.png')} style={{ height: 100, width: 100 }} /> <Text style={styles.title}>Developer/Designer</Text> <Text style={{ fontSize: 14, color: '#F3F3F3' }}>Gurugram/India</Text> </View> <View> <View style={styles.iconContainer}> {imgarr.map((image) => ( <Image source={image} style={{ width: 45, height: 45 }} /> ))} </View> </View> </LinearGradient> </> ) } const styles = StyleSheet.create({ gradient: { display: 'flex', flexDirection: 'column' }, header: { marginTop: 10, flexDirection: 'row' }, nameContainer: { position: 'absolute', width: '100%' }, nameText: { color: 'white', textAlign: 'center', paddingRight: 8, fontSize: 15 }, profilepicContainer: { alignItems: 'center', marginTop: 25 }, title: { fontWeight: 'bold', fontSize: 18, color: '#F3F3F3', marginTop: 15 }, iconContainer: { flexDirection: 'row', justifyContent: 'space-around', marginTop: 30, marginBottom: 15, paddingHorizontal: 30 } })
- This code contains a lot of CSS and React which is self-explanatory to anyone familiar with them. So, I will explain things specific to react-native to you.
Here is an explanation of all the important things happening here:
<LinearGradient> is a component from expo-linear-gradient. It is being used to create the red gradient in the background.
Here the styles which are really short are written as inline styles.
Example:
<Text style={{ fontSize: 14, color: '#F3F3F3' }}>Gurugram/India</Text>
{ imgarr.map((image) => ( <Image source={image} style={{ width: 45, height: 45 }} /> )) }
Here we are using imgarr array which contains the four icons sequentially. This array is located at the bottom of the profilecard.js file.
It loook like this:-
const imgarr = [ require('../assets/icons/phoneicon.png'), require('../assets/icons/messageicon.png'), require('../assets/icons/mailicon.png'), require('../assets/icons/likeicon.png') ]
This was all you needed to complete the profile section part!
Working with the game part!
Now open your dicecontainer.js file, it already has some template code written in it.
Now let us take a look at some important imports of the file first:
import Button from './button' import randomNumGenerator from './lib/dicegenerator'
Here Button is a component and you don't need to deal with it, although if you want to see its implementation you can check it.
We are using this button instead of react-native's Button because this Button will give us the same appearance regardless of the OS ( react-native's Button appearance varies with the platform ).
randomNumGenerator will be used by us to generate a random number between 0-5. ( It will be used to generate random dice in the app ).
Now let's look at dicearr Array which is at the bottom of the file.
const dicearr = [ require('../assets/dice/dice-1.png'), require('../assets/dice/dice-2.png'), require('../assets/dice/dice-3.png'), require('../assets/dice/dice-4.png'), require('../assets/dice/dice-5.png'), require('../assets/dice/dice-6.png') ]
It contains an array of dice images from 1-6!
Working with the component
Okay, now change your component and StyleSheet with the following code. ( for short styles we have used inline styles ).
export default function Comp() { const [statearr, setStatearr] = React.useState([2, 5]) return ( <> <View> <View style={styles.diceContainer}> <Image source={dicearr[statearr[0]]} style={{ width: 80, height: 80, marginHorizontal: 15 }} /> <Image source={dicearr[statearr[1]]} style={{ width: 80, height: 80, marginHorizontal: 15 }} /> </View> <View style={styles.buttonContainer}> <Button title="Roll" onPress={function () { setStatearr([randomNumGenerator(), randomNumGenerator()]) }} /> </View> </View> </> ) } const styles = StyleSheet.create({ diceContainer: { display: 'flex', flexDirection: 'row', justifyContent: 'center', paddingTop: 30 }, buttonContainer: { flexDirection: 'row', justifyContent: 'center', paddingTop: 30 } })
Here, the following important things are happening:
const [statearr, setStatearr] = React.useState([2, 5])
- Here we are using [useState]() from React ( it is a concept of [React hooks]() ). - A change in the value of **statearr** will cause re-render, meaning it will update the UI, **setStatearr is used to set a new value for statearr** ( _don't do statearr=[2,4]_ ). Its default value will be [2,5]. Now let's understand this part of the code: ```jsx <Image source={dicearr[statearr[0]]} style={{ width: 80, height: 80, marginHorizontal: 15 }}/> <Image source={dicearr[statearr[1]]} style={{ width: 80, height: 80, marginHorizontal: 15 }}/>
Here the <Image> are being determined by using statearr[0] and statearr[1] to determine the value of the dicearr array.
A change in the value of statearr will result in the changing of the images.
Button Component!
Now let's talk about the <Button>
<Button title="Roll" onPress={function () { setStatearr([randomNumGenerator(), randomNumGenerator()]) }} />
It receives the Text to render as title prop. ( Here it is 'Roll' )
The function passed to the onPress prop will be executed when it will be pressed.
Here, it changes statearr array to hold two random values. ( it is using randomNumGenerator function for it ).
Done!
Yes, this was all you needed to make this app. You have completed the whole programming part.
Run it!
Just click on the Roll button and see the magic! or react-native feel free to reach out to me! | https://workshops.hackclub.com/dicegamereactnative/ | CC-MAIN-2022-33 | en | refinedweb |
20647/difference-between-class-and-instance-attributes
what is the difference between the following codes?
1. class A(object):
foo = 3 #set default value
2. class B(object):
def __init__(self, foo=3):
self.foo = foo
How will it affect the performance or space requirements while creating a lot of instances?
In Python, there is no need to compare to True or False. In the following code, the opposite of dest in smaller.keys() is (dest not in smaller.keys())
So it must be written as:
if(dest not in smaller.keys()):
Apart from the performance, there is a significant semantic difference between the two codes. only one object is referred to in case of class attribute whereas there can be multiple objects referred to in case of instance-attribute-set-at-instantiation.
For example :
1. class A: foo = []
a, b = A(), A()
a.foo.append(3)
b.foo
2. class A:
def __init__(self): self.foo = []
a, b = A(), A() a.foo.append(3)
b.foo
Interfaces
An interface is a contract: The person writing ...READ MORE
show() is just a convenience function for ...READ MORE
Lists are mutable(values can be changed) whereas ...READ MORE
'==' checks for the equality of
The "**" operator is used for 'power ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/20647/difference-between-class-and-instance-attributes?show=20659 | CC-MAIN-2022-33 | en | refinedweb |
Naming Conventions¶
File and Directory Names¶
Our directory tree stripped down looks something like:
statsmodels/ __init__.py api.py discrete/ __init__.py discrete_model.py tests/ results/ tsa/ __init__.py api.py tsatools.py stattools.py arima_model.py arima_process.py vector_ar/ __init__.py var_model.py tests/ results/ tests/ results/ stats/ __init__.py api.py stattools.py tests/ tools/ __init__.py tools.py decorators.py tests/
The submodules are arranged by topic, discrete for discrete choice models, or tsa for time series analysis. The submodules that can be import heavy contain an empty __init__.py, except for some testing code for running tests for the submodules. The namespace to be imported is in api.py. That way, we can import selectively and do not have to import a lot of code that we don’t need. Helper functions are usually put in files named tools.py and statistical functions, such as statistical tests are placed in stattools.py. Everything has directories for tests.
endog & exog¶
Our working definition of a statistical model is an object that has both endogenous and exogenous data defined as well as a statistical relationship. In place of endogenous and exogenous one can often substitute the terms left hand side (LHS) and right hand side (RHS), dependent and independent variables, regressand and regressors, outcome and design, response variable and explanatory variable, respectively. The usage is quite often domain specific; however, we have chosen to use endog and exog almost exclusively, since the principal developers of statsmodels have a background in econometrics, and this feels most natural. This means that all of the models are objects with endog and exog defined, though in some cases exog is None for convenience (for instance, with an autoregressive process). Each object also defines a fit (or similar) method that returns a model-specific results object. In addition there are some functions, e.g. for statistical tests or convenience functions.
See also the related explanation in endog, exog, what’s that?.
Variable Names¶
All of our models assume that data is arranged with variables in columns. Thus, internally the data is all 2d arrays. By convention, we will prepend a k_ to variable names that indicate moving over axis 1 (columns), and n_ to variables that indicate moving over axis 0 (rows). The main exception to the underscore is that nobs should indicate the number of observations. For example, in the time-series ARMA model we have:
`k_ar` - The number of AR lags included in the RHS variables `k_ma` - The number of MA lags included in the RHS variables `k_trend` - The number of trend variables included in the RHS variables `k_exog` - The number of exogenous variables included in the RHS variables excluding the trend terms `n_totobs` - The total number of observations for the LHS variables including the pre-sample values
Options¶
We are using similar options in many classes, methods and functions. They should follow a standardized pattern if they recurr frequently.
`missing` ['none', 'drop', 'raise'] define whether inputs are checked for nans, and how they are treated `alpha` (float in (0, 1)) significance level for hypothesis tests and confidence intervals, e.g. `alpha=0.05`
patterns
`return_xxx` : boolean to indicate optional or different returns (not `ret_xxx`) | https://www.statsmodels.org/v0.10.2/dev/naming_conventions.html | CC-MAIN-2022-33 | en | refinedweb |
Heroku Rails
Easier configuration and deployment of Rails apps on Heroku
Configure your Heroku environment via a YML file (config/heroku.yml) that defines all your environments, addons, and environment variables.
Heroku Rails also handles asset packaging (via jammit), deployment of assets to s3 (via jammit-s3).
Install
Rails 3
Add this to your Gemfile:
group :development do gem 'heroku-rails' end
Rails 2
To install add the following to config/environment.rb:
config.gem 'heroku-rails'
Rake tasks are not automatically loaded from gems, so you’ll need to add the following to your Rakefile:
begin require 'heroku/rails/tasks' rescue LoadError STDERR.puts "Run `rake gems:install` to install heroku-rails" end
Configure
In config/heroku.yml you will need add the Heroku apps that you would like to attach to this project. You can generate this file and edit it by running:
rails generate heroku:config
Example Configuration File
apps: production: awesomeapp staging: awesomeapp-staging legacy: awesomeapp-legacy stacks: all: bamboo-mri-1.9.2 legacy: bamboo-ree-1.8.7 config: all: BUNDLE_WITHOUT: "test:development" production: MONGODB_URI: "mongodb://[username:[email protected]]host1[:port1][/database]" staging: MONGODB_URI: "mongodb://[username:[email protected]]host1[:port1][/database]" collaborators: all: - "[email protected]" - "[email protected]" domains: production: - "awesomeapp.com" - "" addons: all: - newrelic:bronze # add any other addons here production: - ssl:piggyback - cron:daily # list production env specific addons here
Setting up Heroku
To set heroku up (using your heroku.yml), just run.
rake all heroku:setup
This will create the heroku apps you have defined, and create the settings for each.
Run
rake heroku:setup every time you edit the heroku.yml. It will only make incremental changes (based on what you've added/removed). If nothing has changed in the heroku.yml since the last
heroku:setup, then no heroku changes will be sent.
Usage
After configuring your Heroku apps you can use rake tasks to control the apps.
rake production heroku:deploy
A rake task with the shorthand name of each app is now available and adds that server to the list that subsequent commands will execute on. Because this list is additive, you can easily select which servers to run a command on.
rake demo staging heroku:restart
A special rake task 'all' is created that causes any further commands to execute on all heroku apps.
Need to add remotes for each app?
rake all heroku:remotes
A full list of tasks provided:
rake all # Select all Heroku apps for later command rake heroku:deploy # Deploys, migrates and restarts latest code. rake heroku:apps # Lists configured apps rake heroku:info # Queries the heroku status info on each app rake heroku:console # Opens a remote console rake heroku:capture # Captures a bundle on Heroku rake heroku:remotes # Add git remotes for all apps in this project rake heroku:migrate # Migrates and restarts remote servers rake heroku:restart # Restarts remote servers rake heroku:setup # runs all heroku setup scripts rake heroku:setup:addons # sets up the heroku addons rake heroku:setup:collaborators # sets up the heroku collaborators rake heroku:setup:config # sets up the heroku config env variables rake heroku:setup:domains # sets up the heroku domains rake heroku:setup:stacks # sets the correct stack for each heroku app rake heroku:db:setup # Migrates and restarts remote servers
You can easily alias frequently used tasks within your application's Rakefile:
task :deploy => ["heroku:deploy"] task :console => ["heroku:console"] task :capture => ["heroku:capture"]
With this in place, you can be a bit more terse:
rake staging console rake all deploy
Deploy Hooks
You can easily hook into the deploy process by defining any of the following rake tasks.
When you ran
rails generate heroku:config, it created a list of empty rake tasks within lib/tasks/heroku.rake. Edit these rake tasks to provide custom logic for before/after deployment.
namespace :heroku do # runs before all the deploys complete task :before_deploy do end # runs before each push to a particular heroku deploy environment task :before_each_deploy do end # runs after each push to a particular heroku deploy environment task :after_each_deploy do end # runs after all the deploys complete task :after_deploy do end end
About Heroku Rails
Links
Issue Tracker::
License
License:: Copyright (c) 2010 Jacques Crocker [email protected] released under the MIT license.
Forked from Heroku Sans
Heroku Rails is a fork and rewrite/reorganiziation of the heroku_sans gem. Heroku Sans is a simple and elegant set of Rake tasks for managing Heroku environments. Check out that project here:
Heroku Sans Contributors
- Elijah Miller ([email protected])
- Glenn Roberts ([email protected])
- Damien Mathieu ([email protected])
Heroku Sans License
License:: Copyright (c) 2009 Elijah Miller [email protected], released under the MIT license. | https://www.rubydoc.info/gems/heroku-rails/0.4.4 | CC-MAIN-2022-33 | en | refinedweb |
#include <test_vars_constr_cost.h>
Definition at line 104 of file test_vars_constr_cost.h.
Definition at line 106 of file test_vars_constr_cost.h.
Definition at line 110 136 of file test_vars_constr_cost.h.
Returns the "bounds" of this component.
Implements ifopt::Component.
Definition at line 124 of file test_vars_constr_cost.h.
Returns the "values" of whatever this component represents.
Implements ifopt::Component.
Definition at line 113 of file test_vars_constr_cost.h. | https://docs.ros.org/en/kinetic/api/ifopt/html/classifopt_1_1ExConstraint.html | CC-MAIN-2022-33 | en | refinedweb |
7237/there-hadoop-nodes-nodes-namenodes-multiple-volumes-disks
Datanodes can store blocks in multiple directories typically allocated on different local disk drives. In order to setup multiple directories, one needs to specifiy a comma-separated list of pathnames as values under config parameters dfs.data.dir/dfs.datanode.data.dir. Datanodes will attempt to place equal amount of data in each of directories.
Namenode also supports multiple directories, which in the case store the name space image and edit logs. In order to setup multiple directories one needs to specify a comma-separated list of pathnames as values under config parameters dfs.name.dir/dfs.namenode.data.dir .
The namenode directories ae used for the namespace data replication so that image and log could be restored from the remaining disks/volumes if one of the disks fails.
Hope it will answer your query to some extent.
The distributed copy command, distcp, is a ...READ MORE
You can add some more memory by ...READ MORE
You can easily set the number of ...READ MORE
use jps command, It will show all the running ...READ MORE
In your case there is no difference ...READ MORE
Firstly you need to understand the concept ...READ MORE
Well, hadoop is actually a framework that ...READ MORE
Hi,
You can create one directory in HDFS ...READ MORE
First of all, COBOL is a programming ...READ MORE
The generic command i.e used to import ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/7237/there-hadoop-nodes-nodes-namenodes-multiple-volumes-disks?show=7238 | CC-MAIN-2022-33 | en | refinedweb |
Published : April 24, 2022
So! Since my last article on Godot Pipelines, there has been a lot of things, including the start of my PhD (on real-time non-photorealistic rendering!). One of them is the release of Malt 1.0 Preview, which is super useful for me in all kinds of ways, and today we're going to look at how to use it with Godot to quickly create rendering pipelines, and most importantly have the same rendering in both Blender and Godot !
I cannot overstate how useful it is to have both use the same rendering code. Faster iterations, less work, better results, better tools... Non-photorealistic rendering really needs that kind of flexibility as there's no standard, and by definition all renders are going to be different.
Today we're gonna set up a small two pass render pipeline like the last time to see how it's done. You can find the code here.
The main feature we're going to be interested in here is custom render pipelines, which allows us complete control over what is rendered. This is where Malt will create the buffers and basic parameters, and where we can use our OpenGL calls. Let's start by taking the Mini Pipeline as a base, and add stuff one at a time.
The main functions are as follows:
__init__: Sets up our parameters and arguments.
compile_material_from_source: Compiles the shaders.
setup_render_targets: Sets up our buffers and FBOs (Render Targets). This is the Malt equivalent of making new viewports in Godot.
do_render: Will do the actual rendering calls. This would be where we pass parameters and buffers to the passes.
Frame Buffer Objects (FBOs) / Render Targets: This is a collection of buffers that a shader will write to. While in Godot, we can only output to one buffer, in regular rendering we can write to several. This is actually how deferred shading works: write all your parameters in a first pass to several buffer, then sample them all in a second pass to compute our final color.
from os import path from Malt.GL.GL import * from Malt.GL.Mesh import Mesh from Malt.GL.RenderTarget import RenderTarget from Malt.GL.Shader import Shader, UBO from Malt.GL.Texture import Texture from Malt.Pipeline import * from Malt.Render import Lighting class PipelineMaltToGodot(Pipeline): DEFAULT_SHADER = None def __init__(self, plugins=[]): super().__init__(plugins) if PipelineMaltToGodot.DEFAULT_SHADER is None: source = ''' #include "Common.glsl" #ifdef VERTEX_SHADER void main() { DEFAULT_VERTEX_SHADER(); } #endif #ifdef PIXEL_SHADER layout (location = 0) out vec4 RESULT; void main() { PIXEL_SETUP_INPUT(); RESULT = vec4(1); } #endif ''' PipelineMaltToGodot.DEFAULT_SHADER = self.compile_material_from_source('mesh', source) self.default_shader = PipelineMaltToGodot.DEFAULT_SHADER def compile_material_from_source(self, material_type, source, include_paths=[]): return { 'MAIN_PASS' : self.compile_shader_from_source( source, include_paths, ['MAIN_PASS'] ) } def setup_render_targets(self, resolution): self.t_pgbuffer_depth = Texture(resolution, GL_DEPTH_COMPONENT32F) self.t_pgbuffer = Texture(resolution, GL_RGBA32F) self.rt_pgbuffer = RenderTarget([self.t_pgbuffer], self.t_pgbuffer_depth) def do_render(self, resolution, scene, is_final_render, is_new_frame): shader_resources = { 'COMMON_UNIFORMS' : self.common_buffer } self.rt_pgbuffer.clear([(0,0,0,0)], 1) self.draw_scene_pass(self.rt_pgbuffer, scene.batches, 'MAIN_PASS', self.default_shader['MAIN_PASS'], shader_resources) return { 'COLOR' : self.t_pgbuffer} PIPELINE = PipelineMaltToGodot
Finally, here's our mesh shader. The only thing it will do is render some data for the second pass, here by filling the red channel.
void main() { DEFAULT_VERTEX_SHADER(); } layout (location = 0) out vec4 RESULT; void main() { PIXEL_SETUP_INPUT(); RESULT = vec4(1, 0, 0, 1); }
Finally, set the color profile to linear in the film panel (set Display Device to None).
For now we don't have anything to compute lighting, so let's change that. As we're doing low level code, we usually need to pass each light parameter manually, but Malt has a few helpers we're going to use since we don't have anything special with the lights themselves.
The following code will do two things:
__init__function.
Uniform Buffer Objects (UBOs): This is the collection of the uniforms (think of them as parameters for the shader) we will send to the shader. Every shader will define what uniforms it will have, and the pipeline will set their value before rendering.
def __init__(self, plugins=[]): # [...] # Load the lights self.lights_buffer = Lighting.get_lights_buffer() def do_render(self, resolution, scene, is_final_render, is_new_frame): # [...] # Load the lights (Sun CSM Count, Sun CSM Distribution, Sun Max Distance) self.lights_buffer.load(scene, 4, 0.9, 100.0) shader_resources['SCENE_LIGHTS'] = self.lights_buffer self.draw_scene_pass(self.rt_pgbuffer, scene.batches, 'MAIN_PASS', self.default_shader['MAIN_PASS'], shader_resources) return { 'COLOR' : self.t_pgbuffer}
Then we update the shader to take those lights, and with it compute the lighting we will put in the green channel.
void main() { PIXEL_SETUP_INPUT(); LitSurface ls = lit_surface(IO_POSITION, IO_NORMAL, LIGHTS.lights[0], false); vec3 shading = diffuse_lit_surface(ls); float lightCoef = (0.2126*shading.r + 0.7152*shading.g + 0.0722*shading.b); RESULT = vec4(1, lightCoef, 0, 1); }
Next step is creating our buffers for the passes. We'll simply rename the first one and add a second, no need for the depth pass.
def setup_render_targets(self, resolution): self.t_pgbuffer_depth = Texture(resolution, GL_DEPTH_COMPONENT32F) self.t_pgbuffer = Texture(resolution, GL_RGBA32F) self.rt_pgbuffer = RenderTarget([self.t_pgbuffer], self.t_pgbuffer_depth) self.t_secondpass = Texture(resolution, GL_RGBA32F) self.rt_secondpass = RenderTarget([self.t_secondpass])
Now the trickier part. To have it render correctly we will have to both register a new material for the pass in the
__init__ function, and use it in the
do_render function. Then, we will pass the result of the previous render to it as a uniform. Finally, we create the shader for the second pass.
def __init__(self, plugins=[]): # [...] # Add the material to hold the second pass's shader self.parameters.world['Second Pass Material'] = MaterialParameter('', '.screen.glsl') def do_render(self, resolution, scene, is_final_render, is_new_frame): # [...] self.draw_scene_pass(self.rt_pgbuffer, scene.batches, 'MAIN_PASS', self.default_shader['MAIN_PASS'], shader_resources) # **Second Pass** SecondPassMaterial = scene.world_parameters['Second Pass Material'] if SecondPassMaterial and SecondPassMaterial.shader: SecondPassMaterial.shader['MAIN_PASS'].textures['samplerPGBuffer'] = self.t_pgbuffer self.draw_screen_pass(SecondPassMaterial.shader['MAIN_PASS'], self.rt_secondpass, shader_resources) else: return { 'COLOR' : self.t_pgbuffer} return { 'COLOR' : self.t_secondpass }
1,0.2,0.2); uniform vec3 unlitColor = vec3(0.8,0,0); uniform vec3 backgroundColor = vec3(0.7); void main() { DEFAULT_SCREEN_VERTEX_SHADER(); } layout (location = 0) out vec4 RESULT; void main() { PIXEL_SETUP_INPUT(); vec4 pgbufferSample = texture(samplerPGBuffer, UV[0]); RESULT = vec4(mix(backgroundColor, mix(unlitColor, litColor, step(0.2, pgbufferSample.g)), pgbufferSample.r), 1); }uniform sampler2D samplerPGBuffer; uniform vec3 litColor = vec3(
This is what we did in the last article on Godot Pipelines. I prefer doing the tests in Malt since it's faster for prototyping and has full OpenGL support, but since Godot has its own language you should keep its limitations in mind. Oddlib has evolved a bit so I'll give the updated code here:
extends "res://oddlib-shaders/pipeline/OLSPipeline.gd" var secondPassMaterial = preload("res://SecondPassMaterial.tres") func Setup(): AddPGBuffer("First Pass") AddParameterVPTexture("Second Pass/PG Buffer", "bufferPG", "First Pass")
shader_type spatial; void fragment() { ALBEDO = vec3(0.0,0.0,0.0); } void light() { float l = DIFFUSE_LIGHT.g + (clamp(dot(NORMAL, LIGHT), 0.0, 1.0) * vec3(0.,ATTENUATION.g, 0.)).g; DIFFUSE_LIGHT = vec3(1.0,l,0.0); }
shader_type canvas_item; uniform sampler2D bufferPG : hint_black; uniform vec3 backgroundColor = vec3(0.7,0.7,0.7); uniform vec3 unlitColor = vec3(0.8,0.0,0.0); uniform vec3 litColor = vec3(1.0,0.2,0.2); void fragment() { vec4 samplePG = texture(bufferPG, SCREEN_UV); COLOR = vec4(mix(backgroundColor, mix(unlitColor, litColor, step(0.2, samplePG.g)), samplePG.r), 1.0); }
Since this shader does the same things as the GLSL shader, and the pipeline has the same ordering of passes, this gives us identical or near-identical results depending on the parameters we use (don't forget to activate the linear color profile in Blender's film panel).
So, now that we have seen how to set up a simple pipeline, you can apply it to your project! I think we can go even further by using the same shader for both, although that would require a preprocessor and a lot of #IFDEFs.
This has already been super useful for me, so I'll probably continue to dig on the subject. Join the discord if you want to stay up to date! | http://panthavma.com/articles/godot-malt-pipeline-intro/ | CC-MAIN-2022-33 | en | refinedweb |
Question :
Using python regular expression only, how to find and replace nth occurrence of word in a sentence?
For example:
str = 'cat goose mouse horse pig cat cow' new_str = re.sub(r'cat', r'Bull', str) new_str = re.sub(r'cat', r'Bull', str, 1) new_str = re.sub(r'cat', r'Bull', str, 2)
I have a sentence above where the word ‘cat’ appears two times in the sentence. I want 2nd occurence of the ‘cat’ to be changed to ‘Bull’ leaving 1st ‘cat’ word untouched. My final sentence would look like:
“cat goose mouse horse pig Bull cow”. In my code above I tried 3 different times could not get what I wanted.
Answer #1:
Use negative lookahead like below.
"cat goose mouse horse pig cat cow" re.sub(r'^((?:(?!cat).)*cat(?:(?!cat).)*)cat', r'1Bull', s) 'cat goose mouse horse pig Bull cow's =
^Asserts that we are at the start.
(?:(?!cat).)*Matches any character but not of
cat, zero or more times.
catmatches the first
catsubstring.
(?:(?!cat).)*Matches any character but not of
cat, zero or more times.
- Now, enclose all the patterns inside a capturing group like
((?:(?!cat).)*cat(?:(?!cat).)*), so that we could refer those captured chars on later.
catnow the following second
catstring is matched.
OR
"cat goose mouse horse pig cat cow" re.sub(r'^(.*?(cat.*?){1})cat', r'1Bull', s) 'cat goose mouse horse pig Bull cow's =
Change the number inside the
{} to replace the first or second or nth occurrence of the string
cat
To replace the third occurrence of the string
cat, put
2 inside the curly braces ..
r'^(.*?(cat.*?){2})cat', r'1Bull', "cat goose mouse horse pig cat foo cat cow") 'cat goose mouse horse pig cat foo Bull cow're.sub(
Play with the above regex on here …
Answer #2:
I use simple function, which lists all occurrences, picks the nth one’s position and uses it to split original string into two substrings. Then it replaces first occurrence in the second substring and joins substrings back into the new string:
import re def replacenth(string, sub, wanted, n) where = [m.start() for m in re.finditer(sub, string)][n-1] before = string[:where] after = string[where:] after.replace(sub, wanted, 1) newString = before + after print newString
For these variables:
string = 'ababababababababab' sub = 'ab' wanted = 'CD' n = 5
outputs:
ababababCDabababab
Notes:
The
wherevariable actually is a list of matches’ positions, where you pick up the nth one. But list item index starts with
0usually, not with
1. Therefore there is a
n-1index and
nvariable is the actual nth substring. My example finds 5th string. If you use
nindex and want to find 5th position, you’ll need
nto be
4. Which you use usually depends on the function, which generates our
n.
This should be the simplest way, but it isn’t regex only as you originally wanted.
Sources and some links in addition:
-
whereconstruction: Find all occurrences of a substring in Python
- string splitting:
- similar question: Find the nth occurrence of substring in a string
Answer #3:
Here’s a way to do it without a regex:
def replaceNth(s, source, target, n): inds = [i for i in range(len(s) - len(source)+1) if s[i:i+len(source)]==source] if len(inds) < n: return # or maybe raise an error s = list(s) # can't assign to string slices. So, let's listify s[inds[n-1]:inds[n-1]+len(source)] = target # do n-1 because we start from the first occurrence of the string, not the 0-th return ''.join(s)
Usage:
In [278]: s Out[278]: 'cat goose mouse horse pig cat cow' In [279]: replaceNth(s, 'cat', 'Bull', 2) Out[279]: 'cat goose mouse horse pig Bull cow' In [280]: print(replaceNth(s, 'cat', 'Bull', 3)) None
Answer #4:
I would define a function that will work for every regex:
import re def replace_ith_instance(string, pattern, new_str, i = None, pattern_flags = 0): # If i is None - replacing last occurrence match_obj = re.finditer(r'{0}'.format(pattern), string, flags = pattern_flags) matches = [item for item in match_obj] if i == None: i = len(matches) if len(matches) == 0 or len(matches) < i: return string match = matches[i - 1] match_start_index = match.start() match_len = len(match.group()) return '{0}{1}{2}'.format(string[0:match_start_index], new_str, string[match_start_index + match_len:])
A working example:
str = 'cat goose mouse horse pig cat cow' ns = replace_ith_instance(str, 'cat', 'Bull', 2) print(ns)
The output:
cat goose mouse horse pig Bull cow
Another example:
str2 = 'abc abc def abc abc' ns = replace_ith_instance(str2, 'abcs*abc', '666') print(ns)
The output:
abc abc def 666
Answer #5:
How to replace the
nth
needle with
word:
s.replace(needle,'$$$',n-1).replace(needle,word,1).replace('$$$',needle)
Answer #6:
You can match the two occurrences of “cat”, keep everything before the second occurrence (
1) and add “Bull”:
new_str = re.sub(r'(cat.*?)cat', r'1Bull', str, 1)
We do only one substitution to avoid replacing the fourth, sixth, etc. occurrence of “cat” (when there are at least four occurrences), as pointed out by Avinash Raj comment.
If you want to replace the
n-th occurrence and not the second, use:
n = 2 new_str = re.sub('(cat.*?){%d}' % (n - 1) + 'cat', r'1Bull', str, 1)
BTW you should not use
str as a variable name since it is a Python reserved keyword.
Answer #7:
Create a repl function to pass into
re.sub(). Except… the trick is to make it a class so you can track the call count.
class ReplWrapper(object): def __init__(self, replacement, occurrence): self.count = 0 self.replacement = replacement self.occurrence = occurrence def repl(self, match): self.count += 1 if self.occurrence == 0 or self.occurrence == self.count: return match.expand(self.replacement) else: try: return match.group(0) except IndexError: return match.group(0)
Then use it like this:
myrepl = ReplWrapper(r'Bull', 0) # replaces all instances in a string new_str = re.sub(r'cat', myrepl.repl, str) myrepl = ReplWrapper(r'Bull', 1) # replaces 1st instance in a string new_str = re.sub(r'cat', myrepl.repl, str) myrepl = ReplWrapper(r'Bull', 2) # replaces 2nd instance in a string new_str = re.sub(r'cat', myrepl.repl, str)
I’m sure there is a more clever way to avoid using a class, but this seemed straight-forward enough to explain. Also, be sure to return
match.expand() as just returning the replacement value is not technically correct of someone decides to use
1 type templates. | https://discuss.dizzycoding.com/how-to-find-and-replace-nth-occurrence-of-word-in-a-sentence-using-python-regular-expression/ | CC-MAIN-2022-33 | en | refinedweb |
Terraform Bridge Provider Boilerplate
This repository contains boilerplate code for building a new Pulumi provider which wraps an existing Terraform provider.
Background
This repository is part of the guide for authoring and publishing a Pulumi Package.
Creating a Pulumi Terraform Bridge Provider
The following instructions cover:
- providers maintained by Pulumi (denoted with a “Pulumi Official” checkmark on the Pulumi registry)
- providers published and maintained by the Pulumi community, referred to as “third-party” providers
We showcase a Pulumi-owned provider based on an upstream provider named
terraform-provider-foo. Substitute appropriate values below for your use case.
Note: If the name of the desired Pulumi provider differs from the name of the Terraform provider, you will need to carefully distinguish between the references – see for an example.
Prerequisites
Ensure the following tools are installed and present in your
$PATH:
pulumictl
- Go 1.17 or 1.latest
- NodeJS 14.x. We recommend using nvm to manage NodeJS installations.
- Yarn
- TypeScript
- Python (called as
python3). For recent versions of MacOS, the system-installed version is fine.
- .NET
Creating and Initializing the Repository
Pulumi offers this repository as a GitHub template repository for convenience. From this repository:
- Click “Use this template”.
- Set the following options:
- Owner: pulumi (third-party: your GitHub organization/username)
- Repository name: pulumi-foo (third-party: preface your repo name with “pulumi” as standard practice)
- Description: Pulumi provider for Foo
- Repository type: Public
- Clone the generated repository.
From the templated repository:
Run the following command to update files to use the name of your provider (third-party: use your GitHub organization/username):
make prepare NAME=foo REPOSITORY=github.com/pulumi/pulumi-foo
This will do the following:
- rename folders in
provider/cmdto
pulumi-resource-fooand
pulumi-tfgen-foo
- replace dependencies in
provider/go.modto reflect your repository name
- find and replace all instances of the boilerplate
xyzwith the
NAMEof your provider.
Note for third-party providers:
- Make sure to set the correct GitHub organization/username in all files referencing your provider as a dependency:
examples/go.mod
provider/resources.go
sdk/go.mod
provider/cmd/pulumi-resource-foo/main.go
provider/cmd/pulumi-tfgen-foo/main.go
Modify
README-PROVIDER.mdto include the following (we’ll rename it to
README.mdtoward the end of this guide):
- Any desired build status badges.
- An introductory paragraph describing the type of resources the provider manages, e.g. “The Foo provider for Pulumi manages resources for Foo.
- In the “Installing” section, correct package names for the various SDK libraries in the languages Pulumi supports.
- In the “Configuration” section, any configurable options for the provider. These may include, but are not limited to, environment variables or options that can be set via
pulumi config set.
- In the “Reference” section, provide a link to the to-be-published documentation.
- Feel free to refer to the Pulumi AWS provider README as an example.
Composing the Provider Code – Prerequisites
Pulumi provider repositories have the following general structure:
examples/contains sample code which may optionally be included as integration tests to be run as part of a CI/CD pipeline.
provider/contains the Go code used to create the provider as well as generate the SDKs in the various languages that Pulumi supports.
provider/cmd/pulumi-tfgen-foogenerates the Pulumi resource schema (
schema.json), based on the Terraform provider’s resources.
provider/cmd/pulumi-resource-foogenerates the SDKs in all supported languages from the schema, placing them in the
sdk/folder.
provider/pkg/resources.gois the location where we will define the Terraform-to-Pulumi mappings for resources.
sdk/contains the generated SDK code for each of the language platforms that Pulumi supports, with each supported platform in a separate subfolder.
In
provider/go.mod, add a reference to the upstream Terraform provider in the
requiresection, e.g.
github.com/foo/terraform-provider-foo v0.4.0
In
provider/resources.go, ensure the reference in the
importsection uses the correct Go module path, e.g.:
github.com/foo/terraform-provider-foo/foo
Download the dependencies:
cd provider && go mod tidy && cd -
Create the schema by running the following command:
make tfgen
Note warnings about unmapped resources and data sources in the command’s output. We map these in the next section, e.g.:
warning: resource foo_something not found in provider map; skipping warning: resource foo_something_else not found in provider map; skipping warning: data source foo_something not found in provider map; skipping warning: data source foo_something_else not found in provider map; skipping
Adding Mappings, Building the Provider and SDKs
In this section we will add the mappings that allow the interoperation between the Pulumi provider and the Terraform provider. Terraform resources map to an identically named concept in Pulumi. Terraform data sources map to plain old functions in your supported programming language of choice. Pulumi also allows provider functions and resources to be grouped into namespaces to improve the cohesion of a provider’s code, thereby making it easier for developers to use. If your provider has a large number of resources, consider using namespaces to improve usability.
The following instructions all pertain to
provider/resources.go, in the section of the code where we construct a
tfbridge.ProviderInfo object:
Add resource mappings: For each resource in the provider, add an entry in the
Resourcesproperty of the
tfbridge.ProviderInfo, e.g.:
// Most providers will have all resources (and data sources) in the main module. // Note the mapping from snake_case HCL naming conventions to UpperCamelCase Pulumi SDK naming conventions. // The name of the provider is omitted from the mapped name due to the presence of namespaces in all supported Pulumi languages. "foo_something": {Tok: tfbridge.MakeResource(mainPkg, mainMod, "Something")}, "foo_something_else": {Tok: tfbridge.MakeResource(mainPkg, mainMod, "SomethingElse")},
Add CSharpName (if necessary): Dotnet does not allow for fields named the same as the enclosing type, which sometimes results in errors during the dotnet SDK build. If you see something like
error CS0542: 'ApiKey': member names cannot be the same as their enclosing type [/Users/guin/go/src/github.com/pulumi/pulumi-artifactory/sdk/dotnet/Pulumi.Artifactory.csproj]
you’ll want to give your Resource a CSharpName, which can have any value that makes sense:
"foo_something_dotnet": { Tok: tfbridge.MakeResource(mainPkg, mainMod, "SomethingDotnet"), Fields: map[string]*tfbridge.SchemaInfo{ "something_dotnet": { CSharpName: "SpecialName", }, }, },
See the underlying terraform-bridge code here.
Add data source mappings: For each data source in the provider, add an entry in the
DataSourcesproperty of the
tfbridge.ProviderInfo, e.g.:
// Note the 'get' prefix for data sources "foo_something": {Tok: tfbridge.MakeDataSource(mainPkg, mainMod, "getSomething")}, "foo_something_else": {Tok: tfbridge.MakeDataSource(mainPkg, mainMod, "getSomethingElse")},
Add documentation mapping (sometimes needed): If the upstream provider’s repo is not a part of the
terraform-providersGitHub organization, specify the
GitHubOrgproperty of
tfbridge.ProviderInfoto ensure that documentation is picked up by the codegen process, and that attribution for the upstream provider is correct, e.g.:
GitHubOrg: "foo",
Add provider configuration overrides (not typically needed): Pulumi’s Terraform bridge automatically detects configuration options for the upstream provider. However, in rare cases these settings may need to be overridden, e.g. if we want to change an environment variable default from
API_KEYto
FOO_API_KEY. Examples of common uses cases:
"additional_required_parameter": {}, "additional_optional_string_parameter": { Default: &tfbridge.DefaultInfo{ Value: "default_value", }, "additional_optional_boolean_parameter": { Default: &tfbridge.DefaultInfo{ Value: true, }, // Renamed environment variables can be accounted for like so: "apikey": { Default: &tfbridge.DefaultInfo{ EnvVars: []string{"FOO_API_KEY"}, },
Build the provider binary and ensure there are no warnings about unmapped resources and no warnings about unmapped data sources:
make provider
You may see warnings about documentation and examples, including “unexpected code snippets”. These can be safely ignored for now. Pulumi will add additional documentation on mapping docs in a future revision of this guide.
Build the SDKs in the various languages Pulumi supports:
make build_sdks
Ensure the Golang SDK is a proper go module:
cd sdk && go mod tidy && cd -
This will pull in the correct dependencies in
sdk/go.modas well as setting the dependency tree in
sdk/go.sum.
Finally, ensure the provider code conforms to Go standards:
make lint_provider
Fix any issues found by the linter.
Note: If you make revisions to code in
resources.go, you must re-run the
make tfgen target to regenerate the schema.
The
make tfgen target will take the file
schema.json and serialize it to a byte array so that it can be included in the build output.
(This is a holdover from Go 1.16, which does not have the ability to directly embed text files. We are working on removing the need for this step.)
Sample Program
In this section, we will create a Pulumi program in TypeScript that utilizes the provider we created to ensure everything is working properly.
Create an account with the provider’s service and generate any necessary credentials, e.g. API keys.
- Password: (Create a random password in 1Password with the maximum length and complexity allowed by the provider.)
- Ensure all secrets (passwords, generated API keys) are stored in Pulumi’s 1Password vault.
Copy the
pulumi-resource-foobinary generated by
make providerand place it in your
$PATH(
$GOPATH/binis a convenient choice), e.g.:
cp bin/pulumi-resource-foo $GOPATH/bin
Tell Yarn to use your local copy of the SDK:
make install_nodejs_sdk
Create a new Pulumi program in the
examples/directory, e.g.:
mkdir examples/my-example/ts # Change "my-example" to something more meaningful. cd examples/my-example/ts pulumi new typescript # (Go through the prompts with the default values) npm install yarn link @pulumi/foo
Create a minimal program for the provider, i.e. one that creates the smallest-footprint resource. Place this code in
index.ts.
Configure any necessary environment variables for authentication, e.g
$FOO_USERNAME,
$FOO_TOKEN, in your local environment.
Ensure the program runs successfully via
pulumi up.
Once the program completes successfully, verify the resource was created in the provider’s UI.
Destroy any resources created by the program via
pulumi destroy.
Optionally, you may create additional examples for SDKs in other languages supported by Pulumi:
Python:
mkdir examples/my-example/py cd examples/my-example/py pulumi new python # (Go through the prompts with the default values) source venv/bin/activate # use the virtual Python env that Pulumi sets up for you pip install pulumi_foo
Follow the steps above to verify the program runs successfully.
Add End-to-end Testing
We can run integration tests on our examples using the
*_test.go files in the
examples/ folder.
Add code to
examples_nodejs_test.goto call the example you created, e.g.:
// Swap out MyExample and "my-example" below with the name of your integration test. func TestAccMyExampleTs(t *testing.T) { test := getJSBaseOptions(t). With(integration.ProgramTestOptions{ Dir: filepath.Join(getCwd(t), "my-example", "ts"), }) integration.ProgramTest(t, &test) }
Add a similar function for each example that you want to run in an integration test. For examples written in other languages, create similar files for
examples_${LANGUAGE}_test.go.
You can run these tests locally via Make:
make test
You can also run each test file separately via test tags:
cd examples && go test -v -tags=nodejs
Configuring CI with GitHub Actions
Third-party providers
- Follow the instructions laid out in the deployment templates.
Pulumi Internal
In this section, we’ll add the necessary configuration to work with GitHub Actions for Pulumi’s standard CI/CD workflows for providers.
Generate GitHub workflows per the instructions in the ci-mgmt repository and copy to
.github/in this repository.
Ensure that any required secrets are present as repository-level secrets in GitHub. These will be used by the integration tests during the CI/CD process.
Repository settings: Toggle
Allow auto-mergeon in your provider repo to automate GitHub Actions workflow updates.
Final Steps
Ensure all required configurations (API keys, etc.) are documented in README-PROVIDER.md.
Replace this file with the README for the provider and push your changes:
mv README-PROVIDER.md README.md
If publishing the npm package fails during the “Publish SDKs” Action, perform the following steps:
- Go to NPM Packages and sign in as pulumi-bot.
- Click on the bot’s profile pic and navigate to “Packages”.
- On the left, under “Organizations, click on the Pulumi organization.
- On the last page of the listed packages, you should see the new package.
- Under “Settings”, set the Package Status to “public”.
Now you are ready to use the provider, cut releases, and have some well-deserved 🍨! | https://golangexample.com/pulumi-provider-for-volcengine/ | CC-MAIN-2022-33 | en | refinedweb |
Available with Spatial Analyst license.
Summary
Identifies the best regions, or groups of contiguous cells, from an input utility (suitability) raster that satisfy a specified evaluation criterion and that meet identified shape, size, number, and interregion distance constraints.
This tool uses a parameterized region-growing (PRG) algorithm to grow candidate regions from seed cells by adding neighboring cells to the region that best preserves the specified shape but also maximizes utility for the region. Using a selection algorithm and an evaluation criterion—such as the highest average value—the best region or regions are selected from the candidate regions that meet identified size and spatial constraints. An example of a spatial constraint would be maintaining a certain minimum distance between regions.
Learn more about how the Locate Regions tool works parameter..
Syntax
LocateRegions(in_raster, {total_area}, {area_units}, {number_of_regions}, {region_shape}, {region_orientation}, {shape_tradeoff}, {evaluation_method}, {minimum_area}, {maximum_area}, {minimum_distance}, {maximum_distance}, {distance_units}, {in_existing_regions}, {number_of_neighbors}, {no_islands}, {region_seeds}, {region_resolution}, {selection_method})
Return Value
Code sample
LocateRegions example 1 (Python window)
The following Python window script demonstrates how to use the LocateRegions tool.
import arcpy from arcpy import env from arcpy.sa import * env.workspace = "C:/sapyexamples/data" outRegions = LocateRegions("suitsurface", 13.5, "SQUARE_MILES", 5, "CIRCLE", 0, 50, "HIGHEST_AVERAGE_VALUE", 2, 5, 1, 3, "MILES", "existingreg.shp", "EIGHT", "NO_ISLANDS", "SMALL", "LOW", "COMBINATORIAL") outRegions.save("C:/sapyexamples/output/outregions")
LocateRegions example 2 (stand-alone script)
Identifies the optimum eight regions from a suitability surface while meeting the spatial requirements.
# Name: LocateRegions_Ex_02.py # Description: Selects the best specified number of regions # Requirements: Spatial Analyst Extension # Import system modules import arcpy from arcpy import env from arcpy.sa import * # Set environment settings env.workspace = "C:/sapyexamples/data" # Set local variables InRaster1 = "suitsurface" InTotalArea2 = 13.5 InAreaUnits3 = "SQUARE_MILES" InNumberofRegions4 = 5 InRegionShape5 = "CIRCLE" InRegionOrientation6 = 0 InShapeTradeoff7 = 50 InEvaluationMethod8 = "HIGHEST_AVERAGE_VALUE" InMinimumArea9 = 2 InMaximumArea10 = 5 InMinimumDistance11 = 1 InMaximumDistance12 = 3 InDistanceUnits13 = "MILES" InExistingRegions14 = "existingreg.shp" InRegionofNeighbors15 = "EIGHT" InRegionNoIslands16 = "NO_ISLANDS" InRegionSeeds17 = "SMALL" InRegionResolution18 = "LOW" InCombinatorialThreshold19 = "COMBINATORIAL" # Check out the ArcGIS Spatial Analyst extension license arcpy.CheckOutExtension("Spatial") # Execute Locate Regions outRegions = LocateRegions(InRaster1, InTotalArea2, InAreaUnits3, InNumberofRegions4, InRegionShape5, InRegionOrientation6, InShapeTradeoff7, InEvaluationMethod8, InMinimumArea9, InMaximumArea10, InMinimumDistance11, InMaximumDistance12, InDistanceUnits13, InExistingRegions14, InRegionofNeighbors15, InRegionNoIslands16, InRegionSeeds17, InRegionResolution18, InCombinatorialThreshold19) # Save the output outRegions.save("C:/sapyexamples/output/outregions")
Environments
Licensing information
- Basic: Requires Spatial Analyst
- Standard: Requires Spatial Analyst
- Advanced: Requires Spatial Analyst | https://desktop.arcgis.com/en/arcmap/latest/tools/spatial-analyst-toolbox/locate-regions.htm | CC-MAIN-2022-33 | en | refinedweb |
Part 3 - Componentization¶
Our List of Pyroes can be displayed and individual Pyroes can be edited, but it is all an amalgam.
Since we are already doing it with our main application component,
AppComponent, and the listing component,
PyroesComponent and how it is
better done when creating large applications, we can separate the listing and
editing functionalities.
Copy the
top2 folder to
top3 and enter it. For example, with:
cp -r top2 top3 cd top3
Note
Under Windows and unless you have a proper shell installed (Cygwin, MSYS, GitBash, …) you are probably better off using the Windows Explorer to make a copy of the directory)
Adding a
PyroDetailComponent¶
Just as we did before to create
PyroesComponent, we can do it for
PyroDetailComponent. From inside the app directory create the skeleton for
a Component:
anpylar-component PyroDetail
The view of the project layout is
We’ll now move the details part from the html content of
PyroesComponent to
the html content of
PyroDetailComponent. Both html files.
<h2>My Pyroes</h2> <ul class="pyroes"> </ul> <pyro-detail></pyro-detail>
<div *_display=selected_.pyd_> <h2 {name}="selected_.name_.map(lambda x: x.upper())">{name} Details</h2> <div><span>pyd: </span><txt [selected_.pyd_]>{}</txt></div> <div> <label>name: <input *_fmtvalue=selected_.name_ </label> </div> </div>
Things to notice:
-
In
pyroes_component.htmlwe added a new tag:<pyro-detail></pyro-detail>
This is where the
PyroDetailComponentwill be auto-rendered.
To make sure this is the case, we will define the
selectorto have this specific value in
PyroDetailComponent
-
The html in
pyro_detail_component.htmlstill references the observable
selected_. Recall that this was defined in
PyroesComponentand in fact: it will still be.
Let’s see the Python counterparts before delivering the full explanation
from anpylar import Component, html from .pyroes import PyroesComponent from .pyro_detail import PyroDetailComponent class AppComponent(Component): title = 'Tour of Pyroes' bindings = {} def render(self, node): PyroesComponent()_)
from anpylar import Component, html class PyroDetailComponent(Component): selector = 'pyro-detail' bindings = {} def render(self, node): pass
Parent-Child Relationship¶
As mentioned above, the observable
selected_ is defined as a binding in
PyroesComponent. It is nowhere to be seen in
PyroDetailComponent, but
the associated html content uses it. A
Master-Child or
Parent-Child or
Component-SubComponent relationship is responsible.
Remember the html content for
PyroesComponent:
<h2>My Pyroes</h2> <ul class="pyroes"> </ul> <pyro-detail></pyro-detail>
... class PyroDetailComponent(Component): selector = 'pyro-detail' ...
The
<pyro-detail> tag and the
selector = 'pyro-detail' are the
keys. Because it happens inside the html code for
PyroesComponent, when the
associated component for the tag is instantiated (namely
PyroDetailComponent) it will become a child of the component in which
is being created.
Being a child, it can access the bindings from the parent. Hence the
capability to use the
selected_ observable.
Note
As done with
PyroesComponent, which is instantiated inside
AppComponent, we could have done the same with
PyroDetailComponent
In this case and to show an alternative, we have chosen to instantiate using a tag and defining a selector
Importing
pyro_detail_component¶
You may have noticed that we also shown
app_component.py in the code
samples above. And this is to show this
from anpylar import Component, html from .pyroes import PyroesComponent from .pyro_detail import PyroDetailComponent ...
PyroDetailComponent needs to be imported somewhere. It can be done in
this module or it could for example be done in
app_component.py or even in
pyroes_component.py: the choice is yours.
But if not imported: it will be just a file being idle in your file structure. Importing it, makes the component be part of the arsenal you can use in your app.
Let’s execute¶
We haven’t changed the functionality, simply how we distribute the functionality across components. The results are the same as in the previous example.
anpylar-serve top3
And go the browser
And our list of Pyroes will be displayed
Clicking on one of the Pyroes will:
- Show<<
Now that things have been broken down in different components, we can move on. | https://docs.anpylar.com/tutorial/top3/index.html | CC-MAIN-2020-45 | en | refinedweb |
Using Tailwind CSS With React
Subscribe On YouTube in the following way:
$ npx create-react-app react-tailwindcss
By using npx we’re able to execute the create-react-app script directly without needing to install it first. The new React project is named react-tailwindcss.
Let’s change into the newly created project folder by using the following command:
$ cd react-tailwindcss
Inside this folder you’ll find the basic React starter project template.
Adding Dependencies To The Project
The next step is to add dependencies to the project by using the yarn add command:
$ yarn add tailwindcss postcss-cli autoprefixer -D
Creating A Tailwind Configuration File
To further complete the project setup let’s also add a Tailwind CSS configuration file by executing the following command inside the project folder:
$ npx tailwind init —full
This command is creating a new file named tailwind.config.js with a basic Tailwind CSS configuration inside.
Configure PostCSS
Tailwind requires a CSS build process. To manage and configure this build process we’re using PostCSS. To be able
$ touch postcss.config.js
Insert the following code into the file:
module.exports = { plugins: [ require('tailwindcss'), require('autoprefixer') ], };
The PostCSS build process will make use of two plugins: tailwindcss and autoprefixer.
Injecting Tailwind CSS Into The Project
In the src folder we’re now creating a new subfolder styles. Inside that styles folder create a new file tailwind.css and insert the following lines of code:
@tailwind base; @tailwind components; @tailwind utilities;
Here we’re making use of the @tailwind directives to import CSS code from Tailwind’s base, components, and utilities packages.
Build Scripts
Now let’s add a corresponding build script to our package.json file.
First of all let’s take a look at the scripts section which is already available in package.json by default:
"scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" },
Then add the following new entry in this section:
"build:css": "postcss src/styles/tailwind.css -o src/styles/main.css"
This build:css script is associated with the command postcss src/styles/tailwind.css -o src/styles/main.css. This command used the PostCSS CLI to execute the CSS build for file src/styles/tailwind.css. The result of the CSS build is inserted into file src/styles/main.css.
Now the execution of the build:css script can be integrated into the start and build script as well, so that we’re making sure that the CSS build is executed everytime we’re starting the application server:
"start": "npm run build:css && react-scripts start", "build": "npm run build:css && react-scripts build",
Finally the scripts section should look like the following
"scripts": { "start": "npm run build:css && react-scripts start", "build": "npm run build:css && react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "build:css": "postcss src/styles/tailwind.css -o src/styles/main.css" },
The CSS build process can now be executed manually by executing:
$ yarn build:css
or by starting of the application server by executing:
$ yarn start
Using Tailwind CSS In A React Component
Now we’re ready to make use of Tailwind’s CSS classes in our React components, e.g. in App component like you can see in the following:
import React from 'react'; import './styles/main.css'; function App() { return ( > <button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">My Tailwind Button</button> </div> </div> ); } export default App;
The result should then look like what you can see in the following screenshot:
Advertisement:! | https://codingthesmartway.com/using-tailwind-css-with-react/ | CC-MAIN-2020-45 | en | refinedweb |
CVE-2019-1347: When a mouse over a file is enough to crash your system
by Luc
Categories: Technical -
Tags: Reven - Reverse Engineering - PE - Parsing - CVE - Taint - Kernel - PTE -
Discover Timeless Analysis Live. with an analysis with REVEN, our timeless analysis tool. For this analysis we recorded several short traces to isolate and understand how specific bytes in the PE led to the crash.
In total, we will show that exactly four locations are responsible for the crash, and how this can help understand the bug itself.
The minimal bytes to be modified from the original file are the following:
Note: Throughout this post, we will call these locations “byte”, even though the second one involves two bytes.
A fistful of bytes: First and second location
To begin with, we recorded the crash triggered by this PoC provided in the issue. From the KeBugCheck call we reach back the Page Fault and see that the address 0xfffff8035b2ae7ff is not mapped, and won’t be.
A closer look at the memcpy arguments shows that this address is built as 0xfffff8035b2a0000 + 0xe7ff, so we taint the value 0xe7ff to find where it comes from.
We instantly find out that this value comes unchanged from the PoC PE. Indeed, modifying this value in the PE disables the crash. Opening the PoC in any PE editor confirms that the value is related to the Relocation Directory RVA, as stated also by the issue.
The forward taint has an advantage compared to the backward taint: flags are tainted too. This can be tedious in many cases but here it turns out to be very effective when applied to 0xe7ff:
Firsts tests are just zero tests. The important one is the comparison with 0xf000. At first sight the alignment sounds like a simple overflow-related check, but it isn’t. We taint 0xf000, it actually comes from the PE:
The value 0xe03f is the field SizeOfImage from the PE header (modified in the PoC). The value 0xf000 is derived from this size as a 0x1000-aligned value. If we modify this entry to its original value, the crash doesn’t occur, proving that both of 0xe7ff and 0xe03f are directly linked to the crash.
We then tried to modify those two bytes in the original file, but unfortunately, it isn’t enough to trigger the BSOD. For the next parts we will determine the other needed bytes.
For a few bytes more
We decided to patch manually the file with the differences, in a -sort-of- dichotomic manner.
The idea is simple: using a PE editor, we compare the original file and the provided PoC. By applying/removing relevant modifications to the original file in correlation with the PoC, the two single bytes triggering the BSOD are isolated gradually. Note that we only had to perform this operation on the header part of the file, as the memory history reveals that the corpus of the file isn’t accessed.
This minimization could have been automated by a script that triggers and detect the BSOD in a VM, but in this case manual modification is enough.
Next part of this post analyzes how these two new bytes influence the CFG and eventually points out the bug. We recorded three traces more: the one induced by the minimal PoC (four locations modified), and two other traces where we let respectively the third byte and fourth byte unchanged. The two later tests don’t trigger the BSOD on purpose.
Third byte analysis
The third byte we modified is located at file offset 0x169, where we replaced 0x20 by 0xaf. A PE parser shows that it is related to a directory RVA, just like the relocation RVA mentioned earlier.
To detect where the CFG changed, we used the following naïve script with the Analysis Python API, that compares two traces instruction by instruction:
while (instructions_are_equal(tr1, tr2)): # Fetch next instruction from first trace and second trace tr1_id += 1 tr1 = rvn1.trace.transition(tr1_id) tr2_id += 1 tr2 = rvn2.trace.transition(tr2_id)
The full script is available in Appendix 1.
The algorithm is effective enough as we will only focus on one function: MiRelocateImage. In a few seconds, we get this output:
The results shows that the traces are divergent shortly after the beginning of MiRelocateImage, and the function exits almost right after a flag is tested:
Now we analyze why this flag is set to 0x1. We can follow it in memory in the trace that doesn’t crash:
This shows that the flag 0x1 comes from a check on a value, 0xb, and that 0xb comes itself from the file, unchanged. But 0xb isn’t the byte we modified, so we can ask, why is it linked to the flag?
The answer is that 0xb is a value in a structure, and the modified value decides where this structure starts:
The value 0x2008 - the 3rd byte that we modified to 0xaf08 -, is responsible for pointing the beginning of a structure, and a value from this structure is checked to decide whether or not relocating the image at the beginning of MiRelocateImage. When the byte 0x20 is changed to 0xaf, the offset pointing the start of the structure changes, the value in the structure is then different (with a high probability), the derived flag isn’t set, the execution of MiRelocateImage continues, resulting eventually in the crash.
This third byte analysis doesn’t show the bug as itself, but nevertheless, it shows that REVEN can explain why it is important. Actually, instead of modifying the 3rd byte from 0x2008 to 0xaf08, we can modify the aforementioned 0xb value to 0x0. This causes the system crash also, proving that our analysis is correct.
For the next part, we will analyze the crash itself, the bug, and how it is linked to the 4th byte.
The crash, The bug, and The fourth byte
Taint forward against the fourth byte
Analyzing the fourth byte is indeed tricky.
First we can taint forward this byte from the moment it is parsed:
With IDA synchronized, we see that this byte is used as an index into the array MiImageProtectionArray, and the value fetched is 0x6. Tainting this value 0x6 is also possible, yet in this case, the information given by the taint is verbose and difficult to analyze. We will continue this fourth byte analysis by having a look at the crash itself.
From the crash
From the beginning of this analysis, we only pointed out a page fault which couldn’t be resolved. But the page fault doesn’t seem to come from a common read overflow; the problem is elsewhere.
For this part we analyzed the code that precedes the KeBugCheckEx call:
We can see multiple checks on a zero value, leading to the crash. This value comes from memory at the end of an array containing what looks like PTE.
The question is, does the problem come from fetching the wrong PTE? (i.e. bad index in the PTE array?) or is it because there should be an entry there that doesn’t exist?
Next part answer this question.
Is offset in PTE array wrong?
First we can try to analyze where and how this PTE address is build. We can taint this address to see where it comes from:
The following code is responsible for the conversion:
0xfffff80359fa4ddb mov rcx, rdi 0xfffff80359fa4dde movabs rdx, 0xfffffb8000000000 0xfffff80359fa4de8 shr rcx, 9 0xfffff80359fa4dec movabs r8, 0x7ffffffff8 0xfffff80359fa4df6 and rcx, r8 0xfffff80359fa4df9 mov rax, rdx 0xfffff80359fa4dfc add rcx, rax 0xfffff80359fa4dff mov qword ptr [rbp - 0x28], rcx
At the beginning of this code, rdi contains the address 0xfffff8035b2ae7ff, that needs to be mapped. This code doesn’t seem to have any flaw, so we can deduce that probably, the problem is that the entry containing zero should have been populated, yet it isn’t.
Entry in PTE array isn’t populated
It is very probable that this array is populated in a loop, so we can find out where other entries have been set and see why the empty one isn’t:
We reach MiAddMappedPtes. Basically, it takes as argument the amount of PTE to add, and adds them. Let’s taint backward this number (i.e. 0xf), and almost immediately see that it comes from the first byte we’ve modified in the file:
The value 0xf corresponds to the required number of 0x1000-aligned blocks to handle a size of 0xe03f. This looks consistent, but the problem seems that even though 0xf entries should be added, only 0xe effectively are:
Another thing we can do is executing the previous script to detect where the execution differs from a trace that doesn’t crash:
We analyze these results:
At some point, a branch is taken to call GetSharedProtos instead of MiGetSubsectionDriverProtos. This is probably interesting but right now we don’t know enough about the context to exploit this information.
We need to analyze the loop termination conditions, at the end of the MiAddMappedPtes, to understand why only 0xe entries are added.
There are three consecutive checks:
0xfffff8035a50ea22 cmp rbx, rsi 0xfffff8035a50ea25 jae 0xfffff8035a50ea58 ($+0x33) 0xfffff8035a50ea27 cmp r11, rdi 0xfffff8035a50ea2a jae 0xfffff8035a50ea78 ($+0x4e) 0xfffff8035a50ea78 mov rbp, qword ptr [rbp + 0x10] 0xfffff8035a50ea7c test rbp, rbp 0xfffff8035a50ea7f je 0xfffff8035a65d5b6 ($+0x14eb37)
First condition check - SizeOfImage
It is pretty straightforward: the upper bound correspond to 0xf entries and it isn’t reached
The code just gets a pointer on the last entry of the array that will be filled.
Second condition check - Pointing out the chained list
The upper bound seems to be the number of 0x1000-aligned blocks needed for a section. Taint analysis shows that it is computed from the size of the section, located in the PE header.
More precisely, this number of page is set in a chained list entry, by MiParseImageSectionHeader:
This chained list entry is important as the last check depends on it.
Third condition check - Do we need to add one last PTE?
If the next entry in the chained list is empty (i.e. the current one is the last section), the algorithm checks whether or not it should add more pages. And this is where the bug occurs: instead of comparing the amount of page already effectively added with amount of page to be added, it compares the address in an array with the last entry of… a completely different array.
In the trace where the 4th byte isn’t tampered with (the crash is disabled), the code adds a new entry properly, but not in the trace that crashes. Here is what we can see in both:
Everything looks fine for the disabled test, but for the crash version, the first address compared is at a way higher address, hence the code considering that the upper boundary is reached and don’t need to add PTE anymore. The last entry isn’t populated, and the crash occurs when this last page is accessed, as no PTE exists for it.
Basically, what was intended:
// Check if another PTE is needed if (&array[current] < &array[last]) { add_entry(); }
But in this case, arrays may differ, hence the comparison being faulty.
The 4th byte, the last piece of the puzzle, is demystified now.
4th byte: Forcing the bad comparison
Impact
Recall that the naïve trace differential showed that GetSharedProtos is called instead of MiGetSubsectionDriverProtos. The two arrays that are incorrectly compared, come from these functions, respectively. The bug could have stayed unnoticed but the fourth byte tampering forces the usage of GetSharedProtos instead of MiGetSubsectionDriverProtos, leading to the faulty comparison.
Origin
The branch that calls GetSharedProtos is taken because of the value 0x2 in memory, we can use the memory history on this word:
Once again, we analyze the instructions right before this branch and see that the value in r8b is tested. This value is 0x6, which for an attentive reader may ring a bell. Taint analysis may help to trace where it comes from, or in this case we just fetch where it has been modified a few instruction before.
Actually, the value 0x6 comes from MiImageProtectionArray (a result found previously), and the index in this array was derived from the byte we modified in the file. This is how the 4th byte influences the CFG to force the comparison and trigger the bug.
Once upon a patch in the kernel
We recorded a last trace against an updated version of Microsoft Windows, to see how the bug is fixed.
Whilst we were expecting some checks around the faulty array comparison, the patch just avoids the need to reach that code.
In MiAddMappedPtes, we saw earlier that 0xf PTE are to be added. The code goes through a chained list containing structures that seem to represent each section, and the amount of PTE needed for each of these section. Let’s go through this chained list:
For each entry, we can see the pointer to the next entry at offset +0x10, and the number of blocks to add at offset +0x2c. In the crash version, this is a recap of blocks needed:
- section 1 needs 2 pages
- section 2 needs 8 pages
- section 3 needs 2 pages
- section 4 needs 2 pages
0xf - 2 - 8 - 2 - 2 = 0x1 last page to be added. Previously, the last page was added with an ad-hoc faulty piece of code that we described earlier.
In the patched version, we can see the following:
- section 1 needs 2 pages
- section 2 needs 8 pages
- section 3 needs 2 pages
- section 4 needs 3 pages
The last section has one more page, hence no need to add (through the faulty code) a last page. We can analyze the last round of MiAddMappedPtes and see where this 0x3 comes from:
For each section, MiParseImageSectionHeaders creates an entry, with (among other) the amount of 0x1000-block needed in it. This number is derived from the size defined in the PE header as we showed earlier.
Basically, the patch is: if there are still blocks to add compared to the SizeOfImage value (0xf blocks in total here), then when the last section is handled, the amount is replaced by the actual number of remaining needed pages.
The following pseudo code represents what is done in the patched version:
total_block = SizeToBlock(Image) for each section: // HERE IS THE PATCH total_block -= SizeToBlock(section) if (IsLastSection(section)): current_nblock = total_block else: current_nblock = SizeToBlock(section) BuildEntry([...] , current_nblock, [...])
This ad-hoc check and add used to be in MiAddMappedPtes, they are moved now, so the faulty code isn’t executed anymore.
As such, the last PTE is correctly added and no crash occurs.
Conclusion
There is no previous analysis for this vulnerability at the time of writing. We showed how we could analyze it precisely with REVEN, minimized the PoC and explained the influence of each faulty byte. In particular, we used the taint feature many times to quickly go through many memory manipulation and find the origin of some values.
Even though the CFG is usually tedious to follow, thanks to the Python Analysis API and other features, we were able to point out where key branches were taken and analyze why. Moreover, we did analyze precisely how Microsoft patched this issue; and it is now easier to figure out if this patch is enough or not.
Finally, this logical error wasn’t trivial to analyze. The capability to navigate the trace in time instead of restarting again and again the parsing with a debugger allowed us to spare a fair amount of time.
Appendix 1
Naïve script to perform simple trace differential analysis. Given two traces and two transitions, this script returns the transition (“instruction number”) when the bytecode differs.
import argparse import reven2 import logging logging.basicConfig(format='%(levelname)s:\t%(message)s', level=logging.INFO) def parse_args(): parser = argparse.ArgumentParser(description='Find the first different instruction between two traces\n') parser.add_argument('--host1', metavar='host1', dest='host1', help='Reven host 1, as a string ' '(default: "localhost")', default='localhost', type=str) parser.add_argument('--port1', metavar='port1', dest='port1', help='Reven port for first server' ', as an int (default: 13370)', type=int, default=13371) parser.add_argument('--host2', metavar='host2', dest='host2', help='Reven host 1, as a string ' '(default: "localhost")', default='localhost', type=str) parser.add_argument('--port2', metavar='port2', dest='port2', help='Reven port for second server' ', as an int (default: 13370)', type=int, default=13372) parser.add_argument('--tr1', metavar='tr1', dest='tr1', help='Start transition for the first trace', type=int) parser.add_argument('--tr2', metavar='tr2', dest='tr2', help='Start transition for the second trace', type=int) args = parser.parse_args() return args def instructions_are_equal(tr1, tr2): """ From 2 transitions, return True if instructions are identicals. """ return tr1.instruction.raw == tr2.instruction.raw if __name__ == '__main__': args = parse_args() logging.info("Finding difference between two traces...") # Get a server instance for both traces rvn1 = reven2.RevenServer(args.host1, args.port1) rvn2 = reven2.RevenServer(args.host2, args.port2) tr1_id = args.tr1 tr2_id = args.tr2 tr1 = rvn1.trace.transition(tr1_id) tr2 = rvn2.trace.transition(tr2_id) i = 0 while (instructions_are_equal(tr1, tr2)): # Fetch next instruction from both traces i += 1 tr1_id += 1 tr1 = rvn1.trace.transition(tr1_id) tr2_id += 1 tr2 = rvn2.trace.transition(tr2_id) if i % 100 == 0: logging.debug("{0} are identicals".format(i)) logging.info("Instructions are different at {0} - {1}".format(tr1.id, tr2.id)) logging.info("Done")
Discover Timeless Analysis Live. | https://blog.tetrane.com/2019/11/12/pe-parser-crash.html | CC-MAIN-2020-45 | en | refinedweb |
From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2002-10-04 21:29:40
David B. Held wrote:
> Ok, this is really frustrating. I'm trying trivial examples to understand
> how lambda expressions work, and I can't get it to compile on bcc or
> gcc. Here's what I'm doing:
>
> #include <boost/mpl/lambda.hpp>
> #include <boost/mpl/apply.hpp>
>
//--------------------------------------------------------------------------
> -
> namespace mpl = boost::mpl;
> using mpl::_;
>
> template <typename T>
> struct ownership
> { };
>
> int main(int argc, char* argv[])
> {
> typedef mpl::lambda<ownership<_> >::type f_;
> mpl::apply<f_, int>::type t;
> }
>
> gcc reports that 'type' is not a member of apply<...>. bcc reports:
>
> [C++ Error] apply.hpp(55): E2404 Dependent type qualifier
> 'ownership<_>' has no member type named 'apply'
>
> I've tried bind<> too, with no luck. Help!
There are several issues here. First, - and sorry for misleading you - I
completely forgot that even on a conforming compiler in order for the above
to work, 'ownership' class template should be a metafunction:
template <typename T>
struct ownership
{
typedef ownership type; // here
};
I was looking into a way to inform 'mpl::lambda<...>' that in certain cases
it doesn't need to insist on '::type' notation, but at this moment the
typedef is required.
With this change the example should compile cleanly on gcc.
Secondly, as Borland is not conforming, some more intrusive changes are
needed to make 'ownership' template lambda enabled:
template <typename T>
struct ownership
{
typedef ownership type;
struct rebind_;
};
BOOST_MPL_AUX_LAMBDA_SUPPORT(1,ownership)
But my main fault is that the above is not in the CVS yet - sorry! I'll try
to check it in tomorrow (there are still some things to be done about it).
Aleksey
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/10/37128.php | CC-MAIN-2020-45 | en | refinedweb |
Very cool! Thank you! :)
Thanks!
Thanks ! :)
Could you tell me how to use it, plese? Thank you!
Could you tell me how to use it by substance painter, please? Thank you!
Yeahhhh no idea how to actually use this.
Can someone advise where one can figure out what folders all of the resources go to, and how to use this? I continue to get errors:
[Substance Effect Presenter] Output ambient_occlusion was no recognized
[Effect Procedural View] Effect selected is not a substance effect designed
These should really come with instructions on how to utilize them, or at the very least a link to where one can figure it out. Otherwise its pretty pointless.
I was able to install this filter, but many of the sliders appear to do nothing even when I've made sure to toggle them on in the parameters... Has anyone been able to get this working fully? like the torn edges and burn / wet dont do anything and the effect looks water logged as soon as its applied to a layer..
Amazing filter! Very useful for decals.
Any tutorials?
import it as base material, and feed the 2 input: the base color, and the mask of your sticker
Very Very Very cool!!!!Nice!!!!!
Thank you very much! | https://share.substance3d.com/libraries/1102 | CC-MAIN-2020-45 | en | refinedweb |
Blog Post
Add Python support to Tekton Pipelines
Get a deeper understanding of the Tekton Pipeline architecture and experiment with adding Python support to Tekton Pipelines.
My colleague Priti Desai has been working on Tekton for more than a year and has made some great contributions. After seeing how much fun she was having, I decided to take a leap in the same direction. Priti already built a Tekton pipeline for Java and JavaScript applications, so I figured adding Python support to her pipeline was a great way to become familiar with Tekton.
What is Tekton
Tekton is a continuous integration and continuous delivery (CI/CD) pipeline that can operate natively inside your Kubernetes cluster. Tekton’s website offers a complete description of the open source project and there’s even this informative What is Tekton? lightboard video, but I’d still like to give my own brief description of the Tekton Pipeline architecture.
Kindly note that there are several common words such as “task” and “step”, which may become overloaded. To counter this, I will use capitalization to distinguish between Tekton resources and the common usage of those words. For example, in this sentence, Task is referring to Tekton Tasks. In this sentence, the task of writing this blog, task has its usual meaning.
Tasks and Steps
Tekton Pipelines are composed primarily of Tasks, Pipelines, and PipelineRuns that are written in YAML. A Task is a logical unit of work. They are analogous to functions – they can even take parameters! Each Task is composed of one or more Steps which are the actual containers that do the work. A Step can be a script, the running of a particular command, or any other appropriate container-based operation. Each Task has its own Kubernetes Pod in which the individual containers for each Step are run. In this programming analogy, a Step would correspond to individual statements that are composed to make up the function.
Pipelines
Pipelines are the way in which the individual Tasks are orchestrated to perform the end goal. This is roughly analogous to the main function in a program. This is where any other resources necessary to make the Pipeline function are declared. Keep in mind that this is just the general Task orchestration and not the actual instantiation of the Pipeline.
An additional nicety is using Tekton Workspaces. Workspaces allow for a single persistent volume to be shared across multiple Tasks that can be referenced by each Task in a unified way. This makes it easier to utilize the result of one Task within another, or even between Steps in the same Task.
PipelineRuns
Finally, PipelineRuns are the instantiation of a general Pipeline to perform a specific goal. Continuing the analogy, this is how you would actually run the Pipeline with specific inputs from the user. This is basically the difference between the source code or executable for the
sed command and actually running
sed 's/foo/bar/g' to get results.
Great! You now have a high-level understanding of the Tekton Pipeline I’m about to describe.
Python to Knative Tekton Pipeline
As stated earlier, I am building on the work documented by Priti. To learn more about the work I’m referencing, see Priti’s blog post discussing the Java and JavaScript Pipeline. She mentions that the Pipeline could be extended to work with other OpenWhisk runtimes and that is just what I have done! I have extended the pipeline to also work with the OpenWhisk Python runtime. In theory, this means you could seamlessly make your OpenWhisk Python Actions run on Knative.
In order to achieve this, you follow the same basic outline as described in Priti’s blog.
Note that labels 1 and 4 are done by Tekton itself, based on resources you specify in the Pipeline and PipelineRun, and are not explicit Tasks in your Pipeline.
Task 1
The first task in the Python Pipeline is to install the necessary app dependencies as described by the requirements.txt file. To do this, you must first install a virtual environment in the app folder. This is required to ensure any third-party libraries are retrieved and packaged with the application. There are four basic steps to process:
- Retrieve the app source.
- Install your virtual environment.
- Take note of the default packages installed in the virtual environment.
- Install the necessary requirements with pip.
The reason step 3 is required is because you do not want to package unnecessary standard libraries with your application since they will already be installed on the system. This is to reduce the size of the archive you create in the next step, and to reduce the startup time at invocation.
If you’d like to learn more about the package Python for OpenWhisk, I’d recommend reading Python Packages in OpenWhisk by James Thomas.
Task 2
The app source code and necessary third-party libraries are then packaged into a zip file that can be used in the OpenWhisk runtime.
- Create a list of packages in the virtual environment that you do not want to include in the zip file (these are the packages from step 3 above).
- Create a list of all packages that are present in the virtual environment and filter out the ones you want to exclude.
- Zip those packages, the app source code, and
virtualenv/bin/activate_this.py. This is done to inform Python about where to locate your third-party libraries.
Task 3
This is where you modify the Dockerfile for the OpenWhisk runtime to include the necessary environment variables and build that image before uploading it to a Docker repository.
Base64 the zip file and insert it into the __OW_ACTION_CODE environment variable. This is where OpenWhisk will unzip the action code so that it may be executed later. This step in particular is a little tricky because if you make the line too long, the Dockerfile will become invalid. This line length can be quickly exceeded by even the smallest of pip packages. To overcome this, use the
foldcommand to force each line to 80 characters, and use
sedto insert an escape to each line of the encoding. This prevents larger zip files from overwhelming the buffer.
You use Kaniko Executor to build this newly created Dockerfile that is then uploaded to the Docker repository of your choosing.
I’d like to share that I spent a fair bit of time piecing together step 1 in task 3. I actually did not know the
fold command existed until I found I needed it.
fold is an example of a good Unix tool; it doesn’t try to do more than folding long lines into a specific number of characters or bytes with a newline. Combined with a
sed script to escape your newlines and remove the final trailing newline, you are no longer bound like mere mortals to a specific buffer length. This is a good trick to have in your back pocket for when you are filling large environment variables (or any other large generated lines) in a Dockerfile and exceed the line limits. I had not seen this documented elsewhere, so I wanted to make a quick note of it in case you ever run into the same problem.
Example of a run
Using the example from James Thomas’ blog, I created a repository that has a simple joke program. Then I created a PipelineRun that specifies that repo as the app source:
apiVersion: tekton.dev/v1alpha1 kind: PipelineRun metadata: name: build-app-image spec: serviceAccountName: openwhisk-app-builder pipelineRef: name: build-openwhisk-app workspaces: - name: openwhisk-workspace persistentVolumeClaim: claimName: openwhisk-workspace params: - name: OW_APP_PATH value: "" - name: DOCKERFILE value: "core/python3Action/Dockerfile" - name: OW_ACTION_NAME value: "openwhisk-padding-app" resources: - name: app-git resourceSpec: type: git params: - name: url value: - name: runtime-git resourceSpec: type: git params: - name: url value: - name: app-image resourceSpec: type: image params: - name: url value: docker.io/pwplusni/openwhisk-jokes
I applied that PipelineRun to my Kubernetes cluster with Tekton installed, and the necessary assets from the
openwhisk-build repo were already applied. After a few minutes, I saw a successful build and a new image in my Docker image repository. I then applied the simple Knative service:
apiVersion: serving.knative.dev/v1alpha1 kind: Service metadata: name: openwhisk-python-app namespace: default spec: runLatest: configuration: revisionTemplate: spec: container: image: docker.io/pwplusni/openwhisk-padding-app:latest
Now I can curl that service for hilarious jokes, such as
{"joke": "I had a problem so I thought I'd use Java. Now I have a ProblemFactory."}, to my heart’s content. I hope that made you laugh!
Summary
I hope this blog has been informative and sparks your curiosity in learning more about Tekton and its capabilities. Tekton comes with a very welcoming community that I encourage you to explore! If you’d like to give this example a try and need a Kubernetes cluster, please grab a free cluster from IBM Cloud Kubernetes Service – no credit card needed! And finally, as always, stay safe, have fun, and happy hacking! | https://developer.ibm.com/depmodels/serverless/blogs/add-python-support-to-tekton-pipelines/ | CC-MAIN-2020-45 | en | refinedweb |
Formik is perhaps the leading choice of library to help implement forms in React. Version 2 was recently released and it introduces new hooks as well as improved support for checkboxes and select fields.
This post covers basic usage of Formik v2 with the
TextField,
Radio, and
Checkbox components provided by the Material UI library.
Starting with a blank Create React App project, add the appropriate dependencies:
yarn add formik yarn add @material-ui/core
You may also wish to add the Roboto font to Material UI per the installation guide.
Start by importing the
Formik component.
import { Formik } from 'formik'
Next add the
Formik component to your app. It has two required props:
initialValues and
onSubmit.
The
initialValues prop is for specifying an object with properties that correspond to each field in your form. Each key of the object should match the name of an element in your form.
The
onSubmit prop receives a function that is called when the form is submitted. The function is passed a
data parameter containing the submitted form’s data, and an object with properties that contain a number of functions that you can use to help disable the submit button, reset the form, and more (refer to the docs). In the example below, the function implementation simply logs the data to the console.
The
Formik component accepts a function as a child. Formik provides a number of properties as a parameter to the function. The most immediately relevant properties that can be pulled out using destructuring are
values (an object that represents the current state of the form), and the functions
handleChange,
handleBlur, and
handleSubmit.
For Material, import a
TextField and a
Button component:
import TextField from '@material-ui/core/TextField' import Button from '@material-ui/core/Button'
And incorporate them into Formik as follows:
function App() { return ( <div> <Formik initialValues={{ example: '' }} onSubmit={(data) => { console.log(data) }} >{({ values, handleChange, handleBlur, handleSubmit }) => ( <form onSubmit={handleSubmit}> <TextField name="example" onChange={handleChange} onBlur={handleBlur} value={values.example} /> <Button type="submit">Submit</Button> </form> )}</Formik> </div> ) }
To simplify the tedious process of adding
values,
handleChange,
handleBlur, and
handleSubmit you can use Formik’s helper components
Form and
Field.
The
Form component replaces the standard HTML
form tag. It is automagically passed the
onSubmit/
handleSubmit function (via internal use of the Context API) so you don’t need to add this every time.
The
Field component needs to only be passed a
name and
type prop. It automagically gets the
value,
onChange, and
onBlur.
A
Field component with
type “text” will render a default HTML5 input by default. To use Material, there’s another prop,
as, where you can pass a component that you want the field to render as. As long as the component you pass is capable of accepting
value,
onChange, and
onBlur props (as Material’s
TextField does) then you can use it. The
Field component will also pass any additional props it is given (e.g.
placeholder) to the component specified in the
as prop.
import { Formik, Form, Field } from 'formik'
function App() { return ( <div> <Formik initialValues={{ example: '' }} onSubmit={(data) => { console.log(data) }} >{({ values }) => ( <Form> <Field name="example" type="input" as={TextField} /> <Button type="submit">Submit</Button> </Form> )}</Formik> </div> ) }
The same technique works for checkboxes and radio buttons as the following example demonstrates:
import Radio from '@material-ui/core/Radio' import Checkbox from '@material-ui/core/Checkbox'
function App() { return ( <div> <Formik initialValues={{ example: '', name: '', bool: false, multi: [], one: '' }} onSubmit={(data) => { console.log(data) }} >{({ values }) => ( <Form> <div> <Field name="example" type="input" as={TextField} /> </div> <div> <Field name="name" type="input" as={TextField} /> </div> <div> <Field name="bool" type="checkbox" as={Checkbox} /> </div> <div> <Field name="multi" value="asdf" type="checkbox" as={Checkbox} /> <Field name="multi" value="fdsa" type="checkbox" as={Checkbox} /> <Field name="multi" value="qwerty" type="checkbox" as={Checkbox} /> </div> <div> <Field name="one" value="sun" type="radio" as={Radio} /> <Field name="one" value="moon" type="radio" as={Radio} /> </div> <Button type="submit">Submit</Button> </Form> )}</Formik> </div> ) }
However, if we want to show labels beside our fields, we run into an issue with how React Material is implemented. It uses a
FormControlLabel component that is in turn passed the component to render via its
control prop. Check the docs at:
This doesn’t jive well with our current paradigm. It is cleanest to implement a custom field.
Formik v2 adds a very convenient hook called
useField() to facilitate creating a custom field. The hook returns an array containing a
field object that contains the
value,
onChange, etc. and a
meta object which is useful for form validation. It contains properties such as
error and
touched.
import { useField } from 'formik'
In the example below, the
value,
onChange, etc properties are added to the
FormControlLabel as props using the spread operator:
{...field}.
import FormControlLabel from '@material-ui/core/FormControlLabel'
function ExampleRadio({ label, ...props }) { const [ field, meta ] = useField(props) return ( <FormControlLabel {...field} control={<Radio />} label={label} /> ) }
Now the
ExampleRadio component that was implemented with the help of the
useField() hook can replace the
Field component with
type “radio” in the above examples:
<ExampleRadio name="one" value="sun" type="radio" label="sun" />
So there you have it, a basic use of Formik 2 with React Material that works for the most popular form fields.
Refer to the docs to learn more about
useField and the
meta object and how it is relevant to form validation:
The docs also publish a validation guide: | https://firxworx.com/blog/coding/react/using-formik-2-with-react-material-design/ | CC-MAIN-2020-45 | en | refinedweb |
dp3t 0.1.1
Exposes the DP3T SDK API in Flutter.
flutter-dp3t #
Exposes the DP3T SDK API in Flutter.
Heavily inspired by this React-native homologous library.
Status #
Pre-alpha. Requires some manual setup to work. Not tested yet. Can change without notice. PRs are welcome!
The iOS SDK and Android SDK themselves are in alpha state.
Install #
We will publish this package in the future, but for now, you must install it locally:
$ git clone [email protected]:pgte/flutter-dp3t.git
Add
dependencies.dp3t to your
pubspec.yaml file:
dependencies: dp3t: path: <path to the locally-installed flutter-dp3t package>
Minimum deployment targets: #
- iOS 11.0
- Android: Minimum SDK version: 23. Target version: 29
Initialization #
Both iOS and Android require some native code to initialize the DP3T SDK. Here is an example from the embedded example app:
For these both, you will have to declare the dependency on the original SDK.
Permissions #
Both in iOS and Android you need to declare the permissions required for DP3T to work. Please look for them in the original SDKs:
Example app: #
See the included example app.
To run the example app from the terminal:
$ cd example $ flutter run
Known issues in the SDK #
- The error handling differs a lot between the iOS and the Andriod versions of the DP3T SDK. iOS halts the app on almost all the errors, while the Andriod version seems to handle them more properly.
- iOS needs initializing after resetting, while the Android versions does not.
- It doesn't look like the
jwtPublicKeyinitialization argument is being used in the Android version.
Use #
For the semantic of each API call, please consult the official DP3T documentation.
Import:
import 'package:dp3t/dp3t.dart';
API #
Future<void> initializeManually({String appId, String reportBaseUrl, String bucketBaseUrl, String jwtPublicKey}) #
Example:
await Dp3t.initializeManually({ appId: "some app id", reportBaseUrl: "", bucketBaseUrl: "", jwtPublicKey: jwtPublicKey}) // Base64-encoded JWT
Future<void> initializeWithDiscovery({ String appId, bool dev }) #
Example:
Dp3t.initializeWithDiscovery({ appId: "some app id", // used for discovery dev: true // true if in the development environment })
Future<void> reset() #
Example:
await Dp3t.reset()
Future<void> startTracing() #
Example:
await Dp3t.startTracing()
Future<void> stopTracing() #
Example:
await Dp3t.stopTracing()
Future<Map> status() #
Example:
status = await Dp3t.status()
The status map is an object with the following shape:
{ "tracingState", "numberOfHandshakes", "numberOfContacts", "healthStatus", "errors": Array<String>, "nativeErrors": Array<String>, "matchedContacts": Array< { "id", "reportDate" } >, "lastSyncDate", "nativeErrorArg", "nativeStatusArg" }
Future<void> iWasExposed({DateTime onset, String authentication}) #
Example:
await iWasExposed({onset: DateTime.now(), authentication: authenticationString }) | https://pub.dev/packages/dp3t | CC-MAIN-2020-45 | en | refinedweb |
Introduction to C
In this topic, we are going to learn about the Introduction to C. C language is one of the most popular high-level programming languages which was initially developed by the developer named Dennis Ritchie for the Unix OS primarily. The first time it was used on a Digital Equipment Corporation computer called PDP-11 in 1972. It is a procedural programming language whose main purpose was to be used as a system’s programming language to write an operating system. Many popular operating systems such as Unix operating system and all the Unix related applications are written in C language. It is among the most popular languages between developers as it’s easy to learn and code, produces efficient programs, is a structured language, able to handle low-level activities, can be compiled on a variety of computers.
Main Components of C
As we learned about Introduction to C in the above section here, Let us study the main components:
- This programming language was created so that Unix could be written using it.
- This language has B language as its immediate parent language which was developed in the 1970s.
- The ANSI(American National Standard Institute) formalized this language as an official programming language in 1988.
- When it comes about System friendly programming language, there is no better choice than C.
- The state of the art software is built using C language.
- The main reason for making use of C language as the system-specific programming language is because of its high speed and efficiency which is as close as the assembly language.
- The c programs have an extension of .c
Characteristics of C
Here are the main characteristics of the C language include
- Low-level memory access: The lightweight programming language requires a low level of memory access and hence is a good fit for system programming.
- Simplified keyword set: Rich and easy to understand and use a set of simplified keywords that meet one of the most important characteristics of this language
- The clean style: This language focuses on keeping the code neat and tidy and hence the code flow is clean.
- Pointer mechanism: The efficient use of a pointer and addressing mechanism in C language makes it a unique and a different characteristic from all other programming languages.
- An efficient language for compiler designing: The reasons such as the lightweight, rich and varied set of commands and features, ability to be able to work extremely well with hardware, low memory utilization makes it an ideal language for the development of compiler designing.
- It is a very robust language with a rich set of built-in operators and functions.
- The programs which are coded in C are fast and more efficient
- It is a highly portable language. It means that once the programs which are written in C can easily run on various other machines with next to no modification.
- It has a very huge collection of the library or built-in functions. It also provides us the capabilities to custom or create our own function and include it in the collection of C library.
- It is a highly extensible language.
Applications of C
The characteristics of this language there are many uses too:
- Operating systems: This language is used to develop operating systems because of its high flexibility and versatility.
- Microcontrollers: This language is used in system programming due to its efficiency and speed and hence at times replaces the need to use assembly language. The compiler of C directly converts into machine language. It also makes a good choice as it allows maximum control with a minimal set of commands.
- Scientific systems: This language is used in building and creating many scientific systems.
- Parent language for advanced languages: All the high-level programming languages are a result of C language, therefore knowing this language opens many doors for various other programming languages.
- Assemblers: All the assemblers which are put to use to execute machine-level hardware-specific systems are created in C language.
- Text Editors: One of the important feature or a characteristic of the text editor is that it’s lightweight and no language better than C can create the text editors.
- Print spoolers: The software program which is responsible to send the jobs to the printer once the command is fired is created with the help of C programming language.
- Network Drivers: The network drivers responsible for accessing the internet and running the WIFI and other kinds of drivers are all written in C language
- Modern Programs: Various modern programs whose prime requirement is to consume less memory and be closer to hardware communication are written in the C programming language.
- Databases: There are many databases which are required to store huge amount of data in them and thus are written in C language.
- Language interpreters: The various language interpreters are the ones who are responsible to change the language type from a high level to a machine level language.
- Utilities: Various command and program system specific utilities are also written in C language.
Advantages and Disadvantages
We are going to explore the advantages and disadvantages:
Advantages
Below are the advantages:
- C language forms as the building block for many major programming languages and have huge and powerful operators and data types and therefore makes it a fast and an efficient programming language.
- It is a highly portable language which means it is interoperable.
- The 32 keywords which are present as a part of built-in functions are present in ANSI-C. Alongside, user-built functions are also used widely.
- This language can be extended by making use of many other library functions.
- The modular structure of the programming language makes debugging, testing and programming of this language much easier.
Disadvantages
Below are the disadvantages:
- C language is devoid with the terminology and the concept of OOPS which is a very popular and an important concept these days among all high-level programming language.
- No Strict type checking possible.
- No checks for runtime
- It doesn’t give us the provision of having a namespace.
- It also doesn’t have the concept of the constructor as well as a destructor.
Recommended Articles
This has been a guide on the introduction to c. Here we have discussed characteristics, components, application, advantages, and disadvantages of c. You may also look at the following article to learn more – | https://www.educba.com/introduction-to-c/ | CC-MAIN-2020-45 | en | refinedweb |
I am looking for a clown to replace me as a regular on these forums. You won't receive payment but you can consider it a service to the cprogramming.com community! Anyone who's interested in this...
Type: Posts; User: Barney McGrew
I am looking for a clown to replace me as a regular on these forums. You won't receive payment but you can consider it a service to the cprogramming.com community! Anyone who's interested in this...
One other common use is to get the number of elements in an array. This can be used like so:
int x[10];
size_t i;
for (i = 0; i < sizeof x / sizeof x[0]; i++)
x[i] = 1;
What you have is fine as it is; it's an error to dereference currentP->next if currentP->next stores a null pointer. Your code will ensure that currentP->next isn't dereferenced in such a case.
...
qsort requires each element to be of the same length so, obviously, you can't move strings around in an array using it, since they may have different sizes. What you need to do is store a pointer to...
Try another source: qsort(3) - Linux manual page
Yeah. Make your program data-oriented, so that all the information you spam the user with is stored in one place, then write your program so that it operates on that data.
Why not use a growing array to store the input? Your program would be more portable since you won't need to use stat.
In regard to your compare function, try reading the example provided in 'man...
',' is evaluated after '='.
Why don't you just do this?:
#include <stdio.h>
int printmac(const unsigned char ptr[6])
{
return printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
ptr[0], ptr[1], ptr[2], ptr[3], ptr[4],...
Some men annoy a lad called waitpid with their concerns.
Violence never solves anything. Try again when you're ready to use your keyboard properly.
How about completely rewriting it?
static int pali_(const char *start, const char *end)
{
return start >= end ? 1 : *start == *end && pali_(start + 1, end - 1);
}
int pali(const char *s)
{...
You can't do this in standard C, so find an external library that will do what you want and, if you have trouble with that library, you'll surely get better help by asking the people who maintain...
C's written in English, though compilers and interpreters for it are written in a variety of languages. You even see C implementations that use multiple programming languages, for instance, a...
You don't want to use calloc for pointer types, since it initialises them with all bits zero, which is a meaningless value for pointers. malloc would be a better choice and, if you need them...
char *strcat(char *s, const char *append)
{
return strcpy(s + strlen(s), append), s;
}
Figure out how to implement strcpy and strlen, and it should be simple enough to understand.
@megafiddle:
Well, I started writing a response, but most of what I was writing seemed to be the same points made in 'C For Smarties', so I'll just post a link to that.
C For Smarties
It's implementation-dependent. Your system should provide a way to redirect your program's output to a printer.
EDIT: On most Unix-like systems that's as easy as:
./program > /dev/lp
You could...
I see. This makes me wonder about code like:
int a[3][10];
a[0] < a[2];
Would you say it has undefined behaviour? On the one hand, both pointers point into separate aggregates, but on the other,...
'void *' and 'char *' are both able to represent all object pointer types. (6.2.5p28, 6.3.2.3p1)
Could it be that when two pointers are cast to 'void *', that the destination pointers are...
You'll need to include <stdlib.h> to make the declaration for qsort available as well. It's also a good idea to terminate your program's output with a new-line character.
Looks like somebody needs a diaper change!
In your first post you mentioned that the size would be eight bits for the header field. Assuming you're referring to the packed size and that CHAR_BIT is 8, I think you want the following:
unsigned...
Are you sure it has undefined behaviour, rather than unspecified behaviour? If it were true I think that would make it impossible to implement memmove() in a standard-compliant manner. | https://cboard.cprogramming.com/search.php?s=2813e785ed0e2067ea7a1f8841ccb4e8&searchid=2965602 | CC-MAIN-2020-10 | en | refinedweb |
Son-Hai Nguyen2,479 Points
I need help on this ```combiner()``` exercise
Please have a look at my following method. I tested it on Worksplaces, it worked just fine just like the requirement, but somehow the Recheck work kept saying there's a
TypeError: sequence item 0: expected str instance, list found
Is there anyone know what I missed?
Thank you!!!
def combiner(*args): numLs = [] strLs = [] for a in args: if isinstance(a, (int, float)): numLs.append(a) else: strLs.append(a) sumInt = str(sum(numLs)) strLs.append(sumInt) return ''.join(strLs) combiner("apple", 5.2, "dog", 8)
3 Answers
KRIS NIKOLAISEN53,322 Points
The function takes a single argument. If you want to test in a workspace try:
print(combiner(["apple", 5.2, "dog", 8]))
KRIS NIKOLAISEN53,322 Points
*args accepts a variable list of arguments (so there can be more than one). The challenge will pass in a single list (any number of items but they will be enclosed in brackets so there will only be one list)
The easiest fix to your code is just remove the * from your parameter
KRIS NIKOLAISEN53,322 Points
If you add print statements you better see what is going on:
def combiner(*args): numLs = [] strLs = [] for a in args: print(a) if isinstance(a, (int, float)): numLs.append(a) else: strLs.append(a) sumInt = str(sum(numLs)) strLs.append(sumInt) print(strLs) return ''.join(strLs) print(combiner(["apple", 5.2, "dog", 8]))
In the loop a is a list so since it since it isn't an int or float it is appended to strLs. Since that is the only argument there are no numbers to sum so sumInt = '0'. Append this to strLs you end up with [['apple', 5.2, 'dog', 8], '0']. At which point using join on a list and a string causes the error.
Son-Hai Nguyen2,479 Points
Son-Hai Nguyen2,479 Points
Thanks Kris, for your answer. But what does it mean? Doesnt the
*argsstand for a list of arguments? How to iterate through it (since it's still a list right? I saw the [] here) if it's a single argument? | https://teamtreehouse.com/community/i-need-help-on-this-combiner-exercise | CC-MAIN-2020-10 | en | refinedweb |
- Do Unto Others
- Undo Stack
- An Example Program
- Conclusion
An Example Program
Example program UndoRedo, shown in Figure 1, uses SOAP (simple object access protocol) serializations to provide undo and redo features. Select a drawing tool from the toolbar. Then click and drag to draw a new object. Click and drag again to create more objects of the same type, or select a new tool.
Figure 1 Program UndoRedo uses SOAP serializations to provide undo and redo features.
After you have drawn something, the program enables the Edit menu's Undo command. Use that command to remove the last thing you drew. Use Undo repeatedly to remove other objects.
After you have undone an object, the program enables the Edit menu's Redo command. Use the Redo command to restore the last object you removed.
The File menu's Save As command lets you save your current drawing in a file. The Open command lets you load a saved picture.
SOAP
Example program UndoRedo uses a SoapFormatter to save its serializations in a SOAP format. I decided to use the SoapFormatter for two reasons:
First, the program saves objects that are subclassed from other objects. These classes confuse the XmlSerializer so that rules out XmlSerializer. That's a shame because XmlSerializer produces a more concise and readable result than SoapFormatter.
Second, the other candidate, BinaryFormatter, produces binary serializations that you cannot read. The program could use a binary serialization, but it's easier to debug a program when you can read and modify its output. When there's a problem saving or loading a serialization, you can take a look and see what's going wrong. You can even make changes to the text serialization to see what happens.
Serializations created by the BinaryFormatter take less space than SOAP serializations, so you may prefer the binary serialization when space is at a premium. You will probably be better off debugging your program with the SoapFormatter first and then switching to a BinaryFormatter after everything works. In most cases, the space saving is a minor issue anyway.
Listing 1 shows the SOAP serialization for a small drawing containing a rectangle and an ellipse. I have added indentation to make the result easier to read.
The SOAP-ENC:Array tag represents the program's main serialization object. The SOAP-ENC:ArrayType attribute indicates that this is an array containing three Drawable objects.
The three item tags contained inside the SOAP-ENC:Array tag represent the objects contained in the main Drawable array. The first item's xsi:null attribute indicates that its item is empty. In the program, the first item in the array is not used so the array does contain an empty entry. The second and third items refer to the XML elements named ref-3 and ref-4 that follow in the serialization.
The next a1:DrawableRectangle element represents the first drawing object. Its id attribute indicates that this is the ref-3 object to which the second item element refers. This item has four properties: X1, Y1, X2, and Y2.
The final object in the serialization is an a1:DrawableEllipse element. This is the ref-4 object to which the third item element refers. This item also has four properties: X1, Y1, X2, and Y2.
Listing 1. SOAP Serialization Representing a Rectangle and an Ellipse
<SOAP-ENV:Envelope xmlns: <SOAP-ENV:Body> <SOAP-ENC:Array SOAP-ENC: <item xsi: <item href="#ref-3"/> <item href="#ref-4"/> </SOAP-ENC:Array> <a1:DrawableRectangle <X1>23</X1> <Y1>31</Y1> <X2>262</X2> <Y2>204</Y2> </a1:DrawableRectangle> <a1:DrawableEllipse <X1>43</X1> <Y1>39</Y1> <X2>129</X2> <Y2>192</Y2> </a1:DrawableEllipse> </SOAP-ENV:Body> </SOAP-ENV:Envelope>
SoapFormatter Namespace
Normally, when you use a class such as SoapFormatter, you use an Imports statement at the beginning of the module to tell Visual Basic to include the class' namespace. Unfortunately, Visual Basic doesn't initially know about the SoapFormatter. To make it available, you need to add a reference to the DLL that defines it.
Right-click on the Solution Explorer's References entry, and select Add Reference. Locate the entry named System.Runtime.Serialization.Formatters.Soap.dll, double-click it, and click the OK button. Now, you can add this Imports statement to include the namespace:
Imports System.Runtime.Serialization.Formatters.Soap
Your program can now create SoapFormatter objects, as in the following code:
Dim soap_formatter As New SoapFormatter()
Drawable Class
The Drawable class and its derivative classes, shown in Listing 2, represent the objects drawn by the UndoRedo program. The Drawable class declares the Draw subroutine with the MustOverride key word, so the derived classes must implement their own versions of this routine. Drawable has four public variables, X1, Y1, X2, and Y2, which define the area the object occupies.
The DrawableEllipse, DrawableRectangle, and DrawableLine classes implement their Draw subroutines in different ways to draw different shapes.
Listing 2. The Drawable Class and its Derivatives Represent Drawing Objects
' Drawable is an object that can draw itself. <Serializable()> Public MustInherit Class Drawable ' Draw the specific shape. Public MustOverride Sub Draw(ByVal gr As Graphics) ' The bounding box. Public X1 As Integer Public Y1 As Integer Public X2 As Integer Public Y2 As Integer End Class <Serializable()> Public Class DrawableEllipse Inherits Drawable ' Draw the shape. Public Overrides Sub Draw(ByVal gr As Graphics) gr.DrawEllipse(Pens.Black, _ Math.Min(X1, X2), _ Math.Min(Y1, Y2), _ Math.Abs(X2 - X1), _ Math.Abs(Y2 - Y1)) End Sub End Class <Serializable()> Public Class DrawableRectangle Inherits Drawable ' Draw the shape. Public Overrides Sub Draw(ByVal gr As Graphics) gr.DrawRectangle(Pens.Black, _ Math.Min(X1, X2), _ Math.Min(Y1, Y2), _ Math.Abs(X2 - X1), _ Math.Abs(Y2 - Y1)) End Sub End Class <Serializable()> Public Class DrawableLine Inherits Drawable ' Draw the shape. Public Overrides Sub Draw(ByVal gr As Graphics) gr.DrawLine(Pens.Black, X1, Y1, X2, Y2) End Sub End Class
You could extend these classes to provide other drawing features such as fill style, fill color, outline color, drawing style, and so forth. The classes shown in Listing 2 are good enough for this example.
Snapshots
Listing 3 shows the UndoRedo program's code that deals most directly with snapshots. The m_DrawingObjects array holds the current picture's Drawable objects. For easier indexing, entry 0 in the array is allocated but not used.
The m_UndoStack collection holds the program's snapshots. Each entry in the collection is a serialization of the m_DrawingObjects array. The variable m_CurrentSnapshot gives the index in m_UndoStack of the snapshot representing the currently displayed drawing.
The Serialization property makes saving and restoring the program's drawing easy. The Serialization property get procedure returns a SOAP serialization for the m_DrawingObjects array. The Serialization property set procedure takes a serialization and deserializes it to reinitialize the m_DrawingObjects array.
When you select the File menu's Save As command, the mnuFileSaveAs_Click event handler saves the Serialization property into the selected file. Some programs, such as WordPad, empty the undo stack when you save. Other programs, such as Word, leave the undo buffer unchanged. This program takes the second approach.
When you select the File menu's Open command, the mnuFileOpen_Click event handler reads the contents of the file you select into a string. It then sets the program's Serialization property to that string, and the Serialization property set procedure uses the serialization from the file to restore the saved drawing.
The SaveSnapshot subroutine adds a snapshot of the current picture to the m_UndoStack collection. The routine begins by removing any snapshots that come after the current one. If you undo several changes and then draw a new shape, the program calls SaveSnapshot. At that point, the program discards the actions you undid earlier.
Next, SaveSnapshot saves the Serialization value into the m_UndoStack collection. If the stack is too big, the routine removes some of the oldest serializations. This program saves at most 10 serializations, so it is easy for you to see how this works. A real application could probably save far more snapshots. For this program, each Drawable object adds about 125 bytes to the serialization. If a picture has 100 Drawable objects, its serialization takes around 12,500 bytes. Saving 100 serializations of this size in the undo stack would take up about 1.25MB of memory. You could probably afford that much memory, although you might want to stop short of 1,000 serializations, which would take up about 12.5MB.
After shrinking the undo stack, if necessary, the event handler sets m_CurrentSnapshot to the index of the most recent snapshot and calls subroutine EnableUndoMenuItems. That routine enables or disables the Undo and Redo commands as appropriate.
The Undo subroutine decrements m_CurrentSnapshot so it points to the previous snapshot. It sets the Serialization property to that snapshot to restore the picture in its previous state and redraws the picture.
The Redo subroutine increments m_CurrentSnapshot so it points to the next snapshot. It sets the Serialization property to that snapshot and redraws the picture.
Listing 3. The UndoRedo Program Uses this Undo/Redo and File Saving/Opening Code
' The drawing objects. Private m_DrawingObjects() As Drawable Private m_MaxDrawingObject As Integer ' The undo stack. Private m_UndoStack As Collection Private m_CurrentSnapshot As Integer ' Get ready. Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load ' Code omitted... ' Make an empty 1-entry array of drawing objects. ReDim m_DrawingObjects(0) m_MaxDrawingObject = 0 ' Make the empty undo stack. m_UndoStack = New Collection() m_CurrentSnapshot = 0 ' Save a blank snapshot. SaveSnapshot() End Sub ' Get or set our serialization. Private Property Serialization() As String ' Return a serialization for the current objects. Get Dim soap_formatter As New SoapFormatter() Dim memory_stream As New MemoryStream() ' Serialize the m_DrawingObjects. soap_formatter.Serialize(memory_stream, m_DrawingObjects) ' Rewind the memory stream to the beginning. memory_stream.Seek(0, SeekOrigin.Begin) ' Return a textual representation. Dim stream_reader As New StreamReader(memory_stream) Return stream_reader.ReadToEnd() End Get ' Load objects from the new serialization. Set(ByVal Value As String) Dim string_reader = New StringReader(Value) Dim soap_formatter As New SoapFormatter() ' Load the new objects. Dim memory_stream As New MemoryStream() Dim stream_writer As New StreamWriter(memory_stream) ' Write the serialization into the ' StreamWriter and thus the MemoryStream. stream_writer.Write(Value) stream_writer.Flush() ' Rewind the MemoryStream. memory_stream.Seek(0, SeekOrigin.Begin) ' Deserialize. m_DrawingObjects = soap_formatter.Deserialize(memory_stream) ' Save the new objects. m_MaxDrawingObject = m_DrawingObjects.GetUpperBound(0) ' Display the new objects. DrawObjects(picCanvas.CreateGraphics()) End Set End Property ' Open a saved file. Private Sub mnuFileOpen_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles mnuFileOpen.Click ' Let the user pick a file. If dlgOpen().ShowDialog = DialogResult.OK Then ' Set our serialization to the file's contents. Try Dim stream_reader As New StreamReader(dlgOpen.FileName) Me.Serialization = stream_reader.ReadToEnd() ' Remove all snapshots. m_CurrentSnapshot = 0 ' Save a snapshot of the new objects. SaveSnapshot() Catch exc As Exception MsgBox("Error loading file " & _ dlgOpen.FileName & vbCrLf & exc.Message) End Try End If End Sub ' Save the current drawing objects. Private Sub mnuFileSaveAs_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles mnuFileSaveAs.Click ' Let the user pick a file. If dlgSave().ShowDialog = DialogResult.OK Then ' Save the objects into the file. Try Dim stream_writer As New StreamWriter(dlgSave.FileName) stream_writer.Write(Me.Serialization) stream_writer.Close() Catch exc As Exception MsgBox("Error saving file " & _ dlgSave.FileName & vbCrLf & exc.Message) End Try End If ' Some programs flush the undo stack at this point. ' This program does not. End Sub ' Save a snapshot. Private Sub SaveSnapshot() Const MAX_SNAPSHOTS = 10 ' Remove all snapshots items after the current one. Do While (m_CurrentSnapshot < m_UndoStack.Count) m_UndoStack.Remove(m_CurrentSnapshot + 1) Loop ' Save the new snapshot. m_UndoStack.Add(Serialization) ' If we have too many snapshots, remove the oldest. Do While (m_UndoStack.Count > MAX_SNAPSHOTS) m_UndoStack.Remove(1) Loop ' Save the index of the current snapshot. m_CurrentSnapshot = m_UndoStack.Count ' Enable the proper undo/redo menu items. EnableUndoMenuItems() End Sub ' Enable the undo and redo menu commands appropriately. Private Sub EnableUndoMenuItems() mnuEditUndo.Enabled = (m_CurrentSnapshot > 1) mnuEditRedo.Enabled = (m_CurrentSnapshot < m_UndoStack.Count) End Sub ' Restore the previous snapshot. Private Sub Undo() ' Do nothing if there are no more snapshots. If m_CurrentSnapshot < 2 Then Exit Sub ' Restore the previous snapshot. m_CurrentSnapshot = m_CurrentSnapshot - 1 Serialization = m_UndoStack(m_CurrentSnapshot) ' Redraw. DrawObjects(picCanvas.CreateGraphics) End Sub ' Restore the next snapshot. Private Sub Redo() ' Do nothing if there are no more snapshots. If m_CurrentSnapshot >= m_UndoStack.Count Then Exit Sub ' Restore the next snapshot. m_CurrentSnapshot = m_CurrentSnapshot + 1 Serialization = m_UndoStack(m_CurrentSnapshot) ' Redraw. DrawObjects(picCanvas.CreateGraphics) End Sub
If you look closely at the Serialization property get procedure, you'll see that it doesn't depend on the structure of the m_DrawingObjects array that it is serializing. The procedure tells the SoapFormatter to serialize m_DrawingObjects, and the formatter figures out how to do that. Similarly, the Serialization property set procedure tells the SoapFormatter to re-create m_DrawingObjects from a serialization, and the SoapFormatter figures out how to do it.
This means you do not need to change the serialization code if you change the way the program stores its data. For example, if you change the definition of the Drawable class or its derived classes, or if you create new derived classes, the Serialization procedures will still work. You may need to do some work to load older files using the new class definitions, but the serialization code will run unchanged.
Download the source file here: | http://www.informit.com/articles/article.aspx?p=25047&seqNum=4 | CC-MAIN-2020-10 | en | refinedweb |
3.9 Key Point Summary
C++ functions have definitions and declarations. Function prototypes must always precede function calls.
Inline functions give you the type safety of functions with the performance of macros.
Recursive functions make extensive use of the run-time stack but are often simple and elegant solutions to complex problems.
Pointers to functions allow you to call functions through data structures, such as dispatch tables.
With function prototypes and definitions, all sizes of a multidimensional array argument are necessary except the first one. All sizes must be constant integer expressions, and the first size is optional.
Function signatures with default arguments obey the Positional Rule: arguments without default values must appear to the left of all default arguments.
The Standard Args technique with <stdarg.h> lets you design functions that accept a variable number of arguments.
Structures encapsulate data members and functions. Structures may nest, and structure copy and assignment is legal between structures of the same type.
Unions are like structures, except they allocate only enough memory to accommodate the largest data member. Unions may be anonymous.
References may appear in function signatures and function return values. References give you the efficiency of pointers without pointer notation. References with structures and unions eliminate structure copy and assignment.
The C++ keywords auto, static, register, extern, and mutable are storage class specifiers. Automatic variables do not retain their values between function calls, but static variables do. Register variables improve performance, and externals provide access of variables between modules. The mutable keyword allows constant member functions to modify data members of constant structures.
The keywords try, catch, and throw implement exception handling. Exception specifications document which, if any, exceptions a function may throw.
Namespace definitions divide global namespace into distinct parts. Each namespace defines a named scope, allowing names (variables, structures, enums, functions, classes, and typedefs) to exist outside local scope without polluting the global namespace.
Namespace using directives and using declarations provide easy access to namespace members. Namespace aliases provide shorter names for longer namespace qualifiers.
Operator new allocates memory from free store dynamically, and operator delete releases memory. Operator new throws standard exception bad_alloc when it fails to allocate free store memory.
Operator new allocates one object or an array of objects, which can be pointer arrays or multidimensional arrays. Operator delete lets you release one object or an array of objects in free store. | http://www.informit.com/articles/article.aspx?p=31783&seqNum=9 | CC-MAIN-2020-10 | en | refinedweb |
Improving the UI/UX of an Ionic Component
By Josh Morony
In this tutorial, we are going to improve upon the flash message service that we created in the last tutorial. I offered some suggestions for improving the service at the end of that tutorial which included:
- Allowing for different messages styles (success, danger, warning, etc.)
- Displaying a bar to indicate how long the message will be displayed
- Adding an indicator to communicate that the message can be closed by being tapped
We will be adding all of these features to the flash message service in this tutorial, and this will also include building another custom component to serve as the timing indicator. When we are done, the flash messages will look like this:
versus what it looked like in the last tutorial:
This certainly looks a lot cooler, but it is important to keep in mind how these UI (User Interface) changes are going to positively affect the UX (User Experience). Styling the messages appropriately allow us to more easily communicate the intent of the message – i.e. is this just a friendly reminder? or should the user be worried about something? By adding the time indicator bar we achieve two things: this suggests to the user that the message will disappear on its own, and it also communicates how long until that happens. Adding some kind of indicator that the message can be closed also communicates that fact to the user, otherwise they may always just wait until it closes on its own.
Once we have finished this tutorial, we will be able to include an optional extra parameter in our flash messages to determine the style of the message, i.e:
this.flashProvider.show('Base is under attack!', 2000, 'danger');
Later on in this tutorial, we will also get a chance to look at an interesting feature of
@ViewChild called a setter, which will allow us to circumvent an issue that arises due to the nature of our flash message service..
NOTE: This is a continuation of a previous tutorial, if you want to follow along step-by-step you will need to complete this tutorial first.
Build the Time Bar Component
I struggled with what to actually call this component. It is an established UI pattern – a bar that shrinks to indicate time remaining – but I wasn’t able to find what the accepted term for it is. A few suggestions on Twitter were just “progress bar”, or “release bar”, or “reverse progress bar”… personally, it reminds me of a fuse (like dynamite), but whatever, we are just going to build the thing.
Run the following command to generate the component:
ionic g component TimeBar
The general idea behind this component is quite simple. We have a bar that takes up
100% width, and then we want to animate it down to
0% width over a period of time. The tricky part is that we need to make that animation time customisable, so that it matches the length that the flash message will be displayed.
Let’s start by implementing the template and styling for the template, and then we will get into the logic.
Modify src/components/time-bar/time-bar.html to reflect the following:
<div class="time-bar"></div>
Modify src/components/time-bar/time-bar.scss to reflect the following:
time-bar { width: 100%; height: 8px; .time-bar { height: 100%; width: 100%; background-color: #fff; opacity: 0.3; } }
Nice and simple. We have the parent container set to
100% width, and inside of that we have the
.time-bar which is also
100% width for now but we will be animating that width. Now let’s take a look at the logic.
Modify src/components/time-bar/time-bar.ts to reflect the following:
import { Component, ElementRef, Renderer2 } from '@angular/core'; @Component({ selector: 'time-bar', templateUrl: 'time-bar.html' }) export class TimeBarComponent { constructor(private renderer: Renderer2, private element: ElementRef) { } startTimer(time){ this.renderer.setStyle(this.element.nativeElement.children[0], 'transition', 'width ' + time + 'ms linear'); setTimeout(() => { this.renderer.setStyle(this.element.nativeElement.children[0], 'width', '0%'); }, 0); } }
We just have a single method here that will trigger the timer to start called
startTimer and we supply it with the length of time that we want the timer to last. All we need to do is set some styles on the
.time-bar element. We first set the
transition property with the appropriate amount of time, so that the width change will be animated appropriately. Then we just change the width to
0% and it will animate to that length over the specified time. However, we do put it into a
setTimeout so that the code is run asynchronously – if we don’t do this, the
transition property won’t take effect properly. Please keep in mind that you generally shouldn’t rely on
setTimeout to deal with timing issues, but sometimes it is necessary.
If you are unfamiliar with using
Renderer and
ElementRef you should check out this tutorial (it is important to use
Renderer rather than modifying elements directly).
Integrate Time Bar with the Flash Message Service
Our time bar should now be working as we want. You could even just drop it anywhere you like by adding:
<time-bar></time-bar>
But, we are going to integrate it into our existing flash message service. We will start by adding our new component to our flash message component.
Modify src/components/flash/flash.html to reflect the following:
<div (click)="hide()" @messageState * <time-bar></time-bar> <div class="message"> {{message}} </div> </div>
We will also need to add some styling to the time bar for it to display properly, but we will get to that in the next section. For now, let’s focus on the logic for integrating the time bar.; constructor(private flashProvider: FlashProvider) { this.flashProvider.show = this.show.bind(this); this.flashProvider.hide = this.hide.bind(this); } show(message, duration){ this.message = message; this.active = true; this.duration = duration; this.timeout = setTimeout(() => { this.active = false; }, duration); } hide(){ this.active = false; clearTimeout(this.timeout); } }
Most of what is here is just stuff we covered in the last tutorial, but we are actually doing something pretty interesting here to get the time bar working. In order to start the timer on the time bar, we need to grab a reference to it and then call the
startTimer method. You might think that we would trigger the
startTimer method inside of
show, but instead, we do this:
@ViewChild(TimeBarComponent) set tb(timeBar: TimeBarComponent) { if(typeof(timeBar) !== 'undefined'){ timeBar.startTimer(this.duration); } }
If you are familiar with
@ViewChild then it probably won’t surprise you that we are using it to grab a reference to the time bar. If you are not familiar with
@ViewChild then I would recommend reading this tutorial – in short,
@ViewChild allows you to grab a reference to components in the view. The difference here is that we aren’t just using
@ViewChild to grab a reference, we are using a “setter function” to trigger the
startTimer method.
The issue we are trying to solve is that the entire flash message component, including our time bar, is surrounded in an
*ngIf structural directive. This means that when we first attempt to grab a reference to the time bar, it isn’t going to exist in the DOM. This setter function will be triggered when the time bar is added to the DOM, which happens as soon as we trigger the flash message, so we can instead use that to trigger the starting of the timer.
Typically, you would probably use the setter function to set the value of a class member like
this.timeBar and then access that elsewhere in the class, but since we just need to trigger the behaviour immediately we can do it from within the setter function.
Add Message Styles
Creating the time bar was the most complicated of the changes we wanted to make, now we will be adding in the different message types as well as a “tap to dismiss” message. We will be setting up styles to reflect the default colour variables that are included in Ionic, e.g:
primary,
secondary,
danger,
light,
dark. We are also going to add in the styles for our time bar so that it displays more nicely inside of the flash message.
First, let’s modify our flash component so that the
show method accepts an optional
type parameter – you could supply the
type as
primary, for example. We will set whatever parameter is passed in on a class member called
activeClass.; private activeClass = 'secondary'; constructor(private flashProvider: FlashProvider) { this.flashProvider.show = this.show.bind(this); this.flashProvider.hide = this.hide.bind(this); } show(message, duration, type?){ this.message = message; this.active = true; this.duration = duration; if(type){ this.activeClass = type; } this.timeout = setTimeout(() => { this.active = false; }, duration); } hide(){ this.active = false; clearTimeout(this.timeout); } }
Now, the
activeClass class member will specify the type of message we want to display. We can then use
ngClass in our template to add that class to the flash message component.
Modify src/components/flash/flash.html to reflect the following:
<div (click)="hide()" @messageState * <time-bar></time-bar> <div class="message"> {{message}} </div> <p class="dismiss">tap to dismiss</p> </div>
If you are unfamiliar with using
ngClass, you might be interested in checking our this video. Now if we have
activeClass set to
primary then our container for the flash bar component will be given a class of
primary. We have also added the “tap to dismiss” message above as well. Now we just need to add the appropriate styles.
Modify src/components/flash/flash.scss to reflect the following:
flash { .flash-container { position: absolute; top: 0; width: 100%; height: 56px; color: #fff; z-index: 1; display: flex; align-items: center; justify-content: center; time-bar { position: absolute; top: 0; } .dismiss { position: absolute; bottom: 0; margin: 0 0 5px 0; font-size: 0.7em; opacity: 0.5; } } .primary { background-color: map-get($colors, primary); } .secondary { background-color: map-get($colors, secondary); } .danger { background-color: map-get($colors, danger); } .light { background-color: map-get($colors, light); } .dark { background-color: map-get($colors, dark); } }
We’ve added a class for each of the colour variables – you could also add more if you wish. We have also styled the dismiss message, and the time bar. Now we can use the updated version of our flash message service., 'danger'); } }
You can supply any of the styles we added to the
show method, or if you supply none it will use the default of
secondary. If you test it out now, it should look something like this:
Summary
With a few relatively simple changes, we have greatly improved the experience that our flash message service provides to the user. We haven’t just created something that looks cooler, we’ve also put consideration into how those changes are actually going to create a better experience. We also now have a generic time bar component that we can use elsewhere in our applications! | https://www.joshmorony.com/improving-the-ui-ux-of-an-ionic-component/ | CC-MAIN-2020-10 | en | refinedweb |
Java Functional Programming
The term Java functional programming refers to functional programming in Java. Functional programming in Java has not been easy historically, and there were even several aspects of functional programming that were not even really possible in Java. In Java 8 Oracle made an effort to make functional programming easier, and this effort did succeed to some extent. In this Java functional programming tutorial I will go through the basics of functional programming, and what parts of it that are possible in Java.
Functional Programming Basics
Functional programming contains the following key concepts:
- Functions as first class objects
- Pure functions
- Higher order functions
Pure functional programming has a set of rules to follow too:
- No state
- No side effects
- Immutable variables
- Favour recursion over looping
These concepts and rules will be explained throughout the rest of this tutorial
Even if you do not follow all of these rules all the time, you can still benefit from the functional programming ideas in your applications. As you will see, functional programming is not the right tool for every problem out there. Especially the idea of "no side effects" makes it hard to e.g. write to a database (that is a side effect). You need to learn what problems functional programming is good at solving, and which it is not.
Functions as First Class Objects
In the functional programming paradigm, functions are first class objects in the language. That means that you can create an "instance" of a function, as have a variable reference that function instance, just like a reference to a String, Map or any other object. Functions can also be passed as parameters to other functions.
In Java, methods are not first class objects. The closest we get is Java Lambda Expressions. I will not cover Java lambda expressions here, as I have covered them in both text and video in my Java Lambda expression tutorial.
Pure Functions
A function is a pure function if:
- The execution of the function has no side effects.
- The return value of the function depends only on the input parameters passed to the function.
Here is an example of a pure function (method) in Java:
public class ObjectWithPureFunction{ public int sum(int a, int b) { return a + b; } }
Notice how the return value of the
sum() function only depends on the input parameters.
Notice also that the
sum() has no side effects, meaning it does not modify any state
(variables) outside the function anywhere.
Contrarily, here is an example of a non-pure function:
public class ObjectWithNonPureFunction{ private int value = 0; public int add(int nextValue) { this.value += nextValue; return this.value; } }
Notice how the method
add() uses a member variable to calculate its return value,
and it also modifies the state of the
value member variable, so it has a side effect.
Higher Order Functions
A function is a higher order function if at least one of the following conditions are met:
- The function takes one or more functions as parameters.
- The function returns another function as result.
In Java, the closest we can get to a higher order function is a function (method) that takes one or more lambda expressions as parameters, and returns another lambda expression. Here is an example of a higher order function in Java:
public class HigherOrderFunctionClass { public <T> IFactory<T> createFactory(IProducer<T> producer, IConfigurator<T> configurator) { return () -> { T instance = producer.produce(); configurator.configure(instance); return instance; } } }
Notice how the
createFactory() method returns a lambda expression as result. This is the first
condition of a higher order function.
Notice also that the
createFactory() method takes two instances as parameters which are both implementations
of interfaces (
IProducer and
IConfigurator). Java lambda expressions have to implement
a functional interface, remember?
Imagine the interfaces looks like this:
public interface IFactory<T> { T create(); }
public interface IProducer<T> { T produce(); }
public interface IConfigurator<T> { void configure(T t); }
As you can see, all of these interfaces are functional interfaces. Therefore they can be implemented by
Java lambda expressions - and therefore the
createFactory() method is a higher order function.
Higher order functions are also covered with different examples in the text about Higher Order Functions
No State
As mentioned in the beginning of this tutorial, a rule of the functional programming paradigm is to have no state. By "no state" is typically meant no state external to the function. A function may have local variables containing temporary state internally, but the function cannot reference any member variables of the class or object the function belongs to.
Here is an example of a function that uses no external state:
public class Calculator { public int sum(int a, int b) { return a + b; } }
Contrarily, here is an example of a function that uses external state:
public class Calculator { private int initVal = 5; public int sum(int a) { return initVal + a; } }
This function clearly violates the no state rule.
No Side Effects
Another rule in the functional programming paradigm is that of no side effects. This means, that a function cannot change any state outside of the function. Changing state outside of a function is referred to as a side effect.
State outside of a function refers both to member variables in the class or object the function, and member variables inside parameters to the functions, or state in external systems like file systems or databases.
Immutable Variables
A third rule in the functional programming paradigm is that of immutable variables. Immutable variables makes it easier to avoid side effects.
Favour Recursion Over Looping
A fourth rule in the functional programming paradigm is to favour recursion over looping. Recursion uses function calls to achieve looping, so the code becomes more functional.
Another alternative to loops is the Java Streams API. This API is functionally inspired.
Functional Interfaces
A functional interface in Java is an interface that only has one abstract method. By an abstract method is meant only one method which is not implemented. An interface can have multiple methods, e.g. default methods and static methods, both with implementations, but as long as the interface only has one method that is not implemented, the interface is considered a functional interface.
Here is an example of a functional interface:
public interface MyInterface { public void run(); }
Here is another example of a functional interface with a default method and a static method too:
public interface MyInterface2 { public void run(); public default void doIt() { System.out.println("doing it"); } public static void doItStatically() { System.out.println("doing it statically"); } }
Notice the two methods with implementations. This is still a functional interface, because only
run()
is not implemented (abstract). However, if there were more methods without implementation, the interface would
no longer be a functional interface, and could thus not be implemented by a Java lambda expression. | http://tutorials.jenkov.com/java-functional-programming/index.html | CC-MAIN-2020-10 | en | refinedweb |
How To Setup A Stripe Checkout Page From Scratch
Aman Mittal—
June 17, 2019
Crowdbotics App Builder platform has a lot to offer when it comes to building an application. It helps both developers and non-developers to build, deploy, and scale applications by providing maintainable templates to make either your web or mobile application. Current web technologies such as Django, Nodejs, React, as well as to build a mobile app, React Native, and Swift templates are all supported as templates.
In this tutorial, you are going to learn how to setup a React and Nodejs template using Crowdbotics platform. Using that template project, we will setup a Stripe Payments Checkout Page from scratch. Make sure you checkout the requirements section before proceeding with the rest of the tutorial.
Table of Contents
- Requirements
- Setting up a Web with Crowdbotics App Builder Platform
- Enable Test Mode in Stripe
- Setting up the server
- Creating a Stripe Route
- Build a Checkout Component
- Testing the Checkout Component
- Conclusion
Requirements
To follow this tutorial, you are required to have installed the following on your local machine:
- Nodejs
v8.x.xor higher installed along with npm/yarn as the package manager
- Postgresql app installed
- Crowdbotics App builder Platform account (preferably log in with your valid Github ID)
- Stripe Developer Account and API key Access
What are we building? Here is a short demo.
Setting up a Web with Crowdbotics App Builder Platform
To setup, a new project on Crowdbotics app builder platform, visit this link and create a new account. Once you have an individual account, access the app building platform with those credentials, and the dashboard screen will welcome you like below.
If this is your first time using Crowdbotics platform, you might not be having any projects as shown above. Click on the button Create New Application. You will are going to be directed to the following screen.
This screen lets you select a template to create an application. For our current requirement, we are going to build a web application that is based on Nodejs and Reactjs. Select the Nodejs template in the Web App, scroll down the bottom of the page and fill in the name
stripe-checkout-demo and click on the button Create App.
Once the project is setup by the platform, you will be redirected back to the dashboard screen, as shown below. This screen contains all the details related to the new application you are setting up right now.
The reason I told you earlier to login with your Github account is that you can directly manage Crowdbotics app with your Github account. In the above image, you can see that even in the free tier, there many basic functionalities provided by Crowdbotics. Once the Github project is created, you will be able to either download or clone that Github repository to your local development environment.
After you have cloned the repository, execute the commands below in the order, they are specified but first, navigate inside the project directory from the terminal window. Also, do not forget to rename the file
.env.example to
.env before you run below commands in the project directory.
# navigate inside the project directorycd stripe-checkout-demo-4738# install dependenciesnpm install# open postgresql.app first# even though we only require the database for user login# for non-mac userspsql -f failsafe.sql# for mac userspsql postgres -f failsafe.sql# to run the applicationnpm start
The Crowdbotics scaffolded Nodejs project uses a custom a webpack server configuration to bootstrap the web app. Visit from a browser window to see the application in action.
Create a new account if you want and login in the app as a user, you will get a success toast alert at the bottom of the screen.
This section completes, how to setup a Nodejs and React app with Crowdbotics.
Enable Test Mode in Stripe
Before you start with the rest of this tutorial, please make sure you have a Stripe account. Login into the account and go to the dashboard window. From the left sidebar menu, make sure you have enabled the test mode like below.
In Stripe, you have access to two modes. Live and test. When in test mode, you will only see payments that were from the test application (like the app we are going to build in this tutorial). The developer menu gives you access to API keys that required to create the test application. These two types of API keys are:
- Publishable Key: used on the frontend (React client side of the application).
- Secret Key: used on the backend to enable charges (Nodejs side of the application).
Also, note that these API key changes when you change modes between live and test.
Setting up the server
To start building the server application, all we need are the following packages.
express
body-parser
cors
stripe
The first,
express and
body-parser are already available with current Crowdbotics generated the project. For more information, what
npm dependencies this project comes with, go through
package.json file. We need to install the other two. Make sure you run the following command at the root of your project.
npm install -S cors stripe
At the server side, we are going to create a RESTful API endpoint. The package
stripe will help us communicate with the Stripe Payment API. The
cors package in scenarios where your server and front-end are not sharing the same origins.
Inside
server/config folder create a new file called
stripe.js. This file will hold the configuration and secret key for the Stripe API. In general terms, this file will help to enable the configuration between the server side part of the application and the Stripe API itself.
const configureStripe = require('stripe')const STRIPE_SECRET_KEY = 'sk_text_XXXX'const stripe = configureStripe(STRIPE_SECRET_KEY)module.exports = stripe
In the above snippet, just replace the
sk_text_XXXX with your secret key. Lastly, to make the server work to add the following snippet of code by replacing the default middleware function as shown below. Open
app.js in the root of your project directory.
// ...app.use(bodyParser.json())app.use(bodyParser.urlencoded({extended: true}))// ...
Add this line of code will help to parse the incoming body with an HTTP request. The incoming body will contain the values like token, the amount, and so on. We do not have to get into details here since we are going to take a look at the Stripe dashboard which logs everything for us. But we will do this later after the frontend part is working.
Creating a Stripe Route
The second missing part in the backend of our application is route configuration for the payments to happen. First, create a new file called
payment.js inside
routes folder. Then, add the following snippet of code to it.
const stripe = require('../server/config/stripe')const stripeCharge = res => (stripeErr, stripeRes) => {if (stripeErr) {res.status(500).send({ error: stripeErr })} else {res.status(200).send({ success: stripeRes })}}const paymentAPI = app => {app.get('/', (req, res) => {res.send({message: 'Stripe Checkout server!',timestamp: new Date().toISOString})})app.post('/', (req, res) => {stripe.charges.create(req.body, stripeCharge(res))})return app}module.exports = paymentAPI
In the above snippet, we start by importing stripe instance that is configured with the secret key. Then, define a function that has a callback with one argument
res called
stripeCharge. This function is responsible for handling any incoming post request that will come from the client side when the user makes an official payment using Stripe API. The incoming request contains a payload of user's card information, the amount of the payment, and so on. This function further associates to a callback that runs only when the request to charge the user either fails or succeeds.
The post route uses this function with the argument
res. Next, inside the already existing
index.js file, import the
paymentAPI as shown below.
var express = require('express')var router = express.Router()var path = require('path')var VIEWS_DIR = path.resolve(__dirname, '../client/public/views')// import thisconst paymentAPI = require('./payment')module.exports = function(app) {// API Routesapp.use('/api/user', require(path.resolve(__dirname, './api/v1/user.js')))/* GET home page. */app.route('/*').get(function(req, res) {res.sendFile(path.join(VIEWS_DIR, '/index.html'))})// after all other routes, add thispaymentAPI(app)}
The configuration part required to make the backend work is done.
Build a Checkout Component
In this section, let us build a checkout component that will handle the communication by sending payment requests to the server as well as represent a UI on the client side of the application. Before you proceed, make sure you have installed the following dependencies that will help to build this checkout component. Go the terminal window, and execute the following command.
npm install -S axios react-stripe-checkout
axios is a promised based library that helps you make AJAX requests from the browser on the frontend side. This library is going to be used to make the payment request to the backend.
react-stipe-checkout is a ready-to-use UI component to capture a user's information at the time of the payment. The gathered information here which include user's card number and other details is then sent back to the backend.
Now, create a new component file called
client/app/components/. Add the following code to that file.
import React from 'react'import axios from 'axios'import StripeCheckout from 'react-stripe-checkout'const STRIPE_PUBLISHABLE = 'XXXX'const PAYMENT_SERVER_URL = ''const CURRENCY = 'USD'const successPayment = data => {alert('Payment Successful')console.log(data)}const errorPayment = data => {alert('Payment Error')console.log(data)}const onToken = (amount, description) => token =>axios.post(PAYMENT_SERVER_URL, {description,source: token.id,currency: CURRENCY,amount: amount}).then(successPayment).catch(errorPayment)const Checkout = ({ name, description, amount }) => (<StripeCheckoutname={name}description={description}amount={amount}token={onToken(amount, description)}currency={CURRENCY}stripeKey={STRIPE_PUBLISHABLE}/>)export default Checkout
In the above snippet, we import the required components from different libraries, but the most notable is
StripeCheckout. This is a UI component that
react-stripe-checkout consist. It accepts props such as
amount,
token,
currency and most importantly the
stripeKey. This stripe key is different from the one we used in the server side part of the application. The in the above snippet
STRIPE_PUBLISHABLE is the publishable key provided by the Stripe Payment API. This type of key used on the client side of an application irrespective of the framework you are using to build one.
You are also required to declare a
PAYMENT_SERVER_URL on which
axios will make a post request with different user information on the checkout. The methods
successPayment and
errorPayment are for testing purposes to see if things work the way we want them. The
token prop is an important one. It has its own method
onToken inside which the payment request is made. It gets triggered in both the cases whether the payment is successful or not.
react-stripe-checkout library creates this token on every request.
Testing the Checkout Component
The last piece of the puzzle is to make this whole application work is to import the Checkout component inside
App.jsx and the following snippet.
// ... after other importsimport Checkout from './Checkout.jsx'// ...render() {console.log(this.state.isLoading);return (<div><NavBar {...this.state} /><main className="site-content"><Checkout name='Crowdbotics' description='Stripe Checkout Example' amount={1000} /><div className="wrap container-fluid">{this.state.isLoading ? "Loading..." : this.props.children && React.cloneElement(this.props.children, this.state)}</div></main><Footer {...this.state} /><MainSnackbar {...this.state} /></div>);}
Once you have added the snippet and modified the render function as shown, go back to the terminal window and run the command
npm start. Visit the URL from your browser window, and you will notice that there is a new button, as shown below. Notice the
amount prop. The value of
1000 here represents only
$10.00. Fun fact, to make a valid stripe payment, you least amount required is more than 50 cents in American dollars.
Now, click on the the button Pay With Card and enter the following test values.
- Card number: 4242 4242 4242 4242 (Visa)
- Date: a future date
- CVC: a random combination of three numbers
On completion, when hit the pay button, there will be an alert message whether the payment was successful or not. See the below demo.
If you go to the Stripe Dashboard screen, in the below screen, you can easily notice the amount of activity logged.
There are proper logs generated with accurate information coming this web application.
Conclusion
This completes the step-by-step guide to integrate Stripe Payment API in a web application built using Reactjs and Nodejs. In this tutorial, even though we used Crowdbotics generated project to focus more on the topic rather than building a complete fullstack application from scratch. You can easily use the code snippets and knowledge gained in your own use cases.
Originally published at Crowdbotics | https://amanhimself.dev/how-to-setup-a-stripe-checkout-page-in-a-crowdbotics-app/ | CC-MAIN-2020-10 | en | refinedweb |
Returns whether the selectable is currently 'highlighted' or not.
Use this to check if the selectable UI element is currently highlighted.
//Create a UI element. To do this go to Create>UI and select from the list. Attach this script to the UI GameObject to see this script working. The script also works with non-UI elements, but highlighting works better with UI.
using UnityEngine; using UnityEngine.Events; using UnityEngine.EventSystems; using UnityEngine.UI;
//Use the Selectable class as a base class to access the IsHighlighted method public class Example : Selectable { //Use this to check what Events are happening BaseEventData m_BaseEvent;
void Update() { //Check if the GameObject is being highlighted if (IsHighlighted(m_BaseEvent) == true) { //Output that the GameObject was highlighted, or do something else Debug.Log("Selectable is Highlighted"); } } } | https://docs.unity3d.com/kr/2018.1/ScriptReference/UI.Selectable.IsHighlighted.html | CC-MAIN-2020-16 | en | refinedweb |
- ×
HTTP mocking and expectations library
Filed under development aidsShow All.
Table of Contents
- How does it work?
- Install
- Usage
- READ THIS! - About interceptors
- Specifying hostname
- Specifying path
- Specifying request body
- Specifying request query string
- Specifying replies
- Specifying headers
- HTTP Verbs
- Support for HTTP and HTTPS
- Non-standard ports
- Repeat response n times
- Delay the response body
- Delay the response
- Delay the connection
- Socket timeout
- Chaining
- Scope filtering
- Conditional?
Nock works by overriding Node's
http.requestfunction. Also, it overrides
http.ClientRequesttoo to cover for modules that use it directly.
Install
$ npm install --save-dev nock
Node version support
The latest version of nock supports all currently maintained Node versions, see Node Release Schedule
Here is a list of past nock versions with respective node version support
Usage
On your test, you can setup your mocking object like this:
const nock = require('nock') const scope = nock('') .get('/repos/atom/atom/license') .reply(200, { license: { key: 'mit', name: 'MIT License', spdx_id: 'MIT', url: '', node_id: 'MDc6TGljZW5zZTEz', }, })
This setup says that we will intercept every HTTP call to.
It will intercept an HTTPS GET request to
/repos/atom/atom/license, reply with a status 200, and the body will contain a (partial) response in JSON.
READ THIS! - About interceptors
When you setup an interceptor for a URL and that interceptor is used, it is removed from the interceptor list. This means that you can intercept 2 or more calls to the same URL and return different things on each of them. It also means that you must setup one interceptor for each request you are going to have, otherwise nock will throw an error because that URL was not present in the interceptor list. If you don’t want interceptors to be removed as they are used, you can use the .persist() method.
Specifying hostname
The request hostname can be a string or a RegExp.
const scope = nock('') .get('/resource') .reply(200, 'domain matched')
const scope = nock(/example\.com/) .get('/resource') .reply(200, 'domain regex matched')
Note: You can choose to include or not the protocol in the hostname matching.
Specifying path
The request path can be a string, a RegExp or a filter function and you can use any HTTP verb.
Using a string:
const scope = nock('') .get('/resource') .reply(200, 'path matched')
Using a regular expression:
const scope = nock('') .get(/source$/) .reply(200, 'path using regex matched')
Using a function:
const scope = nock('') .get(uri => uri.includes('cats')) .reply(200, 'path using function matched')
Specifying request body
You can specify the request body to be matched as the second argument to the
get,
putor
deletespecifications. There are five types of second argument allowed:
String: nock will exact match the stringified request body with the provided string', body => string
Nock understands query strings. Search parameters can be included as part of the path:
nock('') .get('/users?foo=bar') .reply(200)' }] })
A
URLSearchParamsinstance can be provided.
const params = new URLSearchParams({ foo: 'bar' }) nock('') .get('/') .query(params) .reply(200)
Nock supports passing a function to query. The function determines if the actual query matches or not.
nock('') .get('/users') .query' }] })
A query string that is already URL encoded can be matched by passing the
encodedQueryParamsflag in the options when creating the Scope.
nock('', { encodedQueryParams: true }) .get('/users') .query('foo%5Bbar%5D%3Dhello%20world%21') .reply(200, { results: [{ id: 'pgte' }] })
Specifying replies
You can specify the return status code for a path on the first argument of reply like this:
const scope = nock('') .get('/users/1') .reply(404)
You can also specify the reply body as a string:
const scope = nock('') .get('/') .reply(200, 'Hello from Google!')
or as a JSON-encoded object:
const scope = nock('') .get('/') .reply(200, { username: 'pgte', email: '[email protected]', _id: '4324243fsd', })
or even as a file:
const scope = nock('') .get('/') .replyWithFile(200, __dirname + '/replies/user.json', { 'Content-Type': 'application/json', })
Instead of an object or a buffer you can also pass in a callback to be evaluated for the value of the response body:
const scope = nock('') .post('/echo') .reply(201, (uri, requestBody) => requestBody) = nock('') .post('/echo') .reply(201, (uri, requestBody, cb) => { fs.readFile('cat-poems.txt', cb) // Error-first callback }) = nock('') .post('/echo') .reply((uri, requestBody) => { return [ 201, 'THIS IS THE REPLY BODY', { header: 'value' }, // optional headers ] })
or, use an error-first callback that also gets the status code:
const scope = nock('') .post('/echo') .reply((uri, requestBody, cb) => { setTimeout(() => cb(null, [201, 'THIS IS THE REPLY BODY']), 1000) })
A Stream works too:
const scope = nock('') .get('/cat-poems') .reply(200, (uri, requestBody) => { return fs.createReadStream('cat-poems.txt') })
Access original request and headers
If you're using the reply callback style, you can access the original client request using
this.reqlike this:
const scope = nock('') .get('/cat-poems') .reply(function(uri, requestBody) { console.log('path:', this.req.path) console.log('headers:', this.req.headers) // ... })
Note: Remember to use normal
functionin that case, as arrow functions are using enclosing scope for
thisbinding.
Replying with errors
You can reply with an error like this:
nock('') .get('/cat-poems') .replyWithError('something awful happened')
JSON error responses are allowed too:
nock('') .get('/cat-poems') .replyWithError({ message: 'something awful happened', code: 'AWFUL_ERROR', })
Note: This will emit an
errorevent on the
requestobject, not the reply.
Specifying headers
Header field names are case-insensitive
Per HTTP/1.1 4.2 Message Headers specification, all message headers are case insensitive and thus internally Nock uses lower-case for all field names even if some other combination of cases was specified either in mocking specification or in mocked requests themselves.
Specifying Request Headers
You can specify the request headers like this:
const scope = nock('', { reqheaders: { authorization: 'Basic Auth', }, }) .get('/') .reply(200)
Or you can use a regular expression or function to check the header values. The function will be passed the header value.
const scope = nock('', { reqheaders: { 'X-My-Headers': headerValue => headerValue.includes('cats'), 'X-My-Awesome-Header': /Awesome/i, }, }) .get('/') .reply(200)
If
reqheadersis not specified or if
hostis not part of it, Nock will automatically add
hostvalue to request header.
If no request headers are specified for mocking then Nock will automatically skip matching of request headers. Since the
hostheader is a special case which may get automatically inserted by Nock, its matching is skipped unless it was also specified in the request being mocked.
You can also have Nock fail the request if certain headers are present:
const scope = nock('', { badheaders: ['cookie', 'x-forwarded-for'], }) .get('/') .reply(200)
When invoked with this option, Nock will not match the request if any of the
badheadersare present.
Basic authentication can be specified as follows:
const scope = nock('') .get('/') .basicAuth({ user: 'john', pass: 'doe' }) .reply(200)
Specifying Reply Headers
You can specify the reply headers like this:
const scope = nock('') .get('/repos/atom/atom/license') .reply(200, { license: 'MIT' }, { 'X-RateLimit-Remaining': 4999 })
Or you can use a function to generate the headers values. The function will be passed the request, response, and response body (if available). The body will be either a buffer, a stream, or undefined.
const scope = nock('') .get('/') .reply(200, 'Hello World!', { 'Content-Length': (req, res, body) => body.length, ETag: () => `${Date.now()}`, })
Default Reply Headers
You can also specify default reply headers for all responses like this:
const scope = nock('') .defaultReplyHeaders({ 'X-Powered-By': 'Rails', 'Content-Type': 'application/json', }) .get('/') .reply(200, 'The default headers should come too')
Or you can use a function to generate the default headers values:
const scope = nock('') .defaultReplyHeaders({ 'Content-Length': (req, res, body) => body.length, }) .get('/') .reply(200, 'The default headers should come too')
Including Content-Length Header Automatically
When using
scope.reply()to set a response body manually, you can have the
Content-Lengthheader calculated automatically.
const scope = nock('') .replyContentLength() .get('/') .reply(200, { hello: 'world' })
NOTE: this does not work with streams or other advanced means of specifying the reply body.
Including Date Header Automatically
You can automatically append a
Dateheader to your mock reply:
const scope = nock('') .replyDate() .get('/') .reply(200, { hello: 'world' })
Or provide your own
Dateobject:
const scope = nock('') .replyDate(new Date(2015, 0, 1)) .get('/') .reply(200, { hello: 'world' })
HTTP Verbs
Nock supports any HTTP verb, and it has convenience methods for the GET, POST, PUT, HEAD, DELETE, PATCH, OPTIONS and MERGE HTTP verbs.
You can intercept any HTTP verb using
.intercept(path, verb [, requestBody [, options]]):
const scope = nock('') .intercept('/path', 'PATCH') .reply(304)
Support for HTTP and HTTPS
By default nock assumes HTTP. If you need to use HTTPS you can specify the like this:
const scope = nock('') // ...
Non-standard ports
You are able to specify a non-standard port like this:
const scope = nock('') ...
Repeat')
To repeat this response for as long as nock is active, use .persist().
Delay
delayConnection(1000)is equivalent to
delay({ head: 1000 }).
Socket timeout
You are able to specify the number of milliseconds that your connection should be idle, to simulate a socket timeout.
nock('') .get('/') .socketDelay(2000) // 2 seconds .reply(200, '<html></html>')
To test a request like the following:
req = http.request('', res => { ... }) req.setTimeout(1000, () => { req.abort() }) req.end()
NOTE: the timeout will be fired immediately, and will not leave the simulated connection idle for the specified period of time.
Chaining
You can chain behaviour like this:', })
Scope filtering
You can filter the scope (protocol, domain or port) of nock through a function. The filtering function is accepted at the
filteringScopefield of the
optionsargument.
This can be useful if you have a node module that randomly changes subdomains to which it sends requests, e.g., the Dropbox node module behaves like this.
const scope = nock('', { filteringScope: scope => /^https:\/\/api[0-9]*.dropbox.com/.test(scope), }) .get('/1/metadata/auto/Photos?include_deleted=false&list=true') .reply(200)
Conditional scope filtering
You can also choose to filter out a scope based on your system environment (or any external factor). The filtering function is accepted at the
conditionallyfield of the
optionsargument.
This can be useful if you only want certain scopes to apply depending on how your tests are executed.
const scope = nock('', { conditionally: () => true, })
Path filtering
You can also filter the URLs based on a function.
This can be useful, for instance, if you have random or time-dependent data in your URL.
You can use a regexp for replacement, just like String.prototype.replace:
const scope = nock('') .filteringPath(/password=[^&]*/g, 'password=XXX') .get('/users/1?password=XXX') .reply(200, 'user')
Or you can use a function:
const scope = nock('') .filteringPath(path => '/ABC') .get('/ABC') .reply(200, 'user')
Note that
scope.filteringPathis not cumulative: it should only be used once per scope.
Request Body filtering
You can also filter the request body based on a function.
This can be useful, for instance, if you have random or time-dependent data in your request body.
You can use a regexp for replacement, just like String.prototype.replace:
const scope = nock('') .filteringRequestBody(/password=[^&]*/g, 'password=XXX') .post('/users/1', 'data=ABC&password=XXX') .reply(201, 'OK')
Or you can use a function to transform the body:
const scope = nock('') .filteringRequestBody(body => 'ABC') .post('/', 'ABC') .reply(201, 'OK')
If you don't want to match the request body you should omit the
bodyargument from the method function:
const scope = nock('') .post('/some_uri') // no body argument .reply(200, 'OK')
Request Headers Matching
If you need to match requests only if certain request headers match, you can.
const scope = nock('') .matchHeader('accept', 'application/json') .get('/') .reply(200, { data: 'hello world', })
You can also use a regexp for the header body.
const scope = nock('') .matchHeader('User-Agent', /Mozilla\/.*/) .get('/') .reply(200, { data: 'hello world', })
You can also use a function for the header body.
const scope = nock('') .matchHeader('content-length', val => val >= 1000) .get('/') .reply(200, { data: 'hello world', })
Optional Requests
By default every mocked request is expected to be made exactly once, and until it is it'll appear in
scope.pendingMocks(), and
scope.isDone()will return false (see expectations). In many cases this is fine, but in some (especially cross-test setup code) it's useful to be able to mock a request that may or may not happen. You can do this with
optionally(). Optional requests are consumed just like normal ones once matched, but they do not appear in
pendingMocks(), and
isDone()will return true for scopes with only optional requests pending.. const getMock = optional => example .get('/pathC') .optionally(optional) .reply(200) getMock(true) example.pendingMocks() // [] getMock(false) example.pendingMocks() // ["GET"]
Allow unmocked requests on a mocked hostname
If you need some request on the same host name to be mocked and some others to really go through the HTTP stack, you can use the
allowUnmockedoption like this:
const scope = nock('', { allowUnmocked: true }) .get('/my/url') .reply(200, 'OK!') // GET /my/url => goes through nock // GET /other/url => actually makes request to the server
Note: When applying
{allowUnmocked: true}, if the request is made to the real server, no interceptor is removed.
Expectations
Every time an HTTP request is performed for a scope that is mocked, Nock expects to find a handler for it. If it doesn't, it will throw an error.
Calls to nock() return a scope which you can assert by calling
scope.done(). This will assert that all specified calls on that scope were performed.
Example:
const scope = nock('') .get('/') .reply(200, 'Hello from Google!') // do some stuff setTimeout(() => { // Will throw an assertion error if meanwhile a "GET" was // not performed. scope.done() }, 5000)
.isDone()
You can call
isDone()on a single expectation to determine if the expectation was met:
const scope = nock('') .get('/') .reply(200) scope.isDone() // will return false
It is also available in the global scope, which will determine if all expectations have been met:
nock.isDone()
.cleanAll()
You can cleanup all the prepared mocks (could be useful to cleanup some state after a failed test) like this:
nock.cleanAll()
.abortPendingRequests()
You can abort all current pending request like this:
nock.abortPendingRequests()
.persist()
You can make all the interceptors for a scope persist by calling
.persist()on it:
const scope = nock('') .persist() .get('/') .reply(200, 'Persisting all the way')
Note that while a persisted scope will always intercept the requests, it is considered "done" after the first interception.
If you want to stop persisting an individual persisted mock you can call
persist(false):
const scope = nock('') .persist() .get('/') .reply(200, 'ok') // Do some tests ... scope.persist(false)
You can also use
nock.cleanAll()which removes all mocks, including persistent mocks.
To specify an exact number of times that nock should repeat the response, use .times().
())
.activeMocks()
You can see every mock that is currently active (i.e. might potentially reply to requests) in a scope using
scope.activeMocks(). A mock is active if it is pending, optional but not yet completed, or persisted. Mocks that have intercepted their requests and are no longer doing anything are the only mocks which won't appear here.
You probably don't need to use this - it mainly exists as a mechanism to recreate the previous (now-changed) behavior of
pendingMocks().
console.error('active mocks: %j', scope.activeMocks())
It is also available in the global scope:
console.error('active mocks: %j', nock.activeMocks())
.isActive()
Your tests may sometimes want to deactivate the nock interceptor. Once deactivated, nock needs to be re-activated to work. You can check if nock interceptor is active or not by using
nock.isActive(). Sample:
if (!nock.isActive()) { nock.activate() }
Logging
Nock can log matches if you pass in a log function like this:
const scope = nock('') .log(console.log) ...
Restoring
You can restore the HTTP interceptor to the normal unmocked behaviour by calling:
nock.restore()
note 1: restore does not clear the interceptor list. Use nock.cleanAll() if you expect the interceptor list to be empty.
note 2: restore will also remove the http interceptor itself. You need to run nock.activate() to re-activate the http interceptor. Without re-activation, nock will not intercept any calls.
Activating
Only for cases where nock has been deactivated using nock.restore(), you can reactivate the HTTP interceptor to start intercepting HTTP calls using:
nock.activate()
note: To check if nock HTTP interceptor is active or inactive, use nock.isActive().
Turning Nock Off (experimental!)
You can bypass Nock completely by setting the
NOCK_OFFenvironment variable to
"true".
This way you can have your tests hit the real servers just by switching on this environment variable.
$ NOCK_OFF=true node my_test.js
Enable/Disable real HTTP requests
By default, any requests made to a host that is not mocked will be executed normally. If you want to block these requests, nock allows you to do so.
Disabling requests
For disabling real http requests.
nock.disableNetConnect()
So, if you try to request any host not 'nocked', it will throw a
NetConnectNotAllowedError.
nock.disableNetConnect() const req = http.get('') req.on('error', err => { console.log(err) }) // The returned `http.ClientRequest` will emit an error event (or throw if you're not listening for it) // This code will log a NetConnectNotAllowedError with message: // Nock: Disallowed net connect for "google.com:80"
Enabling requests
For enabling any real HTTP requests (the default behavior):
nock.enableNetConnect()
You could allow real HTTP requests for certain host names by providing a string or a regular expression for the hostname, or a function that accepts the hostname and returns true or false:
// Using a string nock.enableNetConnect('amazon.com') // Or a RegExp nock.enableNetConnect(/(amazon|github)\.com/) // Or a Function nock.enableNetConnect( host => host.includes('amazon.com') || host.includes(
When you're done with the test, you probably want to set everything back to normal:
nock.cleanAll() nock.enableNetConnect()
Recording
This is a cool feature:
Guessing what the HTTP calls are is a mess, especially if you are introducing nock on your already-coded tests.
For these cases where you want to mock an existing live system you can record and playback the HTTP calls like this:
nock.recorder.rec() // Some HTTP calls happen and the nock code necessary to mock // those calls will be outputted to console
Recording relies on intercepting real requests and responses and then persisting them for later use.
In order to stop recording you should call
nock.restore()and recording will stop.
ATTENTION!: when recording is enabled, nock does no validation, nor will any mocks be enabled. Please be sure to turn off recording before attempting to use any mocks in your tests.
dont_printoption
If you just want to capture the generated code into a var as an array you can use:
nock.recorder.rec({ dont_print: true, }) // ... some HTTP calls const nockCalls = nock.recorder.play()
The
nockCallsvar will contain an array of strings representing the generated code you need.
Copy and paste that code into your tests, customize at will, and you're done! You can call
nock.recorder.clear()to remove already recorded calls from the array that
nock.recorder.play()returns.
(Remember that you should do this one test at a time).
output_objectsoption
In case you want to generate the code yourself or use the test data in some other way, you can pass the
output_objectsoption to
rec:
nock.recorder.rec({ output_objects: true, }) // ... some HTTP calls const nockCallObjects = nock.recorder.play()
The returned call objects have the following properties:
scope- the scope of the call including the protocol and non-standard ports (e.g.
'')
method- the HTTP verb of the call (e.g.
'GET')
path- the path of the call (e.g.
'/pgte/nock')
body- the body of the call, if any
status- the HTTP status of the reply (e.g.
200)
response- the body of the reply which can be a JSON, string, hex string representing binary buffers or an array of such hex strings (when handling
content-encodedin reply header)
headers- the headers of the reply
reqheader- the headers of the request
If you save this as a JSON file, you can load them directly through
nock.load(path). Then you can post-process them before using them in the tests. For example, to add request body filtering (shown here fixing timestamps to match the ones captured during recording):
nocks = nock.load(pathToJson) nocks.forEach(function(nock) { nock, function(match, key, value) { return key + ':' + recordedTimestamp }) } else { return body } } })
Alternatively, if you need to pre-process the captured nock definitions before using them (e.g. to add scope filtering) then you can use
nock.loadDefs(path)and
nock.define(nockDefs). Shown here is scope filtering for Dropbox node module which constantly changes the subdomain to which it sends the requests:
// Pre-process the nock definitions as scope filtering has to be defined before the nocks are defined (due to its very hacky nature). const nockDefs = nock.loadDefs(pathToJson) nockDefs.forEach(def => { // Do something with the definition object e.g. scope filtering. def.options = { ...def.options, filteringScope: scope => /^https:\/\/api[0-9]*.dropbox.com/.test(scope), } }) // Load the nocks from pre-processed definitions. const nocks = nock.define(nockDefs)
enable_reqheaders_recordingoptionin
recorder.rec()options.
nock.recorder.rec({ dont_print: true, output_objects: true, enable_reqheaders_recording: true, })
Note that even when request headers recording is enabled Nock will never record
user-agentheaders.
user-agentvalues change with the version of Node and underlying operating system and are thus useless for matching as all that they can indicate is that the user agent isn't the one that was used to record the tests.
loggingoption
Nock will print using
console.logby default (assuming that
dont_printis
false). If a different function is passed into
logging, nock will send the log string (or object, when using
output_objects) to that function. Here's a basic example.
const appendLogToFile = content => { fs.appendFile('record.txt', content) } nock.recorder.rec({ logging: appendLogToFile, })
use_separatoroption
By default, nock will wrap its output with the separator string
<<<<<<-- cut here -->>>>>>before and after anything it prints, whether to the console or a custom log function given with the
loggingoption.
To disable this, set
use_separatorto false.
nock.recorder.rec({ use_separator: false, })
.removeInterceptor()
This allows removing a specific interceptor. This can be either an interceptor instance or options for a url. It's useful when there's a list of common interceptors shared between tests, where an individual test requires one of the shared interceptors to behave differently.
Examples:
nock.removeInterceptor({ hostname: 'localhost', path: '/mockedResource', })
nock.removeInterceptor({ hostname : 'localhost', path : '/login' method: 'POST' proto : 'https' })
const interceptor = nock('').get('somePath') nock.removeInterceptor(interceptor)
Events
A scope emits the following events:
emit('request', function(req, interceptor, body))
emit('replied', function(req, interceptor))
Global no match event
You can also listen for no match events like this:
nock.emitter.on('no match', req => {})
Nock Back
Fixture recording support and playback.
Setup
You must specify a fixture directory before using, for example:
In your test helper
const nockBack = require('nock').back nockBack.fixtures = '/path/to/fixtures/' nockBack.setMode('record')
Options
nockBack.fixtures: path to fixture directory
nockBack.setMode(): the mode to use
Usage
By default if the fixture doesn't exist, a
nockBackwill create a new fixture and save the recorded output for you. The next time you run the test, if the fixture exists, it will be loaded in.
The
thiscontext of the callback function will have a property
scopesto access all of the loaded nock scopes.
const nockBack = require('nock').back const request = require('request') nockBack.setMode('record') nockBack.fixtures = __dirname + '/nockFixtures' //this only needs to be set once in your test helper // recording of the fixture nockBack('zomboFixture.json', nockDone => { request.get('', }) }) })
If your tests are using promises then use
nockBacklike this:
return nockBack('promisedFixture.json') .then(({ nockDone, context }) => { // do your tests returning a promise and chain it with // `.then(nockDone)` }) })
Options
As an optional second parameter you can pass the following options
before: a preprocessing function, gets called before nock.define
after: a postprocessing function, gets called after nock.define
afterRecord: a postprocessing function, gets called after recording. Is passed the array of scopes recorded and should return the intact array, a modified version of the array, or if custom formatting is desired, a stringified version of the array to save to the fixture
recorder: custom options to pass to the recorder
Example
function prepareScope(scope) { scope, (match, key, value) => `${key}:${recordedTimestamp}` ) } else { return body } } } nockBack('zomboFixture.json', { before: prepareScope }, nockDone => { request.get('', function(err, res, body) { // do your tests nockDone() } }
Modes
To set the mode call
nockBack.setMode(mode)or run the tests with the
NOCK_BACK_MODEenvironment variable set before loading nock. If the mode needs to be changed programmatically, the following is valid:
nockBack.setMode(nockBack.currentMode)
wild: all requests go out to the internet, don't replay anything, doesn't record anything
dryrun: The default, use recorded nocks, allow http calls, doesn't record anything, useful for writing new tests
record: use recorded nocks, record new nocks
lockdown: use recorded nocks, disables all http calls even when not nocked, doesn't record
Commoninvocations will disable retrying, e.g.:
await got("", { retry: 0 })
If you need to do this in all your tests, you can create a module
got_client.jswhich exports a custom got instance:
const got = require('got') module.exports = got.extend({ retry: 0 })
This is how it's handled in Nock itself (see #1523).
Axios
To use Nock with Axios, you may need to configure Axios to use the Node adapter as in the example below:
import axios from 'axios' import nock from 'nock' import test from 'ava' // You can use any test framework. // If you are using jsdom, axios will default to using the XHR adapter which // can't be intercepted by nock. So, configure axios to use the node adapter. // // References: // // axios.defaults.adapter = require('axios/lib/adapters/http') test('can fetch test response', async t => { // Set up the mock request. const scope = nock('') .get('/test') .reply(200, 'test response') // Make the request. Note that the hostname must match exactly what is passed // to `nock()`. Alternatively you can set `axios.defaults.host = ''` // and run `axios.get('/test')`. await axios.get('') // Assert that the expected request was made. scope.done() })
Debugging
Nock uses
debug, so just run with environmental variable
DEBUGset to
nock.*.
$ DEBUG=nock.* node my_test.js
Contributing
Thanks for wanting to contribute! Take a look at our Contributing Guide for notes on our commit message conventions and how to run tests.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Contributors
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!
Sponsors
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]
License | https://www.javascripting.com/view/nock | CC-MAIN-2020-16 | en | refinedweb |
Answered
Hi,
i use one library - with name Joost for streaming xml transformations and it allows scripting.
The config for this library looks like XSLT, but has it's own name spaces.
And the scripting tag has usual name <script>, but the name space is joost:
<?xml version="1.0"?>
<!-- test for recursive stx:process-siblings -->
<stx:transform xmlns:stx=""
xmlns:
<joost:script
// some script
</joost:script>
</stx:transform>
And there is dilemma - i can remove the namespace - but specifying it like this:
xmlns=""
But this breaks output, however allowing me to use Javascript code completion.
Or i can use it as is is supposed to be - but Idea treats content as usual text - no JS completion.
Is there any way to specify idea that this is JS script - ignoring the namespace of the tag?
You can add this tag to language injection settings, just like the default one:
Thanks,
That's amazing! | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206608469-XHTML-file-type-script-tag-namespace | CC-MAIN-2020-16 | en | refinedweb |
From: Guillaume Melquiond (gmelquio_at_[hidden])
Date: 2003-05-18 10:30:57
According to the paragraph 3.7.3.1-3 of the Standard, an 'operator new'
can return a null pointer or throw an exception to report a failed
allocation; but it isn't allowed to adopt the two behaviors.
Unfortunately, it's exactly what 'stateless_integer_add' does for the sake
of avoiding warnings; and gcc complains about it. So the line
"return 0; // suppress warnings is wrong." doesn't do what it's supposed
to do: it doesn't suppress warnings. Here is patch that returns another
pointer so that gcc doesn't complain. I'm not sure it's the best way to
choose a non-null pointer, but at least gcc doesn't complain anymore, and
the compilers that needed a return statement still have it.
Index: libs/function/test/stateless_test.cpp
===================================================================
RCS file: /cvsroot/boost/boost/libs/function/test/stateless_test.cpp,v
retrieving revision 1.6
diff -u -r1.6 stateless_test.cpp
--- libs/function/test/stateless_test.cpp 30 Jan 2003 14:25:00 -0000 1.6
+++ libs/function/test/stateless_test.cpp 18 May 2003 15:25:01 -0000
@@ -24,7 +24,7 @@
void* operator new(std::size_t, stateless_integer_add*)
{
throw std::runtime_error("Cannot allocate a stateless_integer_add");
- return 0; // suppress warnings
+ return (void*)1; // suppress warnings
}
void operator delete(void*, stateless_integer_add*) throw()
Regards,
Guillaume
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2003/05/47941.php | CC-MAIN-2020-16 | en | refinedweb |
Testing Kotlin With Spock (Part 1): Object
Testing Kotlin With Spock (Part 1): Object
Like using Spock for your tests? See how you can use it to test Kotlin code. This introduction focuses on making tests work with Kotlin's object keyword.
Join the DZone community and get the full member experience.Join For Free
The
object keyword in Kotlin creates a singleton in a very convenient way. It can be used, for example, as a state of an operation. Spock is one of the most expressive and readable test frameworks available in the Java ecosystem. Let's see how Kotlin's
object can be used in Spock tests.
What Do We Want to Test?
We have a single method,
validate, in our
Validator interface that: The: The Naive Approach
We need instances instead, so we modify the test a little:
def 'should validate age #age'() { expect: sut.validate(age) == result where: age | result 0 | new Error() 17 | new Error() 18 | new Ok() 19 | new Ok() }
And again, this one fails as well. Why? Because the
Error and
Ok classes do not have overridden
equals methods. But why? We expect Kotlin objects (those created with the
object keyword, not plain objects) to have it implemented correctly. What is more, it works correctly in Kotlin:
fun isOk(status:ValidationStatus) = status == Ok
Third: The a comparison, then we should access the class static property
INSTANCE:
def 'should validate age #age'() { expect: sut.validate(age) == result where: age | result 0 | Error.INSTANCE 17 | Error.INSTANCE 18 | Ok.INSTANCE 19 | Ok.INSTANCE }
Now the test passes.
Fourth: An Alternative Approach
We can also check the method result without a }
Show me the Code
The code is available here.
Published at DZone with permission of Dominik Przybysz . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/testing-kotlin-with-spock-part-1-object?fromrel=true | CC-MAIN-2020-16 | en | refinedweb |
Subject: Re: [boost] Formal Review of Proposed Boost.Process library
From: Max Sobolev (macsmr_at_[hidden])
Date: 2011-02-19 10:55:49
(First, please, excuse me for my "english".)
This variant of the Boost.Process library should NOT (--never--) be
accepted.
The Boost.Process must be implemented as a DSEL (probably through the
Boost.Proto expression templates framework) with a nice (commonly known)
syntax like:
using boost::process;
using process::arg;
namespace fs = boost::filesystem;
process ls("ls"), grep("grep");
fs::path libs = "/usr/lib";
auto pipe = ls [--arg("reverse") % -arg('l') % libs] | grep ["^d"];
run(pipe);
// or:
// pipe();
// or:
// ls();
This approach provide a declarative, not an imperative programming style. (A
lot of work done by expression templates' inner mechanics.)
The library must support:
* i/o redirection
* process piping; [implicit pipes]
* named/unnamed pipe objects; [explicit pipes]
* runned processes surfing (boost::process::processes_iterator):
- CreateToolhelp32Snapshot()/Process32First()/Process32Next() API
functions in Windows
- /proc filesystem parsing in linux/some unix
* loaded shared libraries surfing (boost::process::modules_iterator):
- CreateToolhelp32Snapshot()/Module32First()/Module32Next() API functions
in Windows
- /proc filesystem parsing in linux/some unix
* child processes surfing (boost::process::children_iterator)
- parsing processes_iterator's range results
- /proc filesystem parsing in linux/(perhaps) some unix
- empty iterator range by default
* runned thread per process surfing (boost::process::threads_iterator)
- filtered results of
CreateToolhelp32Snapshot()/Thread32First()/Thread32Next() API functions in
Windows
- /proc filesystem parsing in linux (see /proc/[pid]/task filesystem
branch since linux 2.6.0-test6)
- empty iterator range by default
* runned thread surfing (boost::threads_iterator)
- CreateToolhelp32Snapshot()/Thread32First()/Thread32Next() API functions
in Windows
- FOR-EACH process IN system:
back_inserter(boost::process::threads_iterator(process),
boost::process::threads_iterator(),
threads);
* (any) process stats
* process creation
* daemonization
- through Service API on Windows
* process security and privacy aspects (probably this is subject for an
another separate library, part of which must be integrated with the
Boost.Process)
_________________________________
More examples:
using boost::process;
using process::arg;
using process::env;
namespace fs = boost::filesystem;
// echo $PATH # (%PATH% in Windows)
process("echo") [env("PATH")]
();
// or: run(process("echo") [env("PATH")]);
// ps -aux > ps.out
process ps("ps");
run(ps [-arg('a', 'u', 'x')] > "ps.out");
process::_this_ > "file.out" < "file.in"; // probably static_assert() on
Windows platform
process::_this_ >> "file.out" < "file.in";
std::cin >> ...; // read from "file.in"
std::cout << ...; // write to "file.out"
// grep -i -n searched_word file.in > file.out:
process grep("grep");
run(
grep [-arg('i') % -arg('n') % "searched_word" % "file.in"] >
"file.out");
// cat /etc/rc.d/xinetd | grep -v '^#' | sed '/^$/d' > file.out
process cat("cat"), grep("grep"), sed("sed");
fs::path conf("/etc/rc.d/xinetd");
auto pipe = cat [conf] | grep [-arg('v') % "^#"] | sed ["/^$/d"] >
"file.out";
pipe();
using namespace process::placeholders;
process rm("rm");
rm [arg("filename") % "non_existent"] > "file.out", _2 >& _1;
rm(); // or: run(rm);
_________________________________
Some mini-review (concerning interface design only) on the proposed variant
of the library:
* process::find_executable_in_path() is a redundant function.
His job must be done by default (when the filename() path's suffix supplied
only instead of the full/relative path), if it's supposed on the target
platform when the command shell gets the prog name from the user. In rare
case the user can manipulates paths through the env(ironment variables)
class (or put the full path explicitly), if default behavior is not
suitable.
By the way, a parameter and an return value must have been the
boost::filesystem::basic_path<>, not a std::string. Obviously, the library
must be integrated with the Boost.Filesystem.
* A "stream behavior" boost::function<> mechanism is not obvious and not
straightforward; unwarrantly hard to use.
* environment(-variables) and args-list are solid abstractions (with a sharp
behavior), not data structures.
* child process subtype is a wrong abstraction (his "child" property isn't
enough for his existence). Name it process simply. I.e. delete a child
subtype from an inheritance tree; In Unix environment (similar in Windows)
all processes, except INIT, have a parent: each process is a child almost
without exceptions.
* ----code excerpt from the User Guide--
int exit_code = child.wait();
#if defined(BOOST_POSIX_API)
if (WIFEXITED(exit_code))
exit_code = WEXITSTATUS(exit_code);
#endif
std::cout << exit_code << std::endl;
----------------------------------------
IFDEFs must be incapsulated in the wait() member function. Signal
terminating/stopping status of process can be abstracted into another terms,
which are clear enough for a non "signal-conformant" platforms.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/02/177326.php | CC-MAIN-2020-16 | en | refinedweb |
56486/how-to-import-json-file-to-mongodb-using-pyspark-or-python
I'm try to import json in the file to mongodb using pyspark after connection pyspark with mongodb, I'm using Ubuntu, my file in () I write it like this ('home/user/Downloads/newdb/hale.json'),
I want to try using this code but in JSON
df = spark.read.csv(path = '/home/ahmad/Downloads/newdb/hale.csv', header=True, inferSchema=True)
??
You can use the same format as in your question but replace csv with json.
df = spark.read.json(path = '/home/ahmad/Downloads/newdb/hale.json', header=True, inferSchema=True)
I have this code, and I want ...READ MORE
In Python 2, use urllib2 which comes ...READ MORE
You can get the changing time from ...READ MORE
Hi, good question.
It is a very simple ...READ MORE
I found this sample code:
import zipfile
import io
def ...READ MORE
suppose you have a string with a ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
subprocess.Popen(["hadoop", "fs", "-cat", "/path/to/myfile"], stdout ...READ MORE
down voteacceptedFor windows:
you could use winsound.SND_ASYNC to play them ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/56486/how-to-import-json-file-to-mongodb-using-pyspark-or-python | CC-MAIN-2020-16 | en | refinedweb |
And one additional note: I can get my add-on to work as it should if I copy my add-on dll to bin folder of Alloy. So somehow this is related to assembly scanning or something like that. But as I have not dived deep into to inner workings of .net / EPiServer assembly/reference scannning, I'm soooooooo puzzled with this.
Note 2: Probably I could get this working just by creating everything again from scratch, but then I would not learn anything, I really would like to understand what is happening here.
Personally, I'd first strip out that AddOn (remove folders, all copies of the bin, edit the packages config), rebuild (ie clean solution) and make sure the site runs fine. Then put it back in and try again.
It's a weird problem so I suspect you've got some kind of conflict or similar... maybe an old copy of that assembly that's gotten copied somewhere?
Also check to see if it works as a proper AddOn as well, without using the project-based deployment technique.
Hi Janne
Have a look at your view web.config : YOUR_SITE_ROOT\modules\YOUR_MODULE\Views\Web.config
Have a look at the compilation assemblies section (system.web->compilation->assemblies)
you should have here : <add assembly="your_addon_assembly" />
if you don't have it you will get that CS0246 error, so if it was missing then add it.
This is just a wild quess what is happening, another option might be that you have changed your addons assembly name and just forgot to change it here.
Hope this helps / is the cause to your error.
Damn you Antti, you saved my monday! It was definitely about the missing assembly setting under compilation. I had it there at some point, and just as you said, I changed my addon name, ended up with errors, then recreated the whole shit from scratch again and this time forgot the assembly setting. So stupid and frustrating. I will kiss you next time we see! :-)
Hi,
this is driving me crazy. I've been developing quite simple add-on that contain one block. I've setup the environment as described here: so I have my add-on project home under Alloy sample site "modules" folder. I'm also copying the addon dll to "modulesbin" of Alloy site on every build. All this was working nicely for a while, block type was visible on alloy edit view and view for the block was rendered nicely. But now suddenly, I started getting " CS0246: The type or namespace name 'MyAddOnNameSpace' could not be found (are you missing a using directive or an assembly reference?)" from the block view .cshtml compilation. The controller from the same assembly is executed nicely (set some breakpoints) but when compiling the view I get this error.
I really cannot understand why. My dll is in modulesbin, i have <scanAssembly forceBinFolderScan="true" probingPath="modulesbin" /> episerverframework.config of Alloy site and so forth.
Any ideas what could cause this? | https://world.episerver.com/forum/legacy-forums/Episerver-7-CMS/Thread-Container/2013/9/Are-you-missing-reference-when-running-block-view-created-as-on-addon/ | CC-MAIN-2020-16 | en | refinedweb |
On Fri, Dec 23, 2005 at 12:03:27PM +0800, Zhang, Yanmin wrote: > >>> +static struct sysfs_ops topology_sysfs_ops = { > >>> + .show = topology_show, > >>> + .store = topology_store, > >>> +}; > >>> + > >>> +static struct kobj_type topology_ktype = { > >>> + .sysfs_ops = &topology_sysfs_ops, > >>> + .default_attrs = topology_default_attrs, > >>> +}; > >>> + > >>> +/* Add/Remove cpu_topology interface for CPU device */ > >>> +static int __cpuinit topology_add_dev(struct sys_device * sys_dev) > >>> +{ > >>> + unsigned int cpu = sys_dev->id; > >>> + > >>> + memset(&cpu_topology_kobject[cpu], 0, sizeof(struct kobject)); > >>> + > >>> + cpu_topology_kobject[cpu].parent = &sys_dev->kobj; > >>> + kobject_set_name(&cpu_topology_kobject[cpu], "%s", "topology"); > >>> + cpu_topology_kobject[cpu].ktype = &topology_ktype; > >>> + > >>> + return kobject_register(&cpu_topology_kobject[cpu]); > >>> +} > >> > >>Can't you just use an attribute group and attach it to the cpu kobject? > >>That would save an array of kobjects I think. > As you know, current i386/x86_64 arch also export cache info under > /sys/device/system/cpu/cpuX/cache. Is it clearer to export topology > under a new directory than under cpu directly? No, the place in the sysfs tree you are putting this is just fine. I'm just saying that you do not need to create a new kobject for these attributes. Just use an attribute group, and you will get the same naming, without the need for an extra kobject (and the whole array of kobjects) at all. Does that make more sense? > >>> +static int __cpuinit topology_cpu_callback(struct notifier_block *nfb, > >>> + unsigned long action, void *hcpu) > >>> +{ > >>> + unsigned int cpu = (unsigned long)hcpu; > >>> + struct sys_device *sys_dev; > >>> + > >>> + sys_dev = get_cpu_sysdev(cpu); > >>> + switch (action) { > >>> + case CPU_ONLINE: > >>> + topology_add_dev(sys_dev); > >>> + break; > >>> +#ifdef CONFIG_HOTPLUG_CPU > >>> + case CPU_DEAD: > >>> + topology_remove_dev(sys_dev); > >>> + break; > >>> +#endif > >> > >>Why ifdef? Isn't it safe to just always have this in? > If no ifdef here, gcc reported a compiling warning when I compiled it > on IA64 with CONFIG_HOTPLUG_CPU=n. Then you should probably go change it so that CPU_DEAD is defined on non-smp builds, otherwise the code gets quite messy like the above :) thanks, greg k-h - To unsubscribe from this list: send the line "unsubscribe linux-ia64" in the body of a message to [email protected] More majordomo info at on Sat Dec 24 06:18:29 2005
This archive was generated by hypermail 2.1.8 : 2005-12-24 06:18:37 EST | http://www.gelato.unsw.edu.au/archives/linux-ia64/0512/16310.html | CC-MAIN-2020-16 | en | refinedweb |
I have updated to latest Django version 1.0.2 after uninstalling my old Django version.But now when I run django-admin.py I get the following error. How can I resolve this?
Traceback (most recent call last): File "C:\Python25\Lib\site-packages\django\bin\django-admin.py", line 2, in <module> from django.core import management ImportError: No module named django.core
You must make sure that django is in your PYTHONPATH.
To test, just do a
import django from a python shell. There should be no output:
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on Python 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import django >>>
If you installed django via
setuptools (
easy_install, or with the
setup.py included with django), then check in your
site-packages if the
.pth file (
easy-install.pth,
django.pth, ...) point to the correct folder.
HIH.
I have the same problem on Windows and it seems I've found the problem. I have both 2.7 and 3.x installed. It seems it has something to do with the associate program of .py:
In commandline type:
assoc .py
and the result is:
.py=Python.File
which means .py is associated with Python.File
then I tried this:
ftype Python.File
I got:
Python.File="C:\Python32\python.exe" "%1" %*
which means in commandline .py is associated with my Python 3.2 installation -- and that's why I can't just type "django-admin.py blah blah" to use django.
ALL you need to do is change the association:
ftype Python.File="C:\Python27\python.exe" "%1" %*
then everythong's okay! | https://pythonpedia.com/en/knowledge-base/312549/no-module-named-django-core | CC-MAIN-2020-16 | en | refinedweb |
49038/is-it-possible-to-concatenate-querysets
You can start with this-
from itertools import chain
then replace
myQuerySet = myQuerySet + myQuerySetTwoD[j]
with
BgpAsnList = chain(BgpAsnList,BgpAsnListTwoD[j])
A better idea is to use the ...READ MORE
Good question, glad you brought this up.
I ...READ MORE
You cannot have two methods with the ...READ MORE
You can use
np.zeros(4,3)
This will create a 4 ...READ MORE
To install Django, you can simply open ...READ MORE
Try to install an older version i.e., ...READ MORE
ALLOWED_HOSTS as in docs is quite self ...READ MORE
Go to your project directory
cd project
cd project
ALLOWED_HOSTS ...READ MORE
I assume you are asking about the ...READ MORE
my_list = [1,2,3,4,5,6,7]
len(my_list)
# 7
The same works for ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/49038/is-it-possible-to-concatenate-querysets | CC-MAIN-2020-16 | en | refinedweb |
#include <JoystickPort.hh>
Definition at line 22 of file JoystickPort.hh.
Definition at line 13 of file JoystickPort.cc.
A Connector belong to a certain class.
Only Pluggables of this class can be plugged in this Connector.
Implements openmsx::Connector.
Definition at line 26 of file JoystickPort.cc.
Get a description for this connector.
Implements openmsx::Connector.
Definition at line 21 of file JoystickPort.cc.
Definition at line 31 of file JoystickPort.cc.
References openmsx::Connector::getPlugged().
Referenced by plug(), and read().
This plugs a Pluggable in this Connector.
The default implementation is ok.
Reimplemented from openmsx::Connector.
Definition at line 36 of file JoystickPort.cc.
References getPluggedJoyDev(), openmsx::Connector::plug(), and openmsx::JoystickDevice::write().
Implements openmsx::JoystickPortIf.
Definition at line 42 of file JoystickPort.cc.
References getPluggedJoyDev(), and openmsx::JoystickDevice::read().
Definition at line 58 of file JoystickPort.cc.
References openmsx::Connector::getPluggingController().
Implements openmsx::JoystickPortIf.
Definition at line 47 of file JoystickPort.cc. | http://openmsx.org/doxygen/classopenmsx_1_1JoystickPort.html | CC-MAIN-2020-16 | en | refinedweb |
C++ supports in-class initialization of static integral constant members. It is nearly the same as enum, with the following difference (quoted from C++03 9.4.2.4 ([class.static.data])):
The member shall still be defined in a namespace scope if it is used in the program and the namespace scope definition shall not contain an initializer.
For example:
struct S {
static const int i = 0;
};
const int S::i; // the definition is required if S::i is used
int main()
{
return &S::i ? 1 : 0;
}
However, if you compile the above code in VC, you'll find that it links successfully even without the definition!
The magic is told in <xstddef> (under VSInstallFolder\VC\include)
#define _STCONS(ty, name, val) static const ty name = (ty)(val)
#if !defined(_MSC_EXTENSIONS) && !(defined(_DLL) && !defined(_STATIC_CPPLIB))
// Under /Ze, static const members are automatically defined, so provide a
// definition only under /Za, and only when __declspec(dllimport) not used.
#define _STCONSDEF(cls, ty, name) __declspec(selectany) const ty cls::name;
#else
#define _STCONSDEF(cls, ty, name)
#endif
This "evil" extension eases the usage of static constant member. Without it, you have to ensure that you define the constant exactly once somewhere. It is hard for header-only libary.
One side effect of this evil extension is, you should no longer define static constant member by yourselves. Otherwise it will conflict with the implitly defined one by the compiler and you'll get a linker error because of the violation of one definition rule (ODR). Like:
main.obj : error LNK2005: "public: static int const S::i" (?i@S@@2HB) already defined in other.obj
BTW, template is different. See the following code:
struct S {
static const int i;
};
const int S::i = 0; // redefinition error if the code is in a header and the header is included by multiple source files
template<typename T>
struct ST {
static const T i;
};
template<typename T>
const T ST<T>::i(0); // OK
The standard says: "There can be more than one definition of ..., or static data member of a class template, ... provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements ..." | https://blogs.msdn.microsoft.com/xiangfan/2010/03/03/vcs-evil-extension-implicit-definition-of-static-constant-member/ | CC-MAIN-2018-09 | en | refinedweb |
JavaScript in Mobile First Development explores the best of the APIs offered by the browser, but there are different browsers and so many devices. There are a lot of APIs to explore the device capability.
In Bootstrap 3, the JavaScript jQuery plugins for Bootstrap have fixed a lot of bugs. One of the biggest changes was the addition of namespace events to provide a no-conflict environment for Bootstrap JavaScript plugins.
In this chapter, you will learn how to enhance the behavior of your mobile-to-desktop experience. Get the best optimized JavaScript to achieve the right direction to your web application. Let's get started with Bootstrap JavaScript!
Bootstrap, as a frontend framework, ...
No credit card required | https://www.safaribooksonline.com/library/view/mobile-first-bootstrap/9781783285792/ch03.html | CC-MAIN-2018-09 | en | refinedweb |
I have a RoR app. And in app users can create posts. I've connected Posts table in my
routes.rb
resources :posts
link
@post.link = @post.theme.parameterize.underscore
post/1
@post.link
The technique is referred to as
slugifying and you need to do three things...
(1) create a new field called
slug in your
posts table.
(2) add this code to your
Post model...
after_validation :generate_slug def generate_slug self.slug = theme.parameterize.underscore end def to_param slug end
(3) finally, in your controllers where you have
find_post methods, rewrite it to be...
def find_post post.find_by_slug(params[:id]) end
The
to_param method in the model is how things like
post_path(@post) build the url... the
to_param if not replaced substituted the
id field but by writing your own
to_param method you can ensure that the
slug field is substituted instead. | https://codedump.io/share/0it4kIwoocYO/1/rails-how-to-change-automatic-generated-link | CC-MAIN-2018-09 | en | refinedweb |
Don't you hate writing import lines and not being sure how many dot-dot-slashes you need to get to the right place? Sure, you can look over at the project tree but there are so many files that you've got to scroll and scroll.
Oh no... Was that
'../../../../' or
'../../../../../'? [scrolls back down]. It drives me bananas.
Things are even worse if you restructure your project tree and some files move higher or lower. Or you copy-paste import sections between files that are at different levels in the project tree. Now your watch window is a sea of red.
But... it shouldn't matter if your dependency is up four directories vs. five. Unfortunately, that's just how module resolution works in Node.js.
Wouldn't it be great if you could forget about relative paths entirely?
You could have the deepest, most complex project structure… bring it on.
Instead of importing like this:
import { getUsers } from '../../../selectors/userSelectors';import { loadUsersRequest } from '../../../actions/userActions';import { ErrorMessage } from '../shared/messages';import { UserList } from './userList';import { logger } from '../../../../util/logger';
You could import like this:
import { getUsers } from 'selectors/userSelectors';import { loadUsersRequest } from 'actions/userActions';import { ErrorMessage } from 'ui/shared/messages';import { UserList } from 'ui/users/userList';import { logger } from 'logger';
No matter where your file sits in the tree. It Just Works.
The solution is to define the
paths and
baseUrl properties in the
compilerOptions section in your tsconfig.json file. These properties first showed up in TypeScript 2.0.
{ "compilerOptions": { "module": "commonjs", "moduleResolution": "node", "jsx": "react", "baseUrl": "src", "paths": { "actions/*": [ "app/actions/*" ], "selectors/*": [ "app/selectors/*" ], "ui/*": [ "app/ui/*" ], "logger": [ "util/logger" ], } }}
Notice that we can specify both an exact string (e.g. 'logger') and a path with an asterisk to match all subpaths (e.g. 'ui/*' matches 'ui/users/userList' and 'ui/shared/messages' etc).
You will also need to configure the
resolve.alias section in your webpack.config.js file because the TypeScript compiler doesn't rewrite the paths in the emitted JavaScript.
const path = require('path');// This helper function is not strictly necessary.// I just don't like repeating the path.join a dozen times.function srcPath(subdir) { return path.join(__dirname, "src", subdir);}module.exports = { resolve: { alias: { actions: srcPath('app/actions'), selectors: srcPath('app/selectors'), ui: srcPath('app/ui'), logger: srcPath('util/logger'), }, // ... }, // ...}; | https://decembersoft.com/posts/say-goodbye-to-relative-paths-in-typescript-imports/ | CC-MAIN-2018-09 | en | refinedweb |
Has Not Benefited From Peer ReviewA blog of random writing by William Grover.
On the occasion of John Portman's death
with Python's sys.getrefcount()
Python has a function called
sys.getrefcount()that tells you the reference count of an object. For example, the following code,
import sys print sys.getrefcount(24601)
has the output
3
That basically means that 3 things in Python currently have the integer value 24601. Why would Python keep track of how many things have a value that’s a particular integer? Since integers are one of the immutable data types in Python, Python can save computing resources by having all the variables that contain 24601 refer to the same data in the computer’s memory. For example, if we keep running the above and now assign 24601 to the variable
foolike this:
foo = 24601 print sys.getrefcount(24601)
the output is now
4
because we’ve now created another variable with a value of 24601. We didn’t dedicate any new memory to storing the value 24601; we just created a new reference to the same memory that already contained 24601. And if we change the value of
footo be something other than 24601, like this:
foo = 12345 print sys.getrefcount(24601)
the output is back to
3
meaning that Python got rid of the unneeded reference to 24601 (and made a new reference to 12345).
In my experience, 3 is the smallest number of references that I ever get out of
sys.getrefcount(). It’s greater than 1 because it includes temporary references created when
sys.getrefcountis called. Receiving 3 from
sys.getrefcount(number)basically means that
numberis used in your current code but isn’t used anywhere else in Python. So based on our experiments above, it looks like the integer 24601 isn’t used anywhere by default in Python.
What happens if we run
sys.getrefcount()on some smaller integers?
import sys print sys.getrefcount(1) print sys.getrefcount(2) print sys.getrefcount(3)
593 87 28
meaning that, in Python on my computer at this time, there are 593 references to the integer 1, 87 references to 2, and 28 references to 3! This means that these small integers are used elsewhere in Python (i.e., in the guts of the program); there are lots of internal Python variables that contain 1, and many (but fewer) variables that contain 2 or 3. I thought that this gave a pretty interesting insight into the guts of Python, and it raised some interesting questions: which integers are most commonly used in Python’s internals? Which ones are least common? Barn: 1917 and 2017
In an email sent to the faculty, staff, and students on May 26, 2017, UC Riverside Chancellor Kim Wilcox shared a 100-year-old photograph of The Barn. Today, The Barn is a campus lunch spot where we sometimes treat campus visitors to lunch. But 100 years ago, it was—well—a barn, a working barn on the grounds of the then-ten-year-old Citrus Experiment Station. Nearly 40 years would pass before The Barn and the land around it became the University of California, Riverside. Then The Barn became a noteworthy venue for live music with performers like No Doubt, Sublime, and Doc Watson appearing there. Today it’s mostly a lunch spot, but in his email, Chancellor Wilcox announced plans for renovating the barn next year and returning live entertainment to this historic venue.
Rolling around campus on scooters with my sons this Sunday, I decided to stop at The Barn and take a photograph from roughly the same spot as the 1917 photo was taken. After some trial-and-error in Photoshop, I made a little animation that compares the 1917 barn with the 2017 barn:… | http://groverlab.org/hnbfpr/ | CC-MAIN-2018-09 | en | refinedweb |
Histograms - 2: Histogram Equalization
Histograms Equalization
cv2.equalizeHist()
CLAHE (Contrast Limited Adaptive Histogram Equalization)
So to solve this problem, adaptive histogram equalization is used. In this, image is divided into small blocks called “tiles” (tileSize is 8x8 by default in OpenCV). Then each of these blocks are histogram equalized as usual. So in a small area, histogram would confine to a small region (unless there is noise). If noise is there, it will be amplified. To avoid this, contrast limiting is applied. If any histogram bin is above the specified contrast limit (by default 40 in OpenCV), those pixels are clipped and distributed uniformly to other bins before applying histogram equalization. After equalization, to remove artifacts in tile borders, bilinear interpolation is applied.
import numpy as np import cv2 img = cv2.imread('tsukuba_l.png',0) # create a CLAHE object (Arguments are optional). clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) cl1 = clahe.apply(img) cv2.imwrite('clahe_2.jpg',cl1)
Python+OpenCV学习(2)---图像的合并与拆分
OpenCV图像增强算法实现(直方图均衡化、拉普拉斯、Log、Gamma) | http://www.voidcn.com/article/p-pjjjareo-bro.html | CC-MAIN-2018-09 | en | refinedweb |
Subsets and Splits